
Fix XAmzContentSHA256Mismatch Minio s3v4 Django

Resolve XAmzContentSHA256Mismatch when uploading multipart/form-data to Minio via django-storages S3Storage with s3v4. Upgrade Minio, use boto3 config like request_checksum_calculation='when_required', or fallback to signature_version='s3'. Includes debugging steps.


XAmzContentSHA256Mismatch when uploading multipart/form-data to Minio using django-storages S3Storage with s3v4 — how to fix?

I’m using django-storages’ S3Storage backend with a self-hosted Minio instance. My STORAGES settings are:

```python
STORAGES = {
    "default": {
        "BACKEND": "storages.backends.s3.S3Storage",
        "OPTIONS": {
            "endpoint_url": os.getenv("MINIO_ENDPOINT_URL"),
            "access_key": os.getenv("MINIO_ACCESS_KEY"),
            "secret_key": os.getenv("MINIO_SECRET_KEY"),
            "bucket_name": os.getenv("MINIO_BUCKET_NAME"),
            "region_name": os.getenv("MINIO_REGION_NAME"),
            "signature_version": os.getenv("MINIO_SIGNATURE_VERSION", "s3v4"),
        },
    },
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage",
    },
}
```

Context and observed behavior:

  • Region: eu-central-1 (which supports s3v4)
  • Endpoint, keys and bucket are correct and read from .env
  • GET/fetch works fine (files are accessible and generated signed URLs work)
  • Upload (POST multipart/form-data via Swagger UI) fails with this error:
```
ClientError at /files/uploaded-files/
An error occurred (XAmzContentSHA256Mismatch) when calling the PutObject operation: The provided 'x-amz-content-sha256' header does not match what was computed.
```

If I switch signature_version to "s3", both upload and fetch work, but I prefer s3v4 (more secure). What could cause the XAmzContentSHA256Mismatch when uploading multipart/form-data with s3v4 to Minio? Could this be a Minio configuration/version issue, a django-storages setting, middleware or WSGI/server altering the request body, chunked transfer encoding, or something else? What configuration changes or debugging steps will allow multipart/form-data uploads to succeed with s3v4?

XAmzContentSHA256Mismatch means the SHA‑256 payload hash the client signed (x-amz-content-sha256) doesn’t match what the server computed. With Minio + s3v4, that mismatch is most often caused by either a Minio multipart/checksum bug or a client-side signing/streaming issue (boto3/botocore, or how Django hands the file to the SDK). Quick fixes: upgrade Minio, ensure your code sends the exact bytes the SDK signs (seek the UploadedFile, avoid middleware that mutates the stream), or configure the boto3 client to relax full‑object checksum calculation; falling back to signature_version='s3' is a workable short-term workaround.


What XAmzContentSHA256Mismatch means

The S3v4 signing flow includes a payload hash header (x-amz-content-sha256). On each PUT/POST the client sends a value that represents the exact bytes it will upload; the server recomputes the SHA‑256 over the bytes it actually received and rejects the request if the two values differ. That rejection is surfaced as XAmzContentSHA256Mismatch: the header didn’t match the body the server saw.

Two common failure modes produce that mismatch: the client signed one payload but uploaded different bytes (file pointer at end, pre-read stream, or compression/proxy changed the data), or the server and client disagree about how checksums are applied (for example when streaming/chunked signing or multipart logic is buggy). You’ll see this when GETs succeed but PUTs fail with s3v4 because downloads don’t exercise payload hashing in the same way.
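In concrete terms, for a signed, non-streaming upload the x-amz-content-sha256 header is simply the hex SHA‑256 of the exact request body. A minimal sketch using only the standard library shows how any post-signing mutation of the bytes produces the mismatch:

```python
import hashlib

# The header value for a signed, non-streaming PutObject is the hex SHA-256
# of the exact request body. The server recomputes this over the bytes it
# actually receives; any difference triggers XAmzContentSHA256Mismatch.
body = b"hello world"
signed_hash = hashlib.sha256(body).hexdigest()

# If anything mutates the body after signing (middleware, compression, a
# partially consumed stream), the server's recomputed hash no longer matches.
mutated = body + b"\n"
server_hash = hashlib.sha256(mutated).hexdigest()
print(signed_hash == server_hash)  # False
```

A single appended byte is enough: the two digests are completely different, and the server rejects the request.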


Why XAmzContentSHA256Mismatch happens with Minio + django-storages + s3v4

Here are the concrete root causes that match the error pattern you described:

  • Minio multipart/checksum bug (server-side). Several reports show Minio mishandling full‑object SHA‑256 for multipart/streamed uploads when the client uses s3v4; the server computes a different checksum and rejects the upload. See the AWS re:Post summary describing this behavior and recommending an upgrade or workaround. (AWS re:Post)

  • Client library regressions (boto3 / botocore). Newer versions of boto3/botocore changed how checksums/payload signing are handled; multiple users reported PutObject failures and worked around it by downgrading or changing client options. (boto3 issue)

  • Middleware / WSGI / proxy altering the request body. If Django or a middleware reads or mutates the uploaded file (or you accidentally read request.FILES before handing the object to storage and don’t reset the file pointer), the bytes sent to Minio will differ from what the SDK signed.

  • Streaming / chunked transfer mode vs server support. S3v4 supports streaming signatures (STREAMING‑AWS4‑HMAC‑SHA256‑PAYLOAD). If the SDK switches to a streaming signature or chunked upload and Minio’s codepath doesn’t match the checksum logic, you get mismatches. Similar symptoms are discussed in SDK issue threads showing differing ClientComputedContentSHA256 vs S3ComputedContentSHA256. (aws-sdk-js issue)

  • Multipart upload semantics vs HTTP multipart/form-data confusion. Browser multipart/form-data is the HTTP upload encoding to your Django app; S3 multipart upload is a separate protocol between client and S3/Minio. Large files may trigger S3 multipart flows on the client side (or the SDK), exposing server-side multipart checksumming bugs. Community reports (s3fs, fsspec) show the error across different S3-compatible providers, reinforcing that both client and server can be responsible. (s3fs issue)
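The file-pointer failure mode in particular is easy to reproduce without any AWS machinery. A minimal sketch using io.BytesIO as a stand-in for Django's UploadedFile:

```python
import hashlib
import io

# Stand-in for request.FILES['file']: any file-like object with a cursor.
uploaded = io.BytesIO(b"file contents")

# A "validation" step reads the stream and leaves the cursor at the end...
checksum = hashlib.sha256(uploaded.read()).hexdigest()

# ...so a later upload reads zero bytes: the SDK would sign and send b"",
# not the bytes you validated.
print(uploaded.read())  # b''

# Resetting the pointer restores the real payload.
uploaded.seek(0)
print(uploaded.read())  # b'file contents'
```

This is exactly why reading request.FILES for validation or hashing, then handing the same object to storage without seek(0), produces bytes that differ from what was signed.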


Quick workarounds (fast)

  • Upgrade Minio to the latest stable release first. Many reports indicate the server-side bug was fixed in recent Minio versions; upgrading often resolves the mismatch immediately. If you run Minio in Docker: pull the latest image and restart the container.

  • Use signature_version = 's3' temporarily. You already observed PUTs succeed with 's3'; that avoids the s3v4 checksum codepath. Not ideal long-term (SigV2 is older and less secure) but useful for unblocking.

  • Downgrade boto3 as a temporary client-side workaround. Some users rolled back to earlier boto3/botocore versions and regained working uploads while tracking fixes. See the boto3 issue thread for examples. (boto3 issue)

  • Avoid triggering S3 multipart on the client: force single-part uploads for testing (smaller files, or raise multipart_threshold) to check whether multipart upload is the trigger.
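To test the last point locally, you can generate files on either side of boto3's default multipart cutoff (TransferConfig's multipart_threshold defaults to 8 MB) and upload each through your normal path. A sketch — the file names and the 1 KB margins are arbitrary choices:

```python
import os

# boto3's TransferConfig defaults to an 8 MB multipart_threshold: a file just
# under it is uploaded in a single request, one just over it goes multipart.
THRESHOLD = 8 * 1024 * 1024

for name, size in [("single-part.bin", THRESHOLD - 1024),
                   ("multi-part.bin", THRESHOLD + 1024)]:
    with open(name, "wb") as f:
        f.write(os.urandom(size))

# Upload both via your normal path (Django view or s3.upload_file). If only
# multi-part.bin triggers XAmzContentSHA256Mismatch, the S3 multipart
# codepath is the culprit.
```

If only the larger file fails, you have isolated the bug to multipart handling and can focus on the Minio upgrade or the multipart_threshold workaround.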


Recommended permanent fixes (upgrade Minio, boto3 config)

  1. Upgrade Minio (first thing to try)
  • Upgrade to the most recent stable Minio release and re-test. If you run Minio via Docker Compose: update the image tag, docker-compose pull, and restart. After upgrading, retry your Django upload flow — many users reported the mismatch disappears.
  2. Configure boto3 to change checksum behavior (client-side workaround)
  • Recent client libraries allow relaxing/deferring full-object checksum calculation. The open-webui community fixed a similar issue by adding these options when creating the S3 client (you’ll need a sufficiently new boto3/botocore that understands these flags):
```python
from botocore.config import Config
import boto3

cfg = Config(
    signature_version='s3v4',
    request_checksum_calculation='when_required',
    response_checksum_validation='when_required',
)

s3 = boto3.client(
    's3',
    endpoint_url='https://minio.example:9000',
    aws_access_key_id='AKIA...',
    aws_secret_access_key='secret',
    region_name='eu-central-1',
    config=cfg,
)

# simple put to test
with open('smallfile.bin', 'rb') as f:
    s3.put_object(Bucket='my-bucket', Key='test.bin', Body=f)
```
  • That exact pattern was reported to resolve the error in real projects (see the open-webui issue). (open-webui issue)
  3. Ensure you’re sending the exact bytes the SDK signs
  • In Django, always reset the file pointer before handing the file to storage:
```python
uploaded = request.FILES['file']
uploaded.seek(0)  # ensure the SDK reads from the start
default_storage.save(name, uploaded)
```
  • If you read the file earlier (validation, hashing, virus scan), open/seek it again or use a fresh file-like object. Memory-backed upload objects (InMemoryUploadedFile) and TemporaryUploadedFile behave differently — if the file object lacks a reliable length, the SDK may stream in chunked mode and trigger different signing behavior.
  4. Force single‑part uploads while debugging
  • Use boto3.s3.transfer.TransferConfig to raise multipart_threshold so your test uploads don’t use the S3 multipart protocol (helps identify whether multipart triggers the bug):
```python
from boto3.s3.transfer import TransferConfig

transfer_config = TransferConfig(multipart_threshold=100 * 1024 * 1024)  # 100 MB
s3.upload_file('smallfile.bin', 'my-bucket', 'key', Config=transfer_config)
```
  5. If django-storages doesn’t accept a botocore Config via OPTIONS, subclass the storage
  • If your version of django-storages won’t pass a botocore Config through, create a small custom storage that builds the boto3 client with the Config you want and uses it for uploads. The exact internals vary by django‑storages version — adjust the override to match your backend.

Debugging checklist & reproducible tests

Run these in order to isolate the root cause:

  1. Reproduce with a minimal Python script (boto3 client with same endpoint/creds). If the standalone script fails, the issue is client/server (not Django). Use the sample above.

  2. Try small vs large files. Does a tiny file succeed while a big file fails? If only large files fail, multipart S3 protocol is suspect.

  3. Enable detailed client logging:
```python
import logging, boto3
logging.basicConfig(level=logging.DEBUG)
boto3.set_stream_logger('botocore', level='DEBUG')
```
Inspect the logs for payload hash values and signing details.

  4. Use the AWS CLI with --debug to reproduce and capture the request/response:
    aws --endpoint-url https://minio.example:9000 s3 cp smallfile.bin s3://my-bucket/key --debug

  5. Inspect the Minio server logs (docker logs / journalctl). Look for lines showing the computed vs client-provided checksum.

  6. Check for middleware/proxies that may alter the bytes:
  • Disable gzip/compression middleware temporarily.
  • If you have a reverse proxy (nginx) between Django and Minio, confirm it isn’t modifying request bodies.

  7. Validate the file object passed from Django:
  • Confirm you call uploaded.seek(0) before save.
  • If you pass file.read() into storage, ensure the bytes used for upload are identical to what you hashed earlier.

  8. Try the boto3 Config workaround (request_checksum_calculation/response_checksum_validation). If it fixes the issue, you know the problem is in the payload-checksumming codepaths.

  9. If all else fails, upgrade/downgrade Minio and boto3 one at a time to identify the regression window. Community threads show both server- and client-side regressions have caused this. (boto3 issue, AWS re:Post)
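When comparing against Minio's server logs and the botocore debug output, it helps to compute the expected payload hash locally. A small helper, assuming a single-part, non-chunked upload (streaming uploads use a different sentinel value instead of the body hash):

```python
import hashlib

def payload_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hex SHA-256 of a file's bytes — the value a SigV4 client puts in
    x-amz-content-sha256 for a single-part, non-streaming PutObject."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

If this local value matches the client-computed hash in the boto3 debug logs but not the server-computed one in Minio's logs, the bytes were altered in transit (or Minio's checksum codepath is at fault); if the client-side hash already differs, the problem is in your application or the SDK.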


Django‑storages configuration and code examples

Your current STORAGES settings look fine for endpoint/credentials. Add these practical checks and changes:

  • Make the SDK behaviour explicit in settings (if your django‑storages version supports it):
```python
# settings.py (concept)
AWS_ACCESS_KEY_ID = os.getenv("MINIO_ACCESS_KEY")
AWS_SECRET_ACCESS_KEY = os.getenv("MINIO_SECRET_KEY")
AWS_STORAGE_BUCKET_NAME = os.getenv("MINIO_BUCKET_NAME")
AWS_S3_REGION_NAME = os.getenv("MINIO_REGION_NAME", "eu-central-1")
AWS_S3_ENDPOINT_URL = os.getenv("MINIO_ENDPOINT_URL")
AWS_S3_SIGNATURE_VERSION = "s3v4"
```
  • When processing an incoming multipart/form-data request, always reset the file pointer:
```python
def post(self, request):
    f = request.FILES['file']
    f.seek(0)
    default_storage.save(f.name, f)
```
  • If you need to inject a botocore.Config, either upgrade django-storages to a version that lets you pass a Config object through OPTIONS, or subclass the storage backend to build a boto3 client with the Config (see “Recommended permanent fixes”).

  • For immediate testing, write a standalone script that uses the same endpoint and Config and calls put_object / upload_fileobj — that removes the Django variables from the equation.


FAQ

Q — Why does signature_version='s3' work but s3v4 fails?
A — SigV2 ('s3') avoids the s3v4 payload hashing codepath that’s triggering the server-side checksum validation. That’s why switching to 's3' often bypasses the mismatch; it doesn’t fix the underlying checksum disagreement.

Q — Is this definitely a Minio bug?
A — Not always. There are reports implicating Minio multipart handling and also reports implicating client (boto3/botocore) regressions. Try the client-only reproduction first; if a minimal boto3 script fails against the server, check Minio logs and upgrade Minio.

Q — Could Django middleware be the culprit?
A — Yes. If any middleware or code reads/modifies the file-like before upload and you don’t reset the pointer, the bytes uploaded can differ from what the client signed — causing the mismatch.


Conclusion

XAmzContentSHA256Mismatch during multipart/form-data uploads to Minio with s3v4 usually comes down to either a Minio multipart/checksum server bug or a client/streaming mismatch (boto3 behavior or an altered file stream). Start by upgrading Minio and reproducing the upload with a minimal boto3 script; if you can’t upgrade immediately, use the boto3 config workaround (request_checksum_calculation/response_checksum_validation) or temporarily fall back to signature_version='s3' while you fix the root cause. Finally, make sure your Django handler doesn’t alter the uploaded file (call file.seek(0) before save) so the bytes the SDK signs are exactly the bytes that reach Minio.
