Upload 4GB Files to MinIO with mc cp: Fix Timeout Errors
Fix 'context deadline exceeded' when uploading large files (~4GB) to MinIO using mc cp. Tune multipart part-size (64-128MiB), parallelism, proxy timeouts (Nginx 30m), and server configs for reliable uploads.
How to upload large files (~4GB) to MinIO using mc cp from the terminal without ‘context deadline exceeded’ error during multipart upload? What timeout, part-size, and server/proxy configurations are required for successful large file uploads?
Use tuned multipart settings and raised proxy/server timeouts when you upload ~4GB files to MinIO: increase part size (aim 64–128MiB), keep multipart enabled, limit client concurrency, and extend reverse‑proxy timeouts to minutes to avoid the “context deadline exceeded” error during multipart upload. Prefer mc put when you need explicit --part-size control (or run mc cp with --max-workers and --limit-upload); also apply Nginx proxy settings like proxy_read_timeout 30m and client_max_body_size 16G. Examples and exact values below.
Contents
- How to upload large files to MinIO with mc cp
- Why “context deadline exceeded” happens during multipart upload
- Part-size and parallelism — mc put vs mc cp options
- Server and proxy configuration (Nginx example)
- Commands and examples (mc put and mc cp)
- Troubleshooting & monitoring steps
- Sources
- Conclusion
How to upload large files to MinIO with mc cp
Short checklist (do these in order):
- Authenticate your client: mc alias set <name> <URL> <ACCESSKEY> <SECRETKEY>.
- Don’t disable multipart for large objects; keep multipart enabled for files around 4GB.
- Prefer explicit part-size control: use mc put with --part-size if you need deterministic part counts, and see the mc cp docs for the flags you can tune: https://min.io/docs/minio/linux/reference/minio-mc/mc-cp.html.
- Pick a part size that yields a modest number of parts (aim for 32–128 parts for 4GB), moderate parallelism (4–8), and a safe upload rate cap (--limit-upload) so the network or proxy doesn’t drop connections.
- Raise reverse-proxy (and any load-balancer) timeouts to minutes and increase client_max_body_size so the proxy doesn’t reject or close long uploads.
That’s the short plan — below I’ll explain why, show exact commands, and include an Nginx example.
Why “context deadline exceeded” happens during multipart upload
What does that error mean? It’s a Go-style context error (as surfaced by net/http and gRPC clients): an HTTP request or RPC exceeded the client or server deadline and was canceled. For multipart uploads the failure commonly appears at two points:
- During part uploads when network or server is slow, and a proxy or client-side deadline kills the request.
- During the multipart completion phase, when the server is merging many uploaded parts and the final CompleteMultipartUpload call stalls (this is documented in MinIO issues).
Real-world reports show throughput can decay dramatically when the number of parts gets very large (hundreds or thousands), and completion can block long enough to trigger deadline errors. See the MinIO discussion and issues that describe throughput drops for many small parts and completion-time blocking when part counts grow large: https://github.com/minio/minio/issues/7206 and https://github.com/minio/minio/issues/3223. In short: too many tiny parts + default timeouts = higher chance of “context deadline exceeded.”
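To confirm which request is being cut off, run the transfer once with mc’s global --debug flag, which logs every HTTP request and response. A minimal sketch, assuming the alias, bucket, and file names used later in this article (exact log format varies by mc version):
# Capture the full HTTP exchange; part uploads carry ?partNumber=...&uploadId=..., and the
# final completion is a POST with ?uploadId=..., so you can see which call stalls or times out.
mc --debug cp ~/bigfile-4GB.bin myminio/mybucket/ > mc-debug.log 2>&1
grep -iE "partNumber|uploadId|deadline" mc-debug.log | tail -n 50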
Part-size and parallelism — mc put vs mc cp options
Two important knobs: part size (how many bytes per multipart part) and concurrency (how many parts uploaded in parallel).
- mc put supports explicit --part-size (-s) and --parallel (-P) options, so you can control the part size and the number of parallel part uploads; the default --part-size is 16MiB. See the mc put reference: https://min.io/docs/minio/linux/reference/minio-mc/mc-put.html.
- mc cp exposes concurrency controls such as --max-workers (threads) and rate caps such as --limit-upload. It also has --disable-multipart, but that is only for small files; do not use it for 4GB unless you know your server supports single PUTs that large. See the mc cp flags: https://min.io/docs/minio/linux/reference/minio-mc/mc-cp.html.
How to pick numbers (practical guidance):
- Aim for part sizes that produce a small-to-moderate number of parts. For 4 GiB:
- 16 MiB parts → 4 GiB / 16 MiB = 256 parts (ok, but higher overhead)
- 64 MiB parts → 4096 MiB / 64 MiB = 64 parts (better)
- 128 MiB parts → 32 parts (even better for fewer completion operations)
- The GitHub reports show performance improves when part size is increased for large uploads (fewer parts → less server-side overhead): https://github.com/minio/minio/issues/7206.
Recommended starting values for ~4GB:
- --part-size 64MiB or --part-size 128MiB
- --parallel 4–8 (for mc put) or --max-workers 4–8 (for mc cp)
- --limit-upload set to a sensible cap (e.g., --limit-upload 500M or 1G) if you share the network or have intermediate proxies that choke on bursts
Don’t use --disable-multipart for 4GB unless you explicitly tested that single-PUT transfers of that size work through every proxy and gateway on the path (most setups use multipart by default because it’s safer and resumable).
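To sanity-check the part count before you upload, a quick shell calculation is enough. A minimal sketch, assuming an illustrative file path and GNU stat (use stat -f%z on macOS):
# Ceiling-divide the file size by the candidate part size to get the multipart part count.
FILE=~/bigfile-4GB.bin    # illustrative path
PART_MIB=128              # candidate --part-size in MiB
SIZE_MIB=$(( ( $(stat -c%s "$FILE") + 1048575 ) / 1048576 ))
PARTS=$(( (SIZE_MIB + PART_MIB - 1) / PART_MIB ))
echo "${SIZE_MIB} MiB at ${PART_MIB} MiB per part -> ${PARTS} parts"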
Server and proxy configuration (Nginx example)
Reverse‑proxies commonly cause deadline errors because they buffer requests or have short idle timeouts. If you front MinIO with Nginx, use these settings (example adapted from community guidance):
client_max_body_size 16384M;
proxy_buffering off;
proxy_request_buffering off;
proxy_connect_timeout 30m;
proxy_read_timeout 30m;
proxy_send_timeout 30m;
Why each matters:
- client_max_body_size: raises allowed request body size so the proxy doesn’t reject the stream.
- proxy_buffering / proxy_request_buffering off: stream upload directly to upstream; avoid Nginx trying to buffer a multi-gigabyte request to disk.
- proxy_*_timeout (connect/read/send): increase to minutes so long uploads or a long server-side completion phase aren’t cut off prematurely.
The community thread that discusses these exact Nginx settings is here: https://stackoverflow.com/questions/78724154/minio-file-upload-freeze-for-files-larger-than-100mb-with-nginx-on-raspberry-pi. Also remember: any load‑balancer or cloud ingress (ALB/NLB, API gateway) has its own idle/connection timeout — raise that too.
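For context, here is a minimal sketch of a complete Nginx site config that applies those directives in front of MinIO. The server name, backend address, and config path are assumptions for illustration; adapt them and add your TLS settings before use:
# Write an illustrative site config and reload Nginx (paths and hostnames are assumptions).
cat > /etc/nginx/conf.d/minio-upload.conf <<'EOF'
server {
    listen 80;
    server_name minio.example.com;

    client_max_body_size 16384M;           # allow multi-gigabyte request bodies

    location / {
        proxy_buffering off;               # stream responses back to the client
        proxy_request_buffering off;       # stream the upload instead of spooling it to disk
        proxy_connect_timeout 30m;
        proxy_read_timeout 30m;
        proxy_send_timeout 30m;
        proxy_set_header Host $host;       # preserve the original Host header (commonly needed for S3 signature checks)
        proxy_pass http://127.0.0.1:9000;  # assumed MinIO listen address
    }
}
EOF
nginx -t && nginx -s reload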
On the MinIO side, keep an eye on server CPU/disk performance when merging parts; very small parts create lots of I/O and temporary objects, causing the final completion request to take longer (see https://github.com/minio/minio/issues/3223).
Commands and examples (mc put and mc cp)
Assuming you already set an alias named myminio:
- Set alias (example)
mc alias set myminio https://minio.example.com ACCESSKEY SECRETKEY
- Recommended: use mc put with explicit part size
mc put ~/bigfile-4GB.bin myminio/mybucket/ --part-size 128MiB --parallel 8
- This creates ~32 parts for a 4GiB file, reducing part bookkeeping and the chance of completion delays.
- If you must use mc cp, tune workers and upload limit
mc cp --max-workers 8 --limit-upload 1G ~/bigfile-4GB.bin myminio/mybucket/
- Keep multipart enabled (don’t add --disable-multipart).
- Reduce --max-workers if the server or proxy struggles; lower parallelism is often more stable than pushing many simultaneous parts.
- If uploads still fail during completion:
- Try increasing part size to 256MiB (fewer parts).
- Temporarily reduce --parallel / --max-workers to 4 to give the server breathing room (a conservative retry is sketched below).
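For example, a more conservative retry with larger parts and lower concurrency might look like this (file, alias, and bucket names are the same illustrative ones used above):
# Conservative retry: 256MiB parts (~16 parts for a 4GiB file) and reduced parallelism.
mc put ~/bigfile-4GB.bin myminio/mybucket/ --part-size 256MiB --parallel 4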
Compute part count quickly:
- 4 GiB = 4096 MiB
- Parts at 64MiB → 4096 / 64 = 64 parts
- Parts at 128MiB → 4096 / 128 = 32 parts
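After a transfer finishes, it’s worth confirming the stored object has the expected size. A quick check, reusing the alias, bucket, and file names from the examples above:
# Compare the local byte size with what the server reports for the stored object.
ls -l ~/bigfile-4GB.bin
mc stat myminio/mybucket/bigfile-4GB.bin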
Troubleshooting & monitoring steps
If you still hit “context deadline exceeded”:
- Check MinIO server logs at the time of the failed CompleteMultipartUpload — look for delays or errors while merging parts.
- Check proxy/nginx logs and any load‑balancer timeouts. If the proxy closed the connection you’ll see timeouts there.
- Reduce the number of parts (increase --part-size) and retry; many reports show this fixes throughput collapse and completion timeouts: https://github.com/minio/minio/issues/7206.
- Keep --disable-multipart off for 4GB unless you’re absolutely sure the entire chain supports single PUTs larger than the object (a single S3 PUT has its own limits).
- Try an alternative client to isolate client-side vs server-side problems (some users tried AWS CLI or rclone as a diagnostic path; see https://stackoverflow.com/questions/79866905/how-to-put-a-large-files-in-minio-using-terminal and https://github.com/minio/minio/issues/9608).
- If completion stalls repeatedly even with few parts, consider upgrading MinIO — there have been fixes and discussions about completion latency in the project history: https://github.com/minio/minio/issues/3223 and https://github.com/minio/minio/discussions/17405.
A quick sanity test: upload the same file to MinIO on the same LAN (no proxy) — if that works, the issue is almost certainly your proxy/load‑balancer or an intermediate network device.
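If you administer the MinIO server, you can also watch API calls live while retrying the upload from another terminal; mc admin trace shows the multipart calls (new upload, part uploads, completion) and how long each takes. Flags differ a little between mc releases, so treat this as a sketch:
# Stream live API activity for the alias; watch for slow PutObjectPart / CompleteMultipartUpload entries.
mc admin trace myminio
# Recent mc releases can filter to failed requests only:
mc admin trace --errors myminio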
Sources
- mc cp — MinIO Object Storage for Linux
- mc put — MinIO Object Storage for Linux
- Multipart Upload Throughput Approaches Zero For Large Files (GitHub issue)
- Multi-part upload completion very slow and eventually timed out (GitHub issue)
- How to put a large files in minio using terminal — Stack Overflow
- Minio gateway s3 multipartupload failing with rclone copy (GitHub issue)
- Minio file upload freeze for files larger than 100MB with nginx on raspberry pi — Stack Overflow
- Minio multipart upload API did not merge parts in server when completed (GitHub discussion)
Conclusion
In short: tune multipart part size upward (64–128MiB), moderate concurrency (--parallel / --max-workers ≈ 4–8), cap upload rate if needed, and raise proxy/server timeouts (Nginx: proxy_*_timeout ≈ 30m, large client_max_body_size). Use mc put when you want explicit --part-size, and keep multipart enabled to avoid “context deadline exceeded” during multipart upload. With those settings MinIO mc uploads of ~4GB become reliable in most environments.