client_max_body_size has no effect in Nginx proxy with Docker. Receiving 413 Payload too large
I’m using JWilder Nginx-proxy with Docker and encountering a “413 Payload too large” error when trying to upload files. Despite setting client_max_body_size in my Nginx configuration, the setting doesn’t seem to have any effect.
My Setup:
- Nginx proxy (JWilder/nginx-proxy) with Docker
- Django & Gunicorn as the application
- Cloudflare as the CDN
What I’ve Tried:
- Added `client_max_body_size 500m;` to the server block in my Nginx configuration
- Added `client_max_body_size 500m;` to the location block
- Added `client_max_body_size 500m;` to the http section in the main Nginx configuration
- Restarted the container, removed it, removed the image, and rebuilt it
- Increased file upload limits in Django & Gunicorn
My Nginx Configuration:
Main Configuration (http section):
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Server Configuration:
server {
    server_name mysite.io;
    access_log /var/log/nginx/access.log vhost;
    http2 on;
    listen 443 ssl;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/swyve.io.crt;
    ssl_certificate_key /etc/nginx/certs/swyve.io.key;
    set $sts_header "";
    if ($https) {
        set $sts_header "max-age=31536000";
    }
    add_header Strict-Transport-Security $sts_header always;
    include /etc/nginx/vhost.d/mysite.io;
    location / {
        proxy_pass http://swyve.io;
        set $upstream_keepalive true;
        include /etc/nginx/vhost.d/mysite.io_location;
    }
}
Virtual Host Configuration (/etc/nginx/vhost.d/mysite.io):
client_max_body_size 500m;

location /static/ {
    alias /app/staticfiles/;
    expires 1y;
    add_header Cache-Control "public, immutable";
}
Location Configuration (/etc/nginx/vhost.d/mysite.io_location):
client_max_body_size 500m;
proxy_request_buffering off;
proxy_buffering off;
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
send_timeout 600s;
Additional Information:
I noticed in the response headers a Cloudflare identifier: “cf-ray: 999aa8574833c251-VIE”. Could Cloudflare be limiting file uploads to 100MB on their free plan?
Workaround Found:
I created a new subdomain (upload.mysite.io) specifically for file uploads that bypasses Cloudflare. This resolved the issue, but I had to update my Nginx configuration to support the new subdomain.
Question:
Why isn’t client_max_body_size working in my Nginx proxy configuration, and how can I properly configure it to allow larger file uploads while using Cloudflare?
The issue with client_max_body_size not working in your Nginx proxy configuration is likely due to multiple factors, with Cloudflare being the primary culprit based on your setup. The 413 error occurs when incoming file requests exceed configured size limits, but the problem typically isn’t with Nginx itself when using the JWilder nginx-proxy.
Contents
- Cloudflare as the Primary Issue
- JWilder nginx-proxy Configuration
- Proper Nginx Configuration Methods
- Complete Solution Guide
- Alternative Approaches
- Verification Steps
Cloudflare as the Primary Issue
The presence of the cf-ray header in your response confirms that Cloudflare is intercepting your requests before they reach your Nginx server. Cloudflare’s free plan has a strict 100MB file upload limit, and this is likely what’s causing your 413 errors. According to Cloudflare documentation, they inspect and block requests that exceed their size limits before forwarding them to your origin server.
This explains why your Nginx configuration changes had no effect - the requests are never reaching your Nginx proxy. If Cloudflare’s limit is 100MB and you’re trying to upload files larger than that, Cloudflare will respond with a 413 error regardless of your Nginx settings.
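You can confirm Cloudflare's involvement from the response headers alone: the `cf-ray` header is a reliable marker that a request traversed Cloudflare. A small sketch (the header block below is a captured sample so it runs offline; with a live domain you would capture headers via `curl -sI https://mysite.io/`, where mysite.io stands in for your domain):

```shell
# Detect Cloudflare by the cf-ray response header.
# Sample captured headers; live: headers="$(curl -sI https://mysite.io/)"
headers='HTTP/2 413
server: cloudflare
cf-ray: 999aa8574833c251-VIE'

if printf '%s\n' "$headers" | grep -qi '^cf-ray:'; then
  echo "request passed through Cloudflare"
else
  echo "no Cloudflare in the path"
fi
```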
JWilder nginx-proxy Configuration
The JWilder nginx-proxy image inherits Nginx's default client_max_body_size of 1MB unless explicitly overridden, which is far below your requirements. The proxy uses dynamic configuration generation based on environment variables and mounted configuration files.
When you manually add client_max_body_size to your configuration files, these settings may be overwritten by the nginx-proxy's dynamic configuration system. The proxy regenerates configuration files at runtime based on:
- Environment variables set on the container
- Docker labels on your application containers
- Mounted configuration files
Proper Nginx Configuration Methods
Method 1: Environment Variables (Recommended)
The most reliable way to set upload limits with JWilder nginx-proxy is through environment variables. In your docker-compose.yml for the proxy service:
services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
    environment:
      CLIENT_MAX_BODY_SIZE: "500M"
      GLOBAL_MAX_BODY_SIZE: "500M"
Method 2: Mount Configuration File
Create a custom configuration file and mount it to the proxy:
# Create client_max_body_size.conf
echo "client_max_body_size 500M;" > client_max_body_size.conf

# Run the container with the config mounted
docker run -d \
  --name nginx-proxy \
  -v /var/run/docker.sock:/tmp/docker.sock \
  -v /path/to/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro \
  -p 80:80 -p 443:443 \
  jwilder/nginx-proxy
Method 3: Per-Host Overrides
nginx-proxy discovers application containers through the VIRTUAL_HOST environment variable (not Docker labels), and per-host upload limits go into a file under /etc/nginx/vhost.d named exactly after the host — the same mechanism your existing /etc/nginx/vhost.d/mysite.io file uses:

services:
  myapp:
    image: myapp
    environment:
      - VIRTUAL_HOST=mysite.io

Then place `client_max_body_size 500m;` in the vhost.d/mysite.io file mounted into the proxy.
Complete Solution Guide
Step 1: Verify Cloudflare Limits
Check your Cloudflare plan limits:
- Free plan: 100MB maximum file size
- Pro plan: 100MB maximum file size
- Business plan: 200MB maximum file size
- Enterprise plan: 500MB by default, with custom limits available on request
If you need to upload files larger than 100MB, you’ll need to either upgrade your Cloudflare plan or bypass it for upload endpoints.
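One quick way to locate the 100MB boundary is to generate payloads just under and over it and compare the responses; the file names and the upload URL below are placeholders:

```shell
# Create test payloads straddling Cloudflare's free-plan 100MB limit.
dd if=/dev/zero of=under_limit.bin bs=1M count=99 2>/dev/null
dd if=/dev/zero of=over_limit.bin bs=1M count=101 2>/dev/null

# Upload each (mysite.io/upload is a placeholder endpoint):
#   curl -s -o /dev/null -w '%{http_code}\n' -X POST \
#        --data-binary @over_limit.bin https://mysite.io/upload
# A 413 for over_limit.bin but a 2xx for under_limit.bin points at
# Cloudflare rather than Nginx, whose limit here is set to 500m.
ls -l under_limit.bin over_limit.bin
```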
Step 2: Configure nginx-proxy Properly
Modify your nginx-proxy configuration using environment variables:
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
    environment:
      CLIENT_MAX_BODY_SIZE: "500M"
      GLOBAL_MAX_BODY_SIZE: "500M"
      # Additional recommended settings
      PROXY_BUFFERING: "off"
      PROXY_REQUEST_BUFFERING: "off"
Step 3: Configure Specific Host Limits
If you need different limits for different hosts, identify each application container to the proxy with VIRTUAL_HOST/VIRTUAL_PORT environment variables (nginx-proxy reads these, not labels) and set the per-host limit in the matching vhost.d file:

services:
  myapp:
    image: myapp
    environment:
      - VIRTUAL_HOST=mysite.io
      - VIRTUAL_PORT=8000

# /etc/nginx/vhost.d/mysite.io (mounted into the proxy):
# client_max_body_size 500m;
Step 4: Test Configuration
After applying changes:
- Restart the nginx-proxy container: `docker restart nginx-proxy`
- Check the generated configuration: `docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf`
- Verify the `client_max_body_size` setting is present
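The generated-configuration check above can be scripted. A sketch using a captured config snippet so it runs offline; live, you would use `conf="$(docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf)"`:

```shell
# Verify the generated vhost config carries the expected limit.
# Sample snippet stands in for the live file.
conf='server {
    server_name mysite.io;
    client_max_body_size 500m;
}'

if printf '%s\n' "$conf" | grep -q 'client_max_body_size 500m;'; then
  echo "limit present"
else
  echo "limit missing"
fi
```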
Alternative Approaches
Cloudflare Bypass for Uploads
Your workaround of creating a subdomain that bypasses Cloudflare is the most practical solution for large file uploads. Configure it as follows:
- DNS Configuration:
  - Create an A record for upload.mysite.io pointing directly to your server IP (a CNAME can only point to another hostname, not an IP)
  - Disable the Cloudflare proxy (grey cloud instead of orange) for this subdomain
- nginx-proxy Configuration:

  services:
    nginx-proxy:
      # ... existing configuration
      environment:
        CLIENT_MAX_BODY_SIZE: "500M"
        GLOBAL_MAX_BODY_SIZE: "500M"
    upload-app:
      image: myapp
      environment:
        - VIRTUAL_HOST=upload.mysite.io
        - VIRTUAL_PORT=8000
      # Even larger per-host limit: client_max_body_size 1g; in vhost.d/upload.mysite.io
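To verify that the subdomain really bypasses Cloudflare, check where it resolves. The snippet uses an injected sample IP so it runs offline; live, you would use `ip="$(dig +short upload.mysite.io | head -n1)"`. The listed prefixes cover only two well-known Cloudflare ranges (104.16.0.0/13 and 172.64.0.0/13) and are illustrative, not exhaustive:

```shell
# Rough check: does the subdomain resolve to a Cloudflare edge IP?
# Live: ip="$(dig +short upload.mysite.io | head -n1)"
ip="203.0.113.10"   # sample origin IP (TEST-NET-3) for an offline run

case "$ip" in
  104.1[6-9].*|104.2[0-3].*|172.6[4-9].*|172.7[01].*)
    echo "still proxied by Cloudflare" ;;
  *)
    echo "resolves directly to origin" ;;
esac
```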
Cloudflare Rules
If you need to keep Cloudflare in front of your uploads but increase the limits:
- Upgrade to a Business or Enterprise plan
- On Enterprise, arrange a custom upload size limit with Cloudflare
- Note that the upload size limit is enforced per plan; ordinary page or origin rules cannot raise it
Verification Steps
To ensure your Nginx configuration is working correctly:
- Check nginx-proxy logs for rejected requests:

  docker logs nginx-proxy 2>&1 | grep " 413 "

- Verify the generated configuration:

  docker exec nginx-proxy nginx -T | grep client_max_body_size

- Test a file upload without Cloudflare:
  - Temporarily disable the Cloudflare proxy for your domain
  - Attempt the file upload
  - If it succeeds, Cloudflare was the limiting factor
- Check the response to a large upload:

  curl -i -X POST --data-binary "@large_file.zip" https://mysite.io/upload
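Reading the status code from that upload request can also be scripted; here with a hard-coded sample status so the sketch runs offline (live: `status="$(curl -s -o /dev/null -w '%{http_code}' -X POST --data-binary @large_file.zip https://mysite.io/upload)"`, with the file and URL as placeholders):

```shell
# Classify the upload result by HTTP status.
status=413   # sample value; capture from curl -w '%{http_code}' in practice

case "$status" in
  413) echo "payload rejected: raise the limit at Cloudflare or Nginx" ;;
  2??) echo "upload accepted" ;;
  *)   echo "unexpected status: $status" ;;
esac
```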
The key takeaway is that Cloudflare’s default limits are likely the root cause of your 413 errors. While properly configuring client_max_body_size in nginx-proxy is important, you’ll need to address the Cloudflare limitation either through bypass, upgrade, or custom rules to successfully handle larger file uploads.
Sources
- Nginx: 413 - Request Entity Too Large Error and Solution - nixCraft
- nginx - client_max_body_size has no effect - Stack Overflow
- How can I change the docker jwilder/nginx-proxy upload limits? - Stack Overflow
- docker-compose.yml and client_max_body_size · Issue #690 · nginx-proxy/nginx-proxy
- docker push error “413 Request Entity Too Large” - Stack Overflow
- Error: 413 “Request Entity Too Large” in Nginx with “client_max_body_size” - Medium
Conclusion
- Cloudflare’s 100MB limit is the primary barrier to your file uploads, not Nginx configuration
- Use environment variables (`CLIENT_MAX_BODY_SIZE`, `GLOBAL_MAX_BODY_SIZE`) in your nginx-proxy configuration for reliable limits
- Consider bypassing Cloudflare for upload-specific subdomains as the most practical solution
- Verify Cloudflare's involvement by checking for `cf-ray` headers or temporarily disabling the Cloudflare proxy
- Monitor both Cloudflare and Nginx logs to diagnose future upload issues
The most efficient long-term solution is likely a combination of proper nginx-proxy configuration with environment variables and strategic Cloudflare bypass for upload endpoints, as you’ve already discovered with your upload.mysite.io subdomain.