Fix client_max_body_size Not Working in Nginx Docker Proxy

Complete guide to fix client_max_body_size not working in Nginx Docker proxy. Resolve 413 errors with proper configuration and Cloudflare bypass solutions.

Question

client_max_body_size has no effect in Nginx proxy with Docker. Receiving 413 Payload too large

I’m using JWilder Nginx-proxy with Docker and encountering a “413 Payload too large” error when trying to upload files. Despite setting client_max_body_size in my Nginx configuration, the setting doesn’t seem to have any effect.

My Setup:

  • Nginx proxy (JWilder/nginx-proxy) with Docker
  • Django & Gunicorn as the application
  • Cloudflare as the CDN

What I’ve Tried:

  1. Added client_max_body_size 500m; to the server block in my Nginx configuration
  2. Added client_max_body_size 500m; to the location block
  3. Added client_max_body_size 500m; to the http section in the main Nginx configuration
  4. Restarted the container, removed it, removed the image, and rebuilt it
  5. Increased file upload limits in Django & Gunicorn

My Nginx Configuration:

Main Configuration (http section):

nginx
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" ' 
                      '$status $body_bytes_sent "$http_referer" ' 
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

Server Configuration:

nginx
server {
    server_name mysite.io;
    access_log /var/log/nginx/access.log vhost;
    http2 on;
    listen 443 ssl;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_certificate /etc/nginx/certs/swyve.io.crt;
    ssl_certificate_key /etc/nginx/certs/swyve.io.key;
    set $sts_header "";
    if ($https) {
        set $sts_header "max-age=31536000";
    }
    add_header Strict-Transport-Security $sts_header always;
    include /etc/nginx/vhost.d/mysite.io;
    location / {
        proxy_pass http://swyve.io;
        set $upstream_keepalive true;
        include /etc/nginx/vhost.d/mysite.io_location;
    }
}

Virtual Host Configuration (/etc/nginx/vhost.d/mysite.io):

nginx
client_max_body_size 500m;

location /static/ {
    alias /app/staticfiles/;
    expires 1y;
    add_header Cache-Control "public, immutable";
}

Location Configuration (/etc/nginx/vhost.d/mysite.io_location):

nginx
client_max_body_size 500m;

proxy_request_buffering off;
proxy_buffering off;

proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
send_timeout 600s;

Additional Information:

I noticed in the response headers a Cloudflare identifier: “cf-ray: 999aa8574833c251-VIE”. Could Cloudflare be limiting file uploads to 100MB on their free plan?

Workaround Found:

I created a new subdomain (upload.mysite.com) specifically for file uploads that bypasses Cloudflare. This resolved the issue, but I had to update my Nginx configuration to support the new subdomain.

Question:

Why isn’t client_max_body_size working in my Nginx proxy configuration, and how can I properly configure it to allow larger file uploads while using Cloudflare?

NeuroAgent

The client_max_body_size problem in your setup is likely caused by more than one layer, but Cloudflare is the primary culprit. A 413 is returned by whichever component first decides the request body exceeds its limit, and when the JWilder nginx-proxy sits behind Cloudflare, that component is usually Cloudflare rather than Nginx itself.

Cloudflare as the Primary Issue

The presence of the cf-ray header in your response confirms that Cloudflare is intercepting your requests before they reach your Nginx server. Cloudflare’s free plan has a strict 100MB file upload limit, and this is likely what’s causing your 413 errors. According to Cloudflare documentation, they inspect and block requests that exceed their size limits before forwarding them to your origin server.

This explains why your Nginx configuration changes had no effect - the requests are never reaching your Nginx proxy. If Cloudflare’s limit is 100MB and you’re trying to upload files larger than that, Cloudflare will respond with a 413 error regardless of your Nginx settings.
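
One quick way to confirm where the 413 is coming from is to push an oversized request through the proxied hostname and inspect the response headers. This is only a sketch: the /upload path and the dummy payload below are placeholders for your real endpoint and file.

bash
# Create a ~150MB dummy payload, larger than Cloudflare's free-plan limit.
dd if=/dev/zero of=/tmp/test-150m.bin bs=1M count=150

# Send it and print only the status line and the interesting headers.
curl -s -o /dev/null -D - \
  -X POST --data-binary "@/tmp/test-150m.bin" \
  https://mysite.io/upload | grep -iE '^(HTTP|server|cf-ray)'
# A 413 together with "server: cloudflare" and a "cf-ray" header means the
# request was rejected at the Cloudflare edge and never reached nginx-proxy.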

JWilder nginx-proxy Configuration

The JWilder nginx-proxy image ships with a low upload limit by default (Nginx's own built-in default for client_max_body_size is only 1MB), far below your 500m requirement. That default applies unless it is explicitly overridden somewhere Nginx actually reads. The proxy builds its configuration dynamically from the Docker environment and from mounted configuration files.

When you add client_max_body_size by editing files inside the running container, those edits can be wiped out: nginx-proxy regenerates /etc/nginx/conf.d/default.conf (via docker-gen) whenever proxied containers start or stop. The generated configuration is driven by:

  • Environment variables on your application containers (VIRTUAL_HOST, VIRTUAL_PORT, and related settings)
  • Environment variables set on the proxy container itself
  • Configuration files mounted from the host into /etc/nginx/conf.d/ and /etc/nginx/vhost.d/

Only the mounted files persist across regeneration, which is why the mounted-file approaches below are the most reliable way to raise the limit (a quick check is shown after this list).
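
To see whether your 500m setting is actually present in the running proxy, and in which file, a quick check (assuming the container is named nginx-proxy):

bash
# List every file under /etc/nginx that currently contains the directive.
docker exec nginx-proxy grep -rn "client_max_body_size" /etc/nginx/
# If your setting only appears in a file you edited by hand inside the
# container, it will be lost on the next regeneration; mount it from the
# host instead.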

Proper Nginx Configuration Methods

Method 1: Environment Variables on the Proxy Container

Some setups configure upload limits through environment variables on the proxy service. Be aware that variables such as CLIENT_MAX_BODY_SIZE and GLOBAL_MAX_BODY_SIZE are not documented for every jwilder/nginx-proxy release, so check whether your image version actually honors them; if it does not, the mounted-file approach in Method 2 works on all versions. In your docker-compose.yml for the proxy service:

yaml
services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
    environment:
      CLIENT_MAX_BODY_SIZE: "500M"
      GLOBAL_MAX_BODY_SIZE: "500M"

Method 2: Mount Configuration File

Create a custom configuration file and mount it to the proxy:

bash
# Create client_max_body_size.conf
echo "client_max_body_size 500M;" > client_max_body_size.conf

# Run the container with the config mounted
docker run -d \
  --name nginx-proxy \
  -v /var/run/docker.sock:/tmp/docker.sock \
  -v /path/to/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro \
  -p 80:80 -p 443:443 \
  jwilder/nginx-proxy

Method 3: Per-Host vhost.d File

nginx-proxy identifies application containers through environment variables (VIRTUAL_HOST, VIRTUAL_PORT) rather than labels, and per-host settings live in a file named after the virtual host under /etc/nginx/vhost.d/ on the proxy (mount a ./vhost.d directory into the proxy at that path). Declare the host on the application container and create the matching file, as sketched below:

yaml
services:
  myapp:
    image: myapp
    environment:
      - VIRTUAL_HOST=mysite.io
      - VIRTUAL_PORT=8000

Complete Solution Guide

Step 1: Verify Cloudflare Limits

Check your Cloudflare plan limits:

  • Free plan: 100MB maximum upload size
  • Pro plan: 100MB maximum upload size
  • Business plan: 200MB maximum upload size
  • Enterprise plan: 500MB by default, with higher limits available on request

If you need to upload files larger than 100MB, you’ll need to either upgrade your Cloudflare plan or bypass it for upload endpoints.
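
If you want to confirm the cutoff empirically rather than rely on the plan table, a rough sketch (the /upload path is a placeholder for your real endpoint):

bash
# Payloads just under and just over the suspected 100MB limit.
dd if=/dev/zero of=/tmp/95m.bin bs=1M count=95
dd if=/dev/zero of=/tmp/105m.bin bs=1M count=105

for f in /tmp/95m.bin /tmp/105m.bin; do
  code=$(curl -s -o /dev/null -w '%{http_code}' \
    -X POST --data-binary "@$f" https://mysite.io/upload)
  echo "$f -> HTTP $code"
done
# A 413 for the 105MB file but not for the 95MB one is consistent with the
# 100MB plan limit; re-run with -D - to inspect which server answered.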

Step 2: Configure nginx-proxy Properly

The configuration that reliably survives restarts and regeneration is the one you mount from the host. Give the proxy a proxy-wide conf.d file for the global limit and a vhost.d directory for per-host overrides:

yaml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      # Proxy-wide settings, e.g. a file containing "client_max_body_size 500m;"
      - ./client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro
      # Per-host overrides such as ./vhost.d/mysite.io
      - ./vhost.d:/etc/nginx/vhost.d:ro

Settings such as proxy_request_buffering off; and the long proxy timeouts you already use can stay in /etc/nginx/vhost.d/mysite.io_location, which nginx-proxy includes inside the location block.

Step 3: Configure Specific Host Limits

If you need different limits for different hosts, create one file per host in the mounted vhost.d directory; the file name must exactly match the VIRTUAL_HOST value, which nginx-proxy reads from environment variables on the application container (not labels):

yaml
services:
  myapp:
    image: myapp
    environment:
      - VIRTUAL_HOST=mysite.io
      - VIRTUAL_PORT=8000

With ./vhost.d/mysite.io containing client_max_body_size 500m;, the limit applies only to that host.

Step 4: Test Configuration

After applying changes:

  1. Recreate the proxy container so any new volume mounts take effect: docker compose up -d nginx-proxy (a plain docker restart nginx-proxy is enough if only the contents of already-mounted files changed)
  2. Check the generated configuration: docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf
  3. Verify the client_max_body_size setting is present; a combined check is sketched below
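
A compact version of those checks, assuming the compose service and container are both named nginx-proxy:

bash
docker compose up -d nginx-proxy                      # recreate with any new mounts
docker exec nginx-proxy nginx -t                      # configuration syntax check
docker exec nginx-proxy nginx -T | grep -n "client_max_body_size"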

Alternative Approaches

Cloudflare Bypass for Uploads

Your workaround of creating a subdomain that bypasses Cloudflare is the most practical solution for large file uploads. Configure it as follows:

  1. DNS Configuration:

    • Create an A record (or a DNS-only CNAME to your origin hostname) for upload.mysite.io pointing directly to your server; a CNAME cannot point to a bare IP address
    • Turn off the Cloudflare proxy (grey cloud, "DNS only") for this record so Cloudflare's upload limit no longer applies (see the DNS check after the configuration below)
  2. nginx-proxy Configuration:

    yaml
    services:
      nginx-proxy:
        # ... existing configuration (keep the mounted conf.d and vhost.d volumes)
    
      upload-app:
        image: myapp
        environment:
          - VIRTUAL_HOST=upload.mysite.io
          - VIRTUAL_PORT=8000
    
    Then give the upload host its own, larger limit in ./vhost.d/upload.mysite.io, for example client_max_body_size 1g;.
    

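To confirm the new record really bypasses Cloudflare, check that it resolves straight to your origin rather than to a Cloudflare edge IP (203.0.113.10 below stands in for your server's address; DNS changes may take a little while to propagate):

bash
dig +short upload.mysite.io        # should print your origin IP, e.g. 203.0.113.10
dig +short mysite.io               # still prints Cloudflare edge IPs
curl -sI https://upload.mysite.io/ | grep -i '^server:'   # should not say "cloudflare"
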
Cloudflare Rules

If you want to keep Cloudflare in front of the upload endpoint, keep in mind that the maximum upload size is tied to the plan and cannot be raised with Page Rules or Origin Rules on the Free and Pro plans:

  1. Upgrade to a Business plan for a 200MB limit
  2. On an Enterprise plan, request a higher upload limit for the affected hostnames from Cloudflare

Verification Steps

To ensure your Nginx configuration is working correctly:

  1. Check the nginx-proxy logs for configuration or reload errors:

    bash
    docker logs nginx-proxy 2>&1 | grep -iE "emerg|error"
    
  2. Verify generated configuration:

    bash
    docker exec nginx-proxy nginx -T
    
  3. Test file upload without Cloudflare:

    • Temporarily disable Cloudflare proxy for your domain
    • Attempt file upload
    • If successful, Cloudflare was definitely the issue
  4. Check the response status and headers of a real upload attempt:

    bash
    curl -s -o /dev/null -D - -X POST --data-binary "@large_file.zip" https://mysite.io/upload
    
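
To test the origin while mysite.io itself is still proxied through Cloudflare, curl's --resolve option forces the connection to go to your server (again, 203.0.113.10 is a placeholder for your origin IP):

bash
# Same upload, but connected directly to the origin instead of Cloudflare.
curl -s -o /dev/null -w 'HTTP %{http_code}\n' \
  --resolve mysite.io:443:203.0.113.10 \
  -X POST --data-binary "@large_file.zip" https://mysite.io/upload
# Success here combined with a 413 through Cloudflare confirms the Nginx side
# is fine and the limit is being enforced at the edge.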

The key takeaway is that Cloudflare’s default limits are likely the root cause of your 413 errors. While properly configuring client_max_body_size in nginx-proxy is important, you’ll need to address the Cloudflare limitation either through bypass, upgrade, or custom rules to successfully handle larger file uploads.

Sources

  1. Nginx: 413 - Request Entity Too Large Error and Solution - nixCraft
  2. nginx - client_max_body_size has no effect - Stack Overflow
  3. How can I change the docker jwilder/nginx-proxy upload limits? - Stack Overflow
  4. docker-compose.yml and client_max_body_size · Issue #690 · nginx-proxy/nginx-proxy
  5. docker push error “413 Request Entity Too Large” - Stack Overflow
  6. Error: 413 “Request Entity Too Large” in Nginx with “client_max_body_size” - Medium

Conclusion

  • Cloudflare’s 100MB limit is the primary barrier to your file uploads, not Nginx configuration
  • Set client_max_body_size in files mounted into the nginx-proxy container (a *.conf file in /etc/nginx/conf.d/ for the whole proxy, or a per-host file in /etc/nginx/vhost.d/) so the limit survives configuration regeneration
  • Consider bypassing Cloudflare for upload-specific subdomains as the most practical solution
  • Verify Cloudflare’s involvement by checking for cf-ray headers or temporarily disabling Cloudflare proxy
  • Regular monitoring of both Cloudflare and Nginx logs will help diagnose future upload issues

The most efficient long-term solution is likely a combination of proper nginx-proxy configuration with environment variables and strategic Cloudflare bypass for upload endpoints, as you’ve already discovered with your upload.mysite.io subdomain.