NeuroAgent

How to optimize nginx during speedtest in 3x-ui-pro

Solution for high nginx worker process load during speedtest in 3x-ui-pro. Performance optimization, configuration of limits and server monitoring.

Question

How can I fix the server speed drop during speedtest in nginx+3x-ui-pro?

I have a server running the 3x-ui-pro panel with a vless+reality VPN. Traffic passes through my domain and is masked via my SNI, and nginx serves a real-looking site on that domain for camouflage. The problem: when clients run a speedtest, the "nginx worker process" climbs to 120% CPU and higher, overloading the server. As a result, VPN traffic slows down, the 3x-ui panel becomes unreachable, and overall server performance drops. How can this be solved? I'm a beginner, so please explain in detail.

NeuroAgent

Solving High Nginx Worker Process Load During Speed Testing

To address the issue of high nginx worker process load during speed testing, you need to optimize nginx configuration, limit bandwidth for speed tests, increase the number of worker processes, configure caching, and implement server resource monitoring. Proper configuration of 3x-ui-pro and VPN services is also crucial for load distribution and preventing system overload.

Table of Contents

  • Main Causes of the Problem
  • Optimizing Nginx Worker Processes
  • Configuring Limits for Speed Tests
  • 3x-ui-pro Configuration
  • Monitoring and Diagnostics
  • Additional Optimization Methods
  • Practical Configuration Examples
  • Conclusion

Main Causes of the Problem

The issue of high nginx load during speed testing occurs due to several factors:

  1. Lack of traffic limits - Speed testing creates maximum network load without any restrictions, leading to overload of nginx worker processes.

  2. Incorrect worker process configuration - By default, nginx uses a limited number of worker processes that cannot handle peak loads.

  3. Traffic masking through nginx - Additional processing of VPN traffic through nginx increases CPU load.

  4. Lack of caching - Each speed test connection is processed without using cache.

  5. Service conflicts - Load from speed testing affects the performance of 3x-ui-pro and VPN services.

Important: When using 3x-ui-pro with nginx, additional load occurs because traffic passes through multiple processing layers: VPN → 3x-ui → nginx → internet.

Optimizing Nginx Worker Processes

Determining the optimal number of worker processes

First, you need to properly configure the number of worker processes in nginx:

nginx
# In /etc/nginx/nginx.conf
worker_processes auto;  # Automatic detection based on CPU cores
worker_cpu_affinity auto;  # Bind processes to CPU cores

For high-load systems, the following is recommended:

nginx
# For systems with 4+ cores
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
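
To choose the value, compare the number of CPU cores with the worker processes actually running; a quick check on a typical Linux host:

bash
# Number of CPU cores available to nginx
nproc

# Currently running worker processes (and the core each one is scheduled on)
ps -eo pid,psr,pcpu,cmd | grep '[n]ginx: worker'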

Configuring worker connections

Increase the number of connections per worker process:

nginx
events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

Optimizing worker rlimit

nginx
worker_rlimit_nofile 100000;
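
worker_rlimit_nofile only takes effect within the limits the OS grants nginx; you can inspect what the running master process currently has:

bash
# Open-file limit applied to the running nginx master process
grep 'open files' /proc/$(pgrep -f 'nginx: master' | head -n1)/limits

# System-wide file descriptor ceiling
cat /proc/sys/fs/file-max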

Configuring buffering

Optimize buffers to reduce CPU load:

nginx
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    keepalive_requests 1000;
    reset_timedout_connection on;
    
    client_body_buffer_size 16K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;
    
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    client_body_timeout 10;
    send_timeout 2;
}
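
After each change, validate the configuration before reloading so a typo does not take the masking site offline:

bash
# Test the syntax, then reload without dropping active connections
nginx -t && systemctl reload nginx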

Configuring Limits for Speed Tests

Limiting speed through nginx

Add limits to your site configuration:

nginx
# Zone definitions must live in the http block
limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_req_zone $binary_remote_addr zone=req:10m rate=10r/s;

# In the server block
server {
    # Connection and request limits for speedtest traffic
    limit_conn addr 10;
    limit_req zone=req burst=20 nodelay;
    
    # Response (download) speed limit per connection
    limit_rate_after 1m;
    limit_rate 512k;
}
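
You can verify that limit_rate is applied by measuring the download speed from a client; test.bin here is just a placeholder for any large file served by your domain:

bash
# Average download speed in bytes/s as reported by curl
curl -s -o /dev/null -w 'speed: %{speed_download} bytes/s\n' https://yourdomain.com/test.bin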

Dynamic limiting via the $limit_rate variable

mod_evasive is an Apache module and is not available for nginx. A similar per-request effect can be achieved with nginx's built-in $limit_rate variable, which throttles the response rate of a specific location (the /download/ path below is only an example):

nginx
# Throttle responses for a download-heavy location
location /download/ {
    set $limit_rate 512k;
}

Configuring rate limiting for speedtest

Create a separate configuration for speedtest:

nginx
# /etc/nginx/conf.d/speedtest.conf
# Files in conf.d are included inside the http block, so the zone
# definitions below must stay outside the server block
limit_conn_zone $binary_remote_addr zone=speedtest:10m;
limit_req_zone $binary_remote_addr zone=speedtest_req:10m rate=2r/m;

server {
    listen 80;
    server_name speedtest.yourdomain.com;
    
    # Strict limits for speedtest
    limit_conn speedtest 5;
    limit_req zone=speedtest_req burst=5 nodelay;
    
    limit_rate 256k;
    limit_rate_after 512k;
    
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        # Timeouts
        proxy_connect_timeout 30s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
    }
}
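
A rough way to confirm the request limit works: send a short burst of requests and count the 503 responses nginx returns once the burst allowance is used up (speedtest.yourdomain.com is the placeholder name from the config above):

bash
# Requests above the burst allowance should come back as 503
for i in $(seq 1 10); do
    curl -s -o /dev/null -w '%{http_code}\n' http://speedtest.yourdomain.com/
done | sort | uniq -c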

3x-ui-pro Configuration

Optimizing 3x-ui-pro

  1. Increase connection limits in 3x-ui-pro settings:

    • Go to “Settings” → “Inbounds”
    • Increase “Stream” → “Read Timeout” to 30000
    • Set “Write Timeout” to 30000
  2. Configure TLS to reduce load:

    json
    {
      "stream": {
        "security": "tls",
        "tlsSettings": {
          "allowInsecure": false,
          "serverName": "yourdomain.com",
          "alpn": ["h2", "http/1.1"]
        }
      }
    }
    
  3. Optimize real domain operations:

    • Use separate domains for VPN and masking
    • Configure proper SNI and Host headers (see the verification command after this list)
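
The certificate, SNI, and ALPN protocol your masking domain presents can be checked from any client machine; a quick sketch using openssl (yourdomain.com is a placeholder):

bash
# Show the certificate subject/issuer and the negotiated ALPN protocol
echo | openssl s_client -connect yourdomain.com:443 -servername yourdomain.com -alpn h2,http/1.1 2>/dev/null | grep -E 'subject=|issuer=|ALPN'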

Configuring VPN for load distribution

For vless+reality configuration:

json
{
  "inbounds": [
    {
      "protocol": "vless",
      "tag": "VLESS-REALITY",
      "listen": "127.0.0.1",
      "port": 443,
      "sniffing": {
        "enabled": true,
        "destOverride": ["http", "tls"]
      },
      "settings": {
        "clients": [
          {
            "id": "your-uuid",
            "flow": "xtls-rprx-vision"
          }
        ],
        "decryption": "none",
        "fallbacks": [
          {
            "dest": 80,
            "xver": 0
          }
        ]
      },
      "streamSettings": {
        "network": "tcp",
        "security": "reality",
        "realitySettings": {
          "show": false,
          "dest": "yourdomain.com:443",
          "xver": 0,
          "serverNames": ["yourdomain.com"],
          "privateKey": "your-private-key",
          "shortIds": [""],
          "minClientVer": "",
          "maxClientVer": "",
          "maxTimeDiff": 0
        }
      }
    }
  ]
}
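
The "your-uuid" and "your-private-key" placeholders can be generated with the Xray binary itself. The path below assumes a typical 3x-ui install that ships the binary as /usr/local/x-ui/bin/xray-linux-amd64; adjust it to wherever your Xray binary lives:

bash
# Generate a client UUID
/usr/local/x-ui/bin/xray-linux-amd64 uuid

# Generate a REALITY key pair (private key stays on the server, public key goes to clients)
/usr/local/x-ui/bin/xray-linux-amd64 x25519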

Monitoring and Diagnostics

Server resource monitoring

Install and configure monitoring:

bash
# Install htop
apt install htop -y

# Install monitoring tools
apt install sysstat -y

# Configure statistics collection
sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
systemctl restart sysstat
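
With sysstat enabled, you can watch CPU and network utilisation while a client runs a speed test:

bash
# CPU utilisation, 1-second samples, 5 reports
sar -u 1 5

# Per-interface network throughput during the test
sar -n DEV 1 5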

Creating a script for nginx monitoring

bash
#!/bin/bash
# /usr/local/bin/nginx-monitor.sh
# Requires: bc (apt install bc -y)

# Total CPU usage of all nginx worker processes (%)
cpu_usage=$(ps -C nginx -o pcpu=,cmd= | awk '/worker process/ {sum += $1} END {printf "%.1f", sum}')

# Number of TCP connections held by nginx processes
connections=$(ss -tnp 2>/dev/null | grep -c '"nginx"')

# Total memory used by nginx processes (MB)
memory=$(ps -C nginx -o rss= | awk '{sum += $1} END {printf "%.1f", sum/1024}')

echo "CPU Usage: $cpu_usage%"
echo "Active Connections: $connections"
echo "Memory Usage: ${memory}MB"

# If the combined worker CPU usage exceeds 80%, log an alert
if (( $(echo "$cpu_usage > 80" | bc -l) )); then
    echo "ALERT: Nginx load exceeds 80%" | logger -t nginx-monitor
fi
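
Install the bc dependency, make the script executable, and run it once by hand to confirm the output looks sane:

bash
apt install bc -y
chmod +x /usr/local/bin/nginx-monitor.sh
/usr/local/bin/nginx-monitor.sh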

Scheduling the monitoring script

Add to crontab:

bash
# Check every 5 minutes
*/5 * * * * /usr/local/bin/nginx-monitor.sh
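
The entry can also be added without opening an editor; this appends it to the current root crontab while keeping existing entries:

bash
( crontab -l 2>/dev/null; echo '*/5 * * * * /usr/local/bin/nginx-monitor.sh' ) | crontab -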

Logging for analysis

Configure detailed nginx logging:

nginx
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';
    
    access_log /var/log/nginx/access.log main buffer=512k flush=1m;
    error_log /var/log/nginx/error.log warn;
}
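
With the rt= field in place, a quick way to surface the slowest requests recorded in the access log:

bash
# Show the 20 largest request times from the access log
grep -o ' rt=[0-9.]*' /var/log/nginx/access.log | sort -t= -k2 -nr | head -n 20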

Additional Optimization Methods

Using caching

Configure caching in nginx:

nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=60m;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    
    server {
        location / {
            proxy_cache cache;
            proxy_pass http://backend;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        }
    }
}
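
The directory referenced by proxy_cache_path must exist and be writable by the nginx worker user (www-data on Debian/Ubuntu):

bash
mkdir -p /var/cache/nginx
chown -R www-data:www-data /var/cache/nginx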

Optimizing TCP stack

Add to /etc/sysctl.conf:

bash
# Optimization for high loads
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_tw_reuse = 1
# net.ipv4.tcp_tw_recycle was removed in Linux 4.12+ and must not be set
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_keepalive_time = 120
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5

Apply changes:

bash
sysctl -p
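
Confirm the new values are active by querying them directly:

bash
sysctl net.core.somaxconn net.ipv4.tcp_fin_timeout net.ipv4.tcp_tw_reuse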

Using gzip compression

nginx
http {
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
}
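
Compare the transferred size with and without compression to confirm gzip is actually applied (yourdomain.com is a placeholder):

bash
# Compressed vs. uncompressed response size for the same page
curl -s -H 'Accept-Encoding: gzip' -o /dev/null -w 'gzip:  %{size_download} bytes\n' https://yourdomain.com/
curl -s -o /dev/null -w 'plain: %{size_download} bytes\n' https://yourdomain.com/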

Disabling unnecessary logs

For heavily loaded environments you can disable access logging entirely (note that this also removes the request-time data used for the analysis above):

nginx
http {
    access_log off;
    log_not_found off;
}

Practical Configuration Examples

Example of optimized nginx configuration

nginx
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    keepalive_requests 1000;
    reset_timedout_connection on;
    
    # Rate limiting
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    limit_req_zone $binary_remote_addr zone=req:10m rate=10r/s;
    
    client_body_buffer_size 16K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;
    
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    client_body_timeout 10;
    send_timeout 2;
    
    # Gzip
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    
    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';
    
    access_log /var/log/nginx/access.log main buffer=512k flush=1m;
    error_log /var/log/nginx/error.log warn;
    
    # Server blocks
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Example configuration for speedtest with strict limits

nginx
# Zone definitions belong in the http block
# (e.g. at the top of a conf.d file, outside any server block)
limit_conn_zone $binary_remote_addr zone=speedtest:10m;
limit_req_zone $binary_remote_addr zone=speedtest_req:10m rate=1r/m;

server {
    listen 80;
    listen 443 ssl http2;
    server_name speedtest.yourdomain.com;
    
    # SSL Configuration
    ssl_certificate /path/to/your/cert.pem;
    ssl_certificate_key /path/to/your/key.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    
    # Strict limits for speedtest
    limit_conn speedtest 3;
    limit_req zone=speedtest_req burst=3 nodelay;
    
    # Speed limits
    limit_rate 128k;
    limit_rate_after 256k;
    
    # Timeouts
    client_body_timeout 10s;
    client_header_timeout 10s;
    keepalive_timeout 10s;
    send_timeout 10s;
    
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Proxy timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 5s;
        proxy_read_timeout 5s;
        
        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
        proxy_busy_buffers_size 8k;
        
        # Caching (requires the "cache" zone defined via proxy_cache_path in http)
        proxy_cache cache;
        proxy_cache_valid 200 302 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    }
    
    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Referrer-Policy "strict-origin-when-cross-origin";
}

Example configuration for main site

nginx
# Zone definitions belong in the http block
limit_conn_zone $binary_remote_addr zone=normal:10m;
limit_req_zone $binary_remote_addr zone=normal_req:10m rate=20r/s;

server {
    listen 80;
    listen 443 ssl http2;
    server_name yourdomain.com www.yourdomain.com;
    
    # SSL Configuration
    ssl_certificate /path/to/your/cert.pem;
    ssl_certificate_key /path/to/your/key.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    
    # Light rate limiting for normal traffic
    limit_conn normal 20;
    limit_req zone=normal_req burst=40 nodelay;
    
    # Higher speed limits for normal traffic
    limit_rate 2m;
    limit_rate_after 5m;
    
    location / {
        root /var/www/html;
        index index.html index.htm;
        
        # Security
        server_tokens off;
        
        # Caching
        expires 1h;
        add_header Cache-Control "public, immutable";
        
        # Compression
        gzip on;
        gzip_vary on;
        gzip_min_length 1024;
        gzip_comp_level 6;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    }
    
    # Location for health checks
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
}
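
The /health location gives you a lightweight liveness check that the monitoring script or an external watchdog can poll (yourdomain.com is a placeholder):

bash
# Should return HTTP 200 with the body "healthy"
curl -s -o /dev/null -w '%{http_code}\n' https://yourdomain.com/health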

Conclusion

To solve the problem of high nginx load during speed testing, you need to:

  1. Optimize the number of worker processes and their settings based on the number of CPU cores
  2. Implement speed limits for speed test traffic through nginx rate limiting
  3. Configure proper timeouts and buffers to reduce CPU load
  4. Optimize 3x-ui-pro with correct TLS and stream settings
  5. Implement monitoring to promptly detect abnormal load
  6. Use caching and compression to reduce load
  7. Optimize the TCP stack for improved network performance

Start with basic worker process optimization and rate limiting, then gradually implement other methods. Regularly monitor the system and adjust settings based on actual load. Remember to backup configurations before making changes and test the system under low-load conditions before deploying to production.
