Nginx is the backbone of modern web infrastructure: it serves static files, terminates TLS, reverse-proxies, load balances and caches — often all in one process. This guide walks through the configuration patterns sysadmins reach for most — reverse proxy, FastCGI and proxy cache, rate limiting, gzip and HTTP/2 — with examples drawn from real production servers.

Base Directory Layout

Nginx config lives under /etc/nginx/. The main entry point is nginx.conf; per-site configs go in sites-available/ with a symlink into sites-enabled/. After any edit, run nginx -t to validate the syntax, then systemctl reload nginx to apply it.
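A typical workflow for publishing a new site config, assuming the Debian/Ubuntu layout described above:

# create the file in sites-available/, then enable it with a symlink
ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx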

# /etc/nginx/nginx.conf — baseline
worker_processes auto;
worker_rlimit_nofile 65535;
events { worker_connections 4096; multi_accept on; }

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30s;
    keepalive_requests 100;
    types_hash_max_size 2048;
    server_tokens off;

    gzip on;
    gzip_vary on;
    gzip_comp_level 5;
    gzip_min_length 1024;
    gzip_types text/plain text/css application/javascript application/json image/svg+xml;
}

Reverse Proxy Setup

Putting Nginx in front of Node.js, Python or Java backends gives you SSL termination, caching, gzip and log control for free. The backend typically listens on a localhost port such as 127.0.0.1:3000, and Nginx forwards public traffic to it.

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
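One caveat: hard-coding Connection "upgrade" as above is correct for WebSocket backends, but it also disables keep-alive to the upstream for ordinary HTTP requests. The standard WebSocket-proxying refinement is a map that only sends the upgrade when the client asked for it:

# inside the http { } block
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# then, in the location { } block, replace the hard-coded header:
proxy_set_header Connection $connection_upgrade;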

FastCGI Cache (for PHP)

Hitting PHP-FPM on every request is wasteful for apps like WordPress. FastCGI cache lets Nginx store the rendered page and serve subsequent requests without ever calling PHP. On read-heavy sites, PHP-FPM load typically drops by an order of magnitude.

# inside the http { } block
fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2
    keys_zone=WORDPRESS:100m inactive=60m max_size=1g;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# inside the server { } block
set $skip_cache 0;
if ($request_method = POST) { set $skip_cache 1; }
if ($query_string != "") { set $skip_cache 1; }
if ($request_uri ~* "/wp-admin/|/xmlrpc\.php|wp-.*\.php|/feed/") { set $skip_cache 1; }
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") { set $skip_cache 1; }

location ~ \.php$ {
    fastcgi_cache WORDPRESS;
    fastcgi_cache_valid 200 60m;
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    include fastcgi_params;
    add_header X-Cache-Status $upstream_cache_status;
}
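To verify the cache is working, inspect the X-Cache-Status header added above (substitute your own domain for example.com):

curl -sI https://example.com/ | grep -i x-cache-status

The first request should report MISS; a repeat within the 60-minute validity window should report HIT, and requests matching the $skip_cache rules report BYPASS.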

Proxy Cache (for Upstream APIs)

A 5- to 60-second cache layer in front of your backend can massively accelerate slow API endpoints. It is ideal for responses that change infrequently and behave like static content.

proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=APICACHE:50m max_size=500m inactive=30m;

location /api/ {
    proxy_cache APICACHE;
    proxy_cache_valid 200 30s;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;
    proxy_pass http://127.0.0.1:3000;
    add_header X-Proxy-Cache $upstream_cache_status;
}
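One assumption to check: Nginx honours upstream Cache-Control, Expires and Set-Cookie headers by default, so an API that sends Cache-Control: no-cache will never be cached. If the 30-second cache is intentional regardless, you can tell Nginx to ignore those headers — only do this when you are sure the responses are safe to share between clients:

# inside the location /api/ { } block
proxy_ignore_headers Cache-Control Expires;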

Rate Limiting

Rate limiting is the first line of defense against brute-force logins and abusive clients (it blunts small floods, though it is not full DDoS protection). limit_req_zone caps the request rate per IP, while limit_conn_zone caps concurrent connections.

# inside the http { } block
limit_req_zone $binary_remote_addr zone=global:10m rate=15r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=2r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;

# inside the server { } block
limit_conn addr 50;
limit_req zone=global burst=30 nodelay;

location /login {
    limit_req zone=login burst=3 nodelay;
    proxy_pass http://127.0.0.1:3000;
}
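By default Nginx answers rejected requests with a 503. Since clients handle 429 Too Many Requests more sensibly, it is common to override the status (supported since Nginx 1.3.15):

# inside the server { } block
limit_req_status 429;
limit_conn_status 429;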

HTTP/2 and HTTP/3

HTTP/2 is enabled with listen 443 ssl http2;. HTTP/3 (QUIC) needs Nginx 1.25+ and listen 443 quic reuseport;. HTTP/3 shines on mobile connections where packet loss is common.
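A sketch of a server block offering both protocols, assuming an Nginx 1.25.1+ build with QUIC support (older versions keep the listen 443 ssl http2; form):

server {
    server_name example.com;
    listen 443 ssl;
    listen 443 quic reuseport;   # HTTP/3 over UDP, Nginx 1.25+
    http2 on;                    # 1.25.1+ replacement for the http2 listen parameter

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # tell HTTP/1.1 and HTTP/2 clients that HTTP/3 is available
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}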

Security Headers

These belong in the server { } block. One gotcha: add_header directives are inherited from an outer level only when the current level sets none of its own, so a location that adds any header must repeat the full set.

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

Common Pitfalls

  • server_name typo — requests silently fall through to the default server
  • SSL file paths wrong — Nginx refuses to start
  • worker_connections too low — connection refused under high traffic
  • client_max_body_size too small — 413 on uploads
  • proxy_read_timeout too short — 504 on long requests
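The last two pitfalls have one-line fixes; the values below are starting points to adjust for your workload:

# inside the server { } or location { } block
client_max_body_size 50m;    # default is 1m, hence the 413 on large uploads
proxy_read_timeout 120s;     # default is 60s, hence the 504 on slow backends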

Conclusion

Memorising directives is not enough to get real value out of Nginx — caching, rate limiting and reverse proxy must be calibrated to your site's traffic profile. For Nginx tuning, WordPress FastCGI cache setup or a reverse-proxy architecture review, the KEYDAL hosting team is happy to help.

Get help tuning your Nginx stack

Traffic-specific Nginx configuration, cache strategy and security hardening. Contact us.
