Nginx powers roughly 34% of all web servers worldwide (Netcraft, 2026). It earned that dominance not by serving static HTML, but by sitting in front of application backends — terminating TLS, spreading load, enforcing rate limits, and shielding upstream services from the raw internet. Yet most tutorials still show a 2015-era proxy_pass snippet and call it done.
This guide covers the complete production-grade Nginx reverse proxy setup that teams actually run in 2026: automated TLS with Certbot, TLS 1.3, JSON-structured access logs, per-zone rate limiting, least_conn load balancing, and the full security header stack. The example backend is a FastAPI, Flask, or Node app listening on a local port, but the patterns apply universally.
1. Nginx Roles in 2026 — and Why Choose It
Before touching config files, it is worth being clear about what Nginx is doing in each deployment:
- Reverse proxy — receives external HTTP/HTTPS traffic and forwards it to one or more upstream application processes. The app never sees raw internet clients.
- TLS terminator — handles the TLS handshake so the application speaks plain HTTP internally, simplifying app code and certificate management.
- Load balancer — distributes requests across multiple upstream instances using pluggable algorithms.
- Static asset server — serves CSS, JS, and images directly with optimal caching headers, bypassing the application entirely.
Nginx vs Caddy vs Traefik in 2026:
| Feature | Nginx | Caddy | Traefik |
|---|---|---|---|
| TLS auto-renewal | Via Certbot | Built-in | Built-in |
| Config style | Declarative files | Caddyfile / JSON | YAML / Labels |
| Dynamic config | Reload required | Live | Live |
| Performance | Best in class | Excellent | Good |
| Observability | Flexible log format | Structured JSON | Prometheus-native |
Nginx is the right choice when you need maximum throughput, fine-grained control over every header and timeout, and a stable config-file workflow. Caddy and Traefik win in container orchestration environments where services appear and disappear dynamically.
2. Installation
Ubuntu/Debian — Official Nginx Repository
The version shipped in Ubuntu's default repositories lags the upstream release by months. For production use, install from Nginx's own apt repository to get the latest stable release with all standard modules compiled in.
# Add Nginx signing key
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
# Add the stable repository (Ubuntu 24.04 noble)
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu noble nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt update && sudo apt install nginx
sudo systemctl enable --now nginx
# Verify
nginx -v # nginx/1.26.x or later
Stable vs Mainline
- Stable (1.26.x) — recommended for production. Bug fixes only; no new features mid-release.
- Mainline (1.27.x) — includes new features under active development. Fine for development environments, acceptable in production if you want the very latest HTTP/3 and other modules.
Key Directories
/etc/nginx/ # All configuration lives here
/etc/nginx/nginx.conf # Root config file
/etc/nginx/conf.d/ # Drop-in site configs (loaded automatically)
/var/log/nginx/ # access.log and error.log
/usr/share/nginx/html/ # Default document root
/var/run/nginx.pid # PID file
3. Configuration Structure
The nginx.conf Skeleton
The top-level nginx.conf should stay minimal and delegate to conf.d/ files:
user nginx;
worker_processes auto; # One worker per CPU core
worker_rlimit_nofile 65535; # Match or exceed OS file descriptor limit
error_log /var/log/nginx/error.log notice;
pid /run/nginx.pid;
events {
worker_connections 4096; # Max simultaneous connections per worker
use epoll; # Linux event model (set automatically on Linux)
multi_accept on; # Accept all pending connections at once
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush on; # Send headers in one packet
tcp_nodelay on; # Disable Nagle for keep-alive connections
keepalive_timeout 75s;
keepalive_requests 1000;
server_tokens off; # Don't advertise Nginx version in headers
include /etc/nginx/conf.d/*.conf;
}
Tuning notes:
- worker_processes auto reads the CPU count at startup. On an 8-core machine you get 8 workers.
- worker_connections 4096 means up to 8 × 4096 = 32,768 simultaneous connections for the process.
- worker_rlimit_nofile must be at or above worker_connections times two (each proxied connection uses a client socket and an upstream socket). Set the OS limit with ulimit -n 65535 or via /etc/security/limits.conf.
- server_tokens off removes the Server: nginx/1.26.x header, which reduces information leakage.
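The capacity arithmetic in the tuning notes, spelled out as a quick sketch (the 8-core count is an assumption):

```python
# Assumptions: 8 CPU cores and the nginx.conf values shown above.
workers = 8                       # worker_processes auto on an 8-core box
worker_connections = 4096         # events { worker_connections 4096; }

max_connections = workers * worker_connections
print(max_connections)            # 32768

# A proxied connection holds two sockets (client side + upstream side),
# so each worker needs at least twice worker_connections descriptors:
min_rlimit_nofile = worker_connections * 2
print(min_rlimit_nofile)          # 8192; worker_rlimit_nofile 65535 leaves headroom
```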
Ubuntu: sites-available / sites-enabled Convention
Some distributions (and Certbot) use an alternative layout:
/etc/nginx/sites-available/myapp # Authoritative config file
/etc/nginx/sites-enabled/myapp # Symlink → sites-available/myapp
Enable a site with:
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
Both layouts work. The conf.d/ pattern is simpler and what Nginx upstream recommends.
4. Reverse Proxy — The Core Pattern
Create /etc/nginx/conf.d/myapp.conf:
upstream api_backend {
server 127.0.0.1:8000;
keepalive 32; # Keep 32 idle connections to the upstream open
}
server {
listen 80;
server_name api.example.com;
# Proxy to application
location / {
proxy_pass http://api_backend;
proxy_http_version 1.1;
# Pass real client information to the application
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Required for keepalive to upstream
proxy_set_header Connection "";
# Timeouts
proxy_connect_timeout 10s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
}
}
Why Each Header Matters
- X-Real-IP — the client's IP address. Without this, your app sees 127.0.0.1 for every request. FastAPI apps served by Uvicorn see the real address (via request.client.host) when Uvicorn runs with --proxy-headers; Flask exposes it as request.remote_addr once Werkzeug's ProxyFix middleware is applied.
- X-Forwarded-For — the full chain of proxy IPs. $proxy_add_x_forwarded_for appends the connecting client's IP to any existing X-Forwarded-For header, preserving the hop chain.
- X-Forwarded-Proto — tells the app whether the original request arrived over HTTP or HTTPS. Critical for generating correct redirect URLs.
- Connection "" — clears the Connection: close header that Nginx would otherwise send to the upstream, allowing idle upstream connections to be reused via keepalive.
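On the application side, the header-trust logic can be sketched in plain Python. The helper name is hypothetical; it mirrors what middleware like Werkzeug's ProxyFix does given a known number of trusted proxy hops:

```python
def client_ip_from_xff(xff_header: str, trusted_proxies: int = 1) -> str:
    """Resolve the real client IP from an X-Forwarded-For chain.

    Each trusted proxy appends the peer address it actually saw, so only
    the rightmost `trusted_proxies` entries can be believed; the real
    client is the entry appended by the outermost trusted proxy."""
    hops = [h.strip() for h in xff_header.split(",") if h.strip()]
    return hops[-trusted_proxies]

# A client tries to spoof the header; Nginx appends the true peer IP,
# so the spoofed value is ignored:
print(client_ip_from_xff("1.2.3.4, 203.0.113.7"))  # 203.0.113.7
```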
Proxying WebSockets
WebSocket proxying requires HTTP/1.1 and header upgrades:
location /ws/ {
proxy_pass http://api_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_read_timeout 3600s; # Keep WebSocket connections alive longer
}
5. TLS with Certbot (Let's Encrypt)
Let's Encrypt issues free, trusted TLS certificates valid for 90 days. Certbot automates obtaining and renewing them, and its Nginx plugin edits your config automatically.
Install Certbot
sudo apt install certbot python3-certbot-nginx
Obtain a Certificate
Your DNS must already point to the server before running this command.
sudo certbot --nginx -d api.example.com -d www.api.example.com
Certbot will:
1. Verify domain ownership via HTTP-01 challenge (temporarily serves a token file).
2. Download the certificate chain to /etc/letsencrypt/live/api.example.com/.
3. Modify your Nginx config to add TLS directives.
4. Install a systemd timer or cron job for automatic renewal.
Test Auto-Renewal
sudo certbot renew --dry-run
The renewal check runs automatically twice daily via the certbot.timer systemd unit; certificates are only reissued once they are within 30 days of expiry. Inspect the timer with:
systemctl status certbot.timer
The Resulting TLS Config (Certbot Baseline)
After Certbot modifies your config, it looks roughly like:
server {
listen 443 ssl;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://api_backend;
# ... proxy headers as above
}
}
server {
listen 80;
server_name api.example.com;
return 301 https://$host$request_uri;
}
Hardening to TLS 1.3 (2026 Standard)
Certbot's defaults still allow TLS 1.2 for compatibility. In 2026, if you control your client base (APIs, internal tools, SPAs), it is reasonable to enforce TLS 1.3 only:
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off; # Not needed for TLS 1.3 (no cipher negotiation)
If you need to support older clients (legacy mobile apps, enterprise software), keep TLS 1.2:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
This cipher list matches the Mozilla SSL Configuration Generator "Intermediate" profile, which targets Firefox 27+, Chrome 30+, IE 11+.
OCSP Stapling
OCSP stapling caches the certificate revocation response on the server, eliminating the round trip the browser would otherwise make to the CA's OCSP server during the TLS handshake. This can shave 100–300ms off the handshake. Note that Let's Encrypt has been phasing out its OCSP service in favor of CRLs, so stapling only helps when your CA still operates an OCSP responder.
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/api.example.com/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
Use Qualys SSL Labs to verify your TLS config scores A+.
6. Rate Limiting
Nginx implements leaky-bucket rate limiting with limit_req_zone. Define zones in the http {} block (in nginx.conf or a dedicated conf.d/rate_limits.conf) and apply them in location blocks.
Defining Zones
# http block in nginx.conf or a shared conf.d file
# Zone: limit by client IP — 10 requests per second, 10MB shared memory
limit_req_zone $binary_remote_addr zone=api_ip:10m rate=10r/s;
# Zone: limit by API key header — useful for authenticated APIs
limit_req_zone $http_x_api_key zone=api_key:10m rate=100r/s;
# Zone: stricter limits for login/auth endpoints
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
# Connection-count zone (different from request rate)
limit_conn_zone $binary_remote_addr zone=addr:10m;
Memory sizing: 10MB stores approximately 160,000 IP address states. For high-traffic sites, 50–100MB is more appropriate.
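The sizing figure is simple arithmetic, sketched here under the documented approximation of roughly 16,000 64-byte states per megabyte:

```python
# Each limit_req key occupies a fixed-size state; nginx documents
# roughly 16,000 64-byte states per megabyte of zone memory.
zone_bytes = 10 * 1024 * 1024     # zone=api_ip:10m
state_bytes = 64                  # approximate size of one key's state
print(zone_bytes // state_bytes)  # 163840, i.e. the ~160,000 figure above
```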
Applying Limits in Location Blocks
server {
# ...
location /api/ {
# Allow bursts of 20 requests; reject excess immediately (nodelay)
limit_req zone=api_ip burst=20 nodelay;
limit_conn addr 20;
limit_req_status 429; # Return 429 Too Many Requests (not 503)
proxy_pass http://api_backend;
}
location /api/v1/ {
# Both zones apply. A request without the X-API-Key header has an
# empty $http_x_api_key, and requests with an empty key are not
# accounted, so keyless clients are limited by the IP zone only.
limit_req zone=api_key burst=200 nodelay;
limit_req zone=api_ip burst=20 nodelay;
limit_req_status 429;
proxy_pass http://api_backend;
}
location /auth/login {
limit_req zone=login burst=5 nodelay;
limit_req_status 429;
proxy_pass http://api_backend;
}
}
Parameters explained:
- burst=20 — the leaky bucket can absorb a spike of 20 requests before rate enforcement kicks in.
- nodelay — process burst requests immediately rather than queuing them at the sustained rate. Without nodelay, a burst of 20 requests would be spread over 2 seconds; with nodelay they are served instantly and the excess is simply rejected.
- limit_req_status 429 — RFC 6585 specifies 429 for rate-limit responses. The default is 503, which some monitoring tools interpret as a server error.
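A minimal Python model of this accounting (class name and structure are illustrative, not Nginx source) makes the burst/nodelay behavior concrete:

```python
class LeakyBucket:
    """Toy model of limit_req with nodelay: the bucket drains at `rate`
    requests per second; a request is rejected once the tracked excess
    exceeds `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.level = 0.0   # excess requests currently "in the bucket"
        self.last = 0.0    # timestamp of the previous request

    def allow(self, now: float) -> bool:
        # Drain for the time elapsed since the previous request.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level > self.burst:   # bucket full: reject with 429
            return False
        self.level += 1.0             # nodelay: serve immediately, track excess
        return True

bucket = LeakyBucket(rate=10, burst=20)   # mirrors zone=api_ip burst=20 nodelay
accepted = sum(bucket.allow(0.0) for _ in range(30))
print(accepted)  # 21: one request at the sustained rate plus a burst of 20
```

A second later the bucket has drained 10 slots, so traffic at the sustained rate flows again.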
Serving a Custom 429 Page
error_page 429 /rate_limit.json;
location = /rate_limit.json {
default_type application/json;
return 429 '{"error": "rate_limit_exceeded", "message": "Too many requests. Please slow down.", "retry_after": 60}';
}
7. Security Headers
Security headers are HTTP response headers that instruct the browser on how to handle the content. They defend against clickjacking, MIME sniffing, XSS, and protocol downgrade attacks. Add them in a shared snippet or directly in the server {} block.
# Prevent the page from being loaded in an iframe (clickjacking protection)
add_header X-Frame-Options "SAMEORIGIN" always;
# Prevent MIME-type sniffing
add_header X-Content-Type-Options "nosniff" always;
# Control referrer information sent to other sites
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Restrict browser features
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
# HTTP Strict Transport Security — tell browsers to always use HTTPS
# Start with a short max-age and increase after testing
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Content Security Policy — restrict which sources can load resources
# This example suits a pure API server; SPAs need a richer policy
add_header Content-Security-Policy "default-src 'none'; frame-ancestors 'none'" always;
The always keyword makes Nginx add the header regardless of response status. Without it, add_header only applies to a limited set of statuses (200, 201, 204, 206, 301, 302, 303, 304, 307, 308), so 4xx/5xx error responses would be missing the headers.
HSTS Preload
Once you've set Strict-Transport-Security with a long max-age, you can apply to have your domain included in browser-native HSTS preload lists:
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
Submit at hstspreload.org. Be certain before doing this: once submitted, removing a domain from the preload list takes months and requires browsers to push an update.
Content Security Policy for SPAs
For applications that serve frontend code, the CSP needs to be more permissive:
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'nonce-{NONCE}'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; connect-src 'self' https://api.example.com; frame-ancestors 'none'; upgrade-insecure-requests" always;
For API-only servers, default-src 'none' is the correct strict setting.
8. Caching Static Assets
Static assets (CSS, JS, images, fonts) should be served directly by Nginx, never by the application, and cached aggressively.
location /static/ {
alias /var/www/myapp/static/;
# Cache for one year — only correct for content-hashed filenames
expires 1y;
add_header Cache-Control "public, immutable";
add_header Vary "Accept-Encoding";
# Gzip compression on-the-fly
gzip on;
gzip_types text/css
application/javascript
application/json
image/svg+xml
font/woff2;
gzip_min_length 256; # Don't compress tiny files
gzip_comp_level 5; # Balance CPU cost vs compression ratio
# Serve pre-compressed .gz files if they exist
gzip_static on;
}
immutable Cache-Control hint: tells the browser not to even send a conditional request (If-None-Match) for the full max-age duration. This is only correct when file names include a content hash (e.g., app.a3f9c2.js). With content hashing, the URL changes when content changes, so immutable caching is safe.
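Content hashing is normally the bundler's job, but a sketch shows the idea (the function name and digest length are arbitrary choices):

```python
import hashlib
import pathlib

def hashed_name(path: str, content: bytes, digest_len: int = 6) -> str:
    """Build app.a3f9c2.js-style names: the URL changes whenever the
    content changes, which is what makes `immutable` caching safe."""
    p = pathlib.PurePosixPath(path)
    digest = hashlib.sha256(content).hexdigest()[:digest_len]
    return f"{p.stem}.{digest}{p.suffix}"

v1 = hashed_name("static/app.js", b"console.log('v1');")
v2 = hashed_name("static/app.js", b"console.log('v2');")
print(v1, v2)  # two different filenames, so the 1-year cache never goes stale
```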
Upstream Cache for API Responses
For cacheable API responses, Nginx can act as a reverse proxy cache:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
max_size=1g inactive=60m use_temp_path=off;
server {
location /api/public/ {
proxy_cache api_cache;
proxy_cache_valid 200 1m; # Cache 200 responses for 1 minute
proxy_cache_use_stale error timeout updating;
proxy_cache_lock on; # Prevent thundering herd on cache miss
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://api_backend;
}
}
9. Load Balancing
Add multiple upstream servers and choose a balancing algorithm:
upstream api_backend {
# Algorithm: least_conn routes to the server with fewest active connections.
# This is usually better than round-robin for APIs with variable latency.
least_conn;
server 10.0.1.10:8000 weight=3; # Gets 3x more traffic than weight=1 servers
server 10.0.1.11:8000; # weight=1 by default
server 10.0.1.12:8000 backup; # Only used when all primary servers are down
keepalive 64; # Idle keepalive connections per worker
keepalive_requests 1000;
keepalive_timeout 75s;
}
Balancing Algorithms Compared
| Algorithm | Directive | Best For |
|---|---|---|
| Round-robin | (default) | Homogeneous backends, uniform request cost |
| Least connections | least_conn | APIs with variable response times |
| IP hash | ip_hash | Session affinity (stateful apps without sticky sessions) |
| Random | random | Large upstream pools |
least_conn is almost always the right choice for modern API backends. Round-robin works well when request cost is uniform (e.g., static file serving), but for API calls where some requests take 10ms and others take 500ms, least_conn prevents slow servers from accumulating a backlog.
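A toy dispatch simulation (all names hypothetical; a fixed 10 ms arrival spacing is assumed) illustrates why: with one 500 ms backend next to one 10 ms backend, round-robin lets a backlog pile up on the slow server while least_conn keeps queues near zero:

```python
import itertools

def dispatch(strategy, service_times, n_requests, interval=0.01):
    """Toy model: one request arrives every `interval` seconds; active[i]
    holds finish times of requests still in flight on backend i."""
    active = [[] for _ in service_times]
    rr = itertools.cycle(range(len(service_times)))
    max_backlog = 0
    for k in range(n_requests):
        now = k * interval
        for conns in active:                      # retire finished requests
            conns[:] = [t for t in conns if t > now]
        if strategy == "round_robin":
            i = next(rr)
        else:                                     # least_conn
            i = min(range(len(active)), key=lambda j: len(active[j]))
        active[i].append(now + service_times[i])
        max_backlog = max(max_backlog, len(active[i]))
    return max_backlog

slow_fast = [0.5, 0.01]    # a 500 ms backend next to a 10 ms backend
print(dispatch("round_robin", slow_fast, 200))   # backlog piles up on the slow backend
print(dispatch("least_conn", slow_fast, 200))    # queues stay near zero
```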
Passive Health Checks (Open Source)
Nginx Plus has active health checks. In open-source Nginx, passive health detection uses max_fails and fail_timeout:
upstream api_backend {
least_conn;
server 10.0.1.10:8000 max_fails=3 fail_timeout=30s;
server 10.0.1.11:8000 max_fails=3 fail_timeout=30s;
server 10.0.1.12:8000 backup;
keepalive 64;
}
After 3 failed attempts within the 30-second window, Nginx marks the server as unavailable for 30 seconds, then resumes sending it traffic.
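A rough state-machine sketch of this accounting (names are illustrative; the real Nginx internals differ):

```python
class PassiveHealth:
    """Toy model of max_fails/fail_timeout: after `max_fails` failures
    inside one `fail_timeout` window, skip the server for `fail_timeout`
    seconds, then try it again."""

    def __init__(self, max_fails: int = 3, fail_timeout: float = 30.0):
        self.max_fails, self.fail_timeout = max_fails, fail_timeout
        self.fails, self.window_start, self.down_until = 0, 0.0, 0.0

    def available(self, now: float) -> bool:
        return now >= self.down_until

    def report(self, ok: bool, now: float) -> None:
        if ok:
            self.fails = 0            # a success clears the failure count
            return
        if now - self.window_start > self.fail_timeout:
            self.fails, self.window_start = 0, now   # new failure window
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout
            self.fails = 0

srv = PassiveHealth()
for t in (0.0, 1.0, 2.0):             # three failures within 30 s
    srv.report(ok=False, now=t)
print(srv.available(10.0))  # False: marked down until t = 32
print(srv.available(40.0))  # True: retried after fail_timeout
```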
10. Performance Tuning
The following settings belong in the http {} block of nginx.conf:
http {
# Worker and connection settings (already covered above)
worker_connections 4096;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# Keepalive
keepalive_timeout 75s;
keepalive_requests 1000;
# Client request limits
client_max_body_size 64m; # Max upload size; adjust for your app
client_body_timeout 30s;
client_header_timeout 30s;
# Hide version from error pages
server_tokens off;
# Open file cache — reduces stat() syscalls for static files
open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
}
Stub Status Page (Monitoring)
Enable the built-in status page for monitoring integration:
server {
listen 127.0.0.1:8080; # Localhost only — never expose publicly
location /nginx_status {
stub_status;
allow 127.0.0.1;
deny all;
}
}
This exposes active connections, accepts/requests counts, and reading/writing/waiting states. Tools like nginx-prometheus-exporter scrape this endpoint to produce Prometheus metrics.
11. JSON Access Logging
Structured JSON logs integrate directly into log aggregation pipelines (Loki, Elasticsearch, Datadog) without parsing configuration. Define the format in the http {} block:
log_format json_combined escape=json
'{'
'"time":"$time_iso8601",'
'"remote_addr":"$remote_addr",'
'"method":"$request_method",'
'"uri":"$uri",'
'"args":"$args",'
'"status":$status,'
'"body_bytes_sent":$body_bytes_sent,'
'"request_time":$request_time,'
'"upstream_addr":"$upstream_addr",'
'"upstream_response_time":"$upstream_response_time",'
'"upstream_status":"$upstream_status",'
'"http_referer":"$http_referer",'
'"http_user_agent":"$http_user_agent",'
'"http_x_forwarded_for":"$http_x_forwarded_for",'
'"http_x_api_key":"$http_x_api_key",'
'"request_id":"$request_id"'
'}';
access_log /var/log/nginx/access.log json_combined;
Key fields for operational use:
- request_time — total time to serve the request including upstream wait. High values indicate slow upstreams.
- upstream_response_time — time the upstream spent generating the response. Compare with request_time to isolate network vs processing delays.
- upstream_status — the HTTP status from the backend, before any Nginx transformation.
- request_id — a unique per-request ID from the $request_id variable (available in Nginx 1.11.0+). Pass it to the upstream with proxy_set_header X-Request-ID $request_id; for correlated tracing.
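These fields are what make the JSON format operationally useful. For example, subtracting upstream_response_time from request_time isolates proxy and network overhead (the log line below is fabricated for illustration):

```python
import json

# A fabricated json_combined line (values chosen for illustration):
line = ('{"time":"2026-01-15T09:30:00+00:00","remote_addr":"203.0.113.7",'
        '"method":"GET","uri":"/api/users","status":200,'
        '"request_time":0.312,"upstream_response_time":"0.297",'
        '"upstream_addr":"127.0.0.1:8000","request_id":"7f3a9c"}')

entry = json.loads(line)
# upstream_response_time is logged as a string because it can be a
# comma-separated list when Nginx retries across several upstreams.
upstream = float(entry["upstream_response_time"].split(",")[0])
overhead = entry["request_time"] - upstream
print(f"nginx + network overhead: {overhead * 1000:.0f} ms")  # 15 ms
```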
Error Log Levels
error_log /var/log/nginx/error.log warn;
Valid levels (most to least verbose): debug, info, notice, warn, error, crit, alert, emerg. In production, warn or error is appropriate. debug generates extremely high volume and should only be used when actively debugging.
12. Complete Production Config
Below is a single config file combining all of the above for api.example.com. It assumes the json_combined log format from section 11 is defined at http level (for example in a shared conf.d file):
# /etc/nginx/conf.d/api.example.com.conf
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_ip:10m rate=10r/s;
limit_req_zone $http_x_api_key zone=api_key:10m rate=100r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
limit_conn_zone $binary_remote_addr zone=addr:10m;
upstream api_backend {
least_conn;
server 127.0.0.1:8000 max_fails=3 fail_timeout=30s;
keepalive 32;
}
# HTTP → HTTPS redirect
server {
listen 80;
server_name api.example.com;
return 301 https://$host$request_uri;
}
# HTTPS server
server {
listen 443 ssl;
http2 on;
server_name api.example.com;
# TLS certificates (managed by Certbot)
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
# TLS parameters — Mozilla Intermediate profile, adjusted for 2026
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/api.example.com/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=()" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'none'; frame-ancestors 'none'" always;
# Logging
access_log /var/log/nginx/api.example.com.access.log json_combined;
error_log /var/log/nginx/api.example.com.error.log warn;
# Static assets
location /static/ {
alias /var/www/myapp/static/;
expires 1y;
add_header Cache-Control "public, immutable";
gzip on;
gzip_types text/css application/javascript image/svg+xml;
gzip_min_length 256;
}
# Auth endpoint — strict rate limit
location /auth/login {
limit_req zone=login burst=5 nodelay;
limit_req_status 429;
proxy_pass http://api_backend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-ID $request_id;
proxy_set_header Connection "";
}
# API endpoints
location /api/ {
limit_req zone=api_ip burst=20 nodelay;
limit_req zone=api_key burst=200 nodelay;
limit_conn addr 20;
limit_req_status 429;
proxy_pass http://api_backend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-ID $request_id;
proxy_set_header Connection "";
proxy_connect_timeout 10s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
}
# Custom error pages
error_page 429 /errors/429.json;
error_page 502 503 504 /errors/5xx.json;
location ^~ /errors/ {
internal;
default_type application/json;
}
}
13. Testing and Debugging
Config Validation
Always validate before reloading:
sudo nginx -t # Test config syntax
sudo nginx -T # Dump the entire resolved config (useful for debugging includes)
sudo nginx -s reload # Graceful reload — no dropped connections
nginx -t exits with code 0 on success and 1 on failure. Use this in CI/CD pipelines:
sudo nginx -t && sudo nginx -s reload || { echo "nginx config test or reload failed" >&2; exit 1; }
Live Log Monitoring
# Follow access log in real time
sudo tail -f /var/log/nginx/access.log
# Filter for 5xx errors
sudo tail -f /var/log/nginx/access.log | grep '"status":5'
# Follow systemd journal for nginx service
sudo journalctl -u nginx -f
Check Response Headers
# Check TLS and security headers
curl -sI https://api.example.com
# Full verbose TLS handshake info
curl -sv https://api.example.com 2>&1 | grep -E "^\*"
# Test rate limiting (send 30 quick requests)
for i in $(seq 1 30); do curl -so /dev/null -w "%{http_code}\n" https://api.example.com/api/; done
Connection Counting with ss
# Count established connections to port 443
ss -nt | grep ':443' | grep ESTAB | wc -l
# Show connections per state
ss -nt | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
ngxtop — Real-time Log Analysis
pip install ngxtop
# Analyze live access log with JSON format
ngxtop -f json -a status
SSL Labs Test
Submit your domain to Qualys SSL Labs to get a letter grade and detailed analysis of your TLS configuration. An A+ score requires:
- No support for protocols below TLS 1.2
- HSTS with max-age >= 180 days
- No use of RC4, 3DES, or export cipher suites
- OCSP stapling enabled
Putting It Together: Deployment Checklist
Before going live, verify each of the following:
- sudo nginx -t passes without errors.
- curl -sI https://api.example.com returns Strict-Transport-Security and other security headers.
- curl -sI http://api.example.com returns a 301 redirect to HTTPS.
- SSL Labs scores A or A+.
- sudo certbot renew --dry-run succeeds.
- Rate limiting responds with 429 (not 503) when triggered.
- proxy_set_header X-Forwarded-For is passing the real client IP to the application.
- /var/log/nginx/access.log is writing valid JSON.
- Upstream health: curl http://127.0.0.1:8080/nginx_status shows expected connection counts.
- Log rotation is configured (/etc/logrotate.d/nginx should exist by default).
Further Reading
- Nginx Official Documentation — the authoritative reference for every directive
- Mozilla SSL Configuration Generator — generate TLS configs matched to your Nginx version and target browser support
- Certbot Instructions — EFF's step-by-step Certbot guides per platform
- Qualys SSL Labs Server Test — validate and grade your TLS configuration
- Nginx Reverse Proxy Admin Guide — official admin guide for proxy configuration
- NGINX Cookbook — Derek DeJonghe, O'Reilly (2nd ed.) — comprehensive reference for advanced patterns