The Art of Proxying: Mastering Traffic Management Like a Pro


Welcome back to the Taming Nginx series! In this article, we’re diving into the art of reverse proxying. While many guides focus on the basics, this article is for those who want to push Nginx to its limits: we'll explore advanced and peculiar use cases that can take your configuration to the next level.
If you missed the earlier articles, here’s a quick recap:
Smarter Defaults for Subdomains and SSL - The first article in the series where we talk about managing subdomains globally
Why Your Web Server Structure is Holding You Back (And How to Fix It) - A guide to structuring folders for managing multiple domains and services.
Wildcard SSL and Beyond - We explored SSL, wildcard certificates and mutual SSL setups.
This time, it’s all about unlocking the potential of Nginx as a reverse proxy. We’ll examine how each use case addresses real-world challenges and elevates your configurations.
Reverse Proxy Refresher
Here's a quick primer on what a reverse proxy does:
At its core, a reverse proxy sits between clients and backend servers, forwarding requests and responses while optionally performing additional tasks like SSL termination, caching or load balancing.
If you’ve followed the earlier articles in this series, you’ll know that we redirect all HTTP traffic to HTTPS by default and use wildcard SSL certificates to secure subdomains. The examples in this article build upon that foundation, assuming the same ssl_params file for shared SSL settings and no server blocks listening on port 80.
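For context, here is a rough sketch of what that shared setup might look like. The exact contents of ssl_params come from the earlier SSL article, so treat the directives below and the catch-all port 80 redirect as illustrative assumptions rather than the canonical versions:

# /etc/nginx/ssl_params (illustrative sketch; your real file may differ)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;

# Global catch-all so plain HTTP never reaches the examples below
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}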
Also note that I've deliberately excluded mutual SSL here (since it was covered before), along with caching and security, as I'll cover those in detail in the next article in the series.
1. Layer 7 Load Balancing with Sticky Sessions
Layer 7, also known as the Application Layer in the OSI model, is where things start getting interesting. While the lower layers focus on moving data from point A to point B, Layer 7 cares about what the data actually is. Think HTTP requests, cookies, headers, and yes, those mysterious Nginx config files you’ve probably copy-pasted more than once.
This is also where load balancing gets a little smarter. Instead of simply distributing traffic evenly across servers, Layer 7 load balancers can make routing decisions based on actual application data. For example, login traffic might go to one server, while API calls are directed elsewhere.
Sticky sessions solve a very human problem: keeping users tied to the same server during a session. It’s like remembering which checkout line you were in at the supermarket. Without it, users might lose their shopping carts or get mysteriously logged out. With it, everything runs more smoothly, even if the magic is hidden under layers of headers and cookies.
Configuration:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
}

server {
    listen 443 ssl;
    server_name example.com;
    include /etc/nginx/ssl_params;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
What's going on here?
Hopefully, you weren’t expecting anything special here. The ip_hash directive uses the client’s IP address to generate a consistent hash value, which determines the target server within the upstream block. This method works as long as your client’s IP remains stable and makes short work of most session-dependent tasks like e-commerce or user dashboards.
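One caveat: ip_hash keys on the client address, so users behind carrier-grade NAT or roaming between networks can hop servers or pile onto one. If your application already sets a session cookie, a hedged alternative is the generic hash directive keyed on that cookie. The cookie name sessionid below is an assumption; substitute whatever your backend actually issues:

upstream backend {
    # Hash on the application's session cookie instead of the client IP.
    # "consistent" enables ketama hashing, so adding or removing servers
    # only remaps a fraction of existing sessions.
    hash $cookie_sessionid consistent;
    server backend1.example.com;
    server backend2.example.com;
}

Note that requests arriving without the cookie (a user's very first hit, for example) all hash to the same value, so the initial spread is only as good as how quickly your backend sets the cookie.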
2. Protocol Translation: WebSockets, gRPC and HTTP/2
Before WebSockets came along, we had to fake real-time communication using long polling or by bombarding servers with constant HTTP requests. It worked, sort of. But it felt like trying to have a conversation by mailing each other letters every few seconds. Not exactly efficient.
Then WebSockets showed up in the HTML5 spec around 2011, and everything changed. Suddenly, we had a persistent, two-way connection over a single TCP channel. That meant the server could actually talk back without waiting for the client to ask first. Live chat, real-time notifications, and collaborative tools suddenly felt snappy and alive, all thanks to WebSockets.
It was a huge shift for the web. We went from “refresh to see if anything changed” to “watch it happen in real time.”
The good news? If you're using Nginx, wiring up WebSockets isn’t some mystical process. It plays nicely out of the box, acting as the middleman between your users and whatever magic your backend is doing.
WebSockets:
server {
    listen 443 ssl;
    server_name example.com;
    include /etc/nginx/ssl_params;

    location /ws/ {
        proxy_pass http://websocket_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
What's going on here?
So the proxy_pass directive specifies the backend server or application that will handle WebSocket traffic; Nginx forwards WebSocket requests from clients to it (websocket_server here stands for an upstream block or resolvable host running your WebSocket app).
The proxy_http_version 1.1 directive ensures that HTTP/1.1 is used for the connection to the backend, which is required because the Upgrade handshake doesn't exist in HTTP/1.0, Nginx's default for proxied requests.
The proxy_set_header Upgrade $http_upgrade and proxy_set_header Connection "upgrade" directives manage the protocol upgrade from HTTP to WebSocket. These headers signal the backend server to switch protocols and keep the WebSocket connection open.
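A small refinement worth knowing, borrowed from the common Nginx WebSocket pattern: hard-coding Connection "upgrade" asks for an upgrade on every request through that location, even plain HTTP ones. A map lets the Connection header follow whatever the client actually sent, and a longer proxy_read_timeout stops idle sockets from being cut after the 60-second default. Treat the exact timeout value as an assumption to tune for your app:

# In the http {} context
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name example.com;
    include /etc/nginx/ssl_params;

    location /ws/ {
        proxy_pass http://websocket_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_read_timeout 300s;   # keep idle WebSockets alive longer than the 60s default
    }
}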
Now let's talk about gRPC (gRPC Remote Procedure Call) - a modern, high-performance framework developed by Google in 2015. Built on top of HTTP/2, it enables efficient, bi-directional streaming and multiplexing over a single connection, making it ideal for real-time and low-latency applications.
gRPC with HTTP/2:
server {
    listen 443 ssl;
    server_name grpc.example.com;
    include /etc/nginx/ssl_params;
    http2 on; # Enable HTTP/2 for this server block

    location / {
        grpc_pass grpc://127.0.0.1:50051;
    }
}
What's going on here?
Not much! The grpc_pass directive is the heart of this configuration, forwarding gRPC traffic to the backend application listening on port 50051. Unlike HTTP/1.x, gRPC relies on HTTP/2, which enables multiplexed streams over a single connection. One version note: the standalone http2 on; directive arrived in Nginx 1.25.1; on older releases you enable it with listen 443 ssl http2; instead.
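Because gRPC requests arrive as ordinary HTTP/2 paths of the form /package.Service/Method, you can route individual services to different backends with plain location blocks. The service names, host and ports below are hypothetical stand-ins for whatever your .proto files actually define:

server {
    listen 443 ssl;
    server_name grpc.example.com;
    include /etc/nginx/ssl_params;
    http2 on;

    # Orders handled by one process...
    location /shop.OrderService/ {
        grpc_pass grpc://127.0.0.1:50051;
    }

    # ...inventory by another, using grpcs:// if the backend itself speaks TLS
    location /shop.InventoryService/ {
        grpc_pass grpcs://inventory.internal:50052;
    }

    # Anything unmatched gets a plain HTTP 404, which gRPC clients surface as an error
    location / {
        return 404;
    }
}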
3. API Gateway with Rate Limiting
APIs are the backbone of modern apps. They power everything from social feeds to online payments, and without them, most apps would just sit there looking pretty but doing absolutely nothing.
Of course, once you open up an API to the world, you're also inviting in everything from curious developers to overenthusiastic bots. If you’re not careful, one rogue script can bring your backend to its knees.
That’s where rate limiting comes in. It keeps things fair by controlling how often someone (or something) can make requests. Think of it like a bouncer at a club. One request? Come on in. A thousand requests in five seconds? Take a seat.
Back in the day, we cobbled this together with custom scripts or middleware. It worked (mostly!) but it wasn’t elegant, and it definitely didn’t scale well.
Thankfully, Nginx makes this much simpler. With built-in rate limiting baked into your proxy configuration, you can manage traffic cleanly and efficiently, without resorting to duct tape and a prayer.
Configuration:
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=5r/s;

    server {
        listen 443 ssl;
        server_name api.example.com;
        include /etc/nginx/ssl_params;

        location / {
            limit_req zone=api_limit burst=10 nodelay;
            proxy_pass http://backend_api;
        }
    }
}
What's going on here?
The limit_req_zone directive defines a shared memory zone for tracking requests per IP: 10 MB of storage keyed by $binary_remote_addr, with an allowed rate of 5 requests per second. This shared memory is how Nginx keeps count of what each client has been sending.
The limit_req directive works as the gatekeeper. With burst=10 nodelay, up to 10 requests above the steady rate are served immediately instead of being queued; anything beyond that is rejected (a 503 by default) rather than hammering the backend. This setup provides a way to ensure fair resource usage while safeguarding backend performance.
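Two tweaks I'd consider on top of this, both hedged suggestions rather than part of the original setup: return 429 Too Many Requests instead of the default 503 so well-behaved clients can tell throttling from an outage, and add a second zone keyed on an API key header so heavy users behind a shared NAT don't starve each other. The X-Api-Key header name and the 20 r/s figure are assumptions; adjust them to whatever your API actually issues:

http {
    # Per-IP zone from the example above plus a per-API-key zone
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=5r/s;
    limit_req_zone $http_x_api_key     zone=key_limit:10m rate=20r/s;

    server {
        listen 443 ssl;
        server_name api.example.com;
        include /etc/nginx/ssl_params;

        location / {
            # Both limits apply; requests with no X-Api-Key have an empty key
            # and simply aren't counted against the key-based zone.
            limit_req zone=api_limit burst=10 nodelay;
            limit_req zone=key_limit burst=40 nodelay;
            limit_req_status 429;   # "Too Many Requests" instead of the default 503
            proxy_pass http://backend_api;
        }
    }
}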
4. Dynamic Upstream Selection
Dynamic upstream selection is a lifesaver when you're juggling staging environments, multi-tenant apps, or messy deployment workflows. The old-school way meant setting up multiple, mostly copy-pasted server blocks. Not fun. Not scalable.
With Nginx’s set directive, you can route traffic based on things like query parameters, headers, or cookies. It’s clean, flexible, and saves you from maintaining five different configs that all do basically the same thing.
Configuration:
server {
    listen 443 ssl;
    include /etc/nginx/ssl_params;

    location / {
        set $upstream http://backend1;
        if ($arg_env = "staging") {
            set $upstream http://staging_backend;
        }
        proxy_pass $upstream;
    }
}
What's going on here?
The set directive is a powerful tool for dynamically assigning variable values based on specific conditions. In this configuration, set determines the upstream server by inspecting the env query parameter (exposed to Nginx as $arg_env). If it equals "staging", the request is routed to the staging backend; otherwise, it defaults to the production backend. One practical note: because proxy_pass now receives a variable, backend1 and staging_backend must either be defined as upstream blocks or Nginx needs a resolver directive to look the names up at request time.
This ability to assign variables conditionally streamlines complex routing scenarios, allowing configurations to remain concise, modular and readable. set can also be used for other creative use cases, like adjusting cache keys, generating custom headers or controlling feature flags. If you'd rather avoid if altogether, a map achieves the same routing declaratively, as sketched below.
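Here is that map-based alternative. It lives in the http context, evaluates lazily, keeps the routing table in one place, and is easy to extend with more environments. It assumes backend1 and staging_backend are upstream blocks defined elsewhere:

# http {} context: translate the ?env= query parameter into an upstream
map $arg_env $target_upstream {
    default   http://backend1;
    staging   http://staging_backend;
}

server {
    listen 443 ssl;
    include /etc/nginx/ssl_params;

    location / {
        proxy_pass $target_upstream;
    }
}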
5. Reverse Proxy with Backend Health Checks
How many times have you had a backend service go down and your frontend go berserk or greet your users with the infamous 502 Bad Gateway error?
Health checks provide an elegant solution to this problem. Open-source Nginx handles this passively out of the box: if a server fails enough real requests, it's temporarily removed from the upstream pool until it recovers. Active health checks, where Nginx probes the backends on its own schedule, come with the health_check directive in the commercial NGINX Plus (or a third-party module).
Configuration:
upstream backend {
    zone backend 64k;   # shared memory zone, required for active health checks
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com backup;
}

server {
    listen 443 ssl;
    include /etc/nginx/ssl_params;

    location / {
        proxy_pass http://backend;
        proxy_connect_timeout 2s;                    # keeps probes and requests snappy
        health_check interval=5s passes=2 fails=3;   # NGINX Plus only
    }
}
What's going on here?
The health_check directive steps in as your reverse proxy's safety net. It actively probes the upstream servers at the interval you define (here, every 5 seconds) to verify their responsiveness, and if a server fails to respond correctly it's temporarily pulled from the pool. Two things to keep in mind: this active form of health checking is an NGINX Plus feature, and the directive lives in the location block (with a zone declared in the upstream) rather than in the upstream itself.
The passes and fails parameters determine how many consecutive successes or failures are required for a server to be reinstated or marked unhealthy respectively. This ensures that temporary blips don't immediately mark a server as unhealthy, while persistent issues are handled gracefully.
Your frontend remains resilient and cryptic error messages for users become a thing of the past.
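If you're on stock open-source Nginx, you don't get the active health_check directive, but the passive equivalent gets you most of the way there and is what I'd reach for in that case. This is a sketch of that approach; the thresholds are assumptions to tune for your traffic:

upstream backend {
    # Mark a server unavailable for 30s after 3 failed requests
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    server backend3.example.com backup;   # only used when the others are down
}

server {
    listen 443 ssl;
    include /etc/nginx/ssl_params;

    location / {
        proxy_pass http://backend;
        # Retry the next server on connection errors, timeouts and 5xx responses
        proxy_next_upstream error timeout http_502 http_503 http_504;
    }
}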
What’s Next?
At this point, it’s clear that Nginx isn’t just a reverse proxy. It’s more like a traffic control tower for your infrastructure. From sticky sessions to dynamic upstreams, it quietly handles a lot more than most give it credit for.
Next, we’ll take things a bit lower level and explore streaming and protocol handling with the stream {} block. Because why stop at HTTP when there’s an entire transport layer waiting to be tamed?
If you’ve got clever configs, strange use cases, or questions you’ve been saving for a Stack Overflow post, feel free to drop them in the comments. Always happy to trade notes.
Enjoyed this post? Subscribe to the newsletter to follow the rest of the series. Or don’t. (But then Nginx might start sending all your traffic to /dev/null.)