Whitelisting API Endpoints Using NGINX: A Clean Access Control Strategy


Recently, I worked on a requirement where we needed to restrict API access in our application to only a specific set of endpoints. The goal was simple but powerful:
“If an endpoint isn't explicitly allowed, it shouldn't even reach the backend.”
To achieve this, we used NGINX as a gatekeeper, configuring it to permit only defined routes and HTTP methods. Any non-whitelisted request is met with a 404 Not Found response, even if the endpoint technically exists in the application.
Blacklisting: everything is allowed except what you explicitly block.
Whitelisting: only explicitly allowed actions are permitted, and everything else is denied by default. Whitelisting is the more secure approach because it follows 'deny by default'.
This post walks through:
Why this approach makes sense
How I implemented it using NGINX
Handling different HTTP methods per route
Practical challenges and tips
A real-world use case for multi-tenant applications
Why This Approach?
In our backend service, we deal with a mix of public and internal APIs. While some endpoints must remain open (e.g., health checks, public data), others contain sensitive logic that should be tightly restricted.
Instead of embedding this logic within the application, we moved it to the NGINX layer, achieving:
Better separation of concerns
Centralized endpoint management
Easy audit and maintainability
We also follow a single-branch architecture where multiple clients (say, Client A and Client B) share the same codebase. In such scenarios, it becomes critical to:
Prevent clients from accessing each other's endpoints
Avoid exposing unused or under-development APIs
The Whitelist Strategy
The core idea is to whitelist endpoints, including the allowed HTTP methods. Only requests that match both the path and method are passed through to the backend; everything else is blocked at the NGINX level.
Rules:
GET /api/data → Allowed (if whitelisted)
POST /api/data → Blocked (if not whitelisted)
GET /api/private → Blocked (not whitelisted)
How I Implemented It in NGINX
Here’s a simplified version of the NGINX configuration I used:
server {
    listen 80;
    server_name example.com;

    location /api {
        if ($is_allowed = 0) {
            return 404;
        }
        proxy_pass http://localhost:5000;
    }
}
The key is the map directive, which checks both the HTTP method and the URI path. Note that map must be declared in the http context, not inside a server block.
http {} - applies globally
server {} - applies to a specific virtual host
location {} - applies to a specific path
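Putting these contexts together, a minimal end-to-end sketch might look like the following (the port, server name, and /api/data route are placeholders, not our production values):

```nginx
http {
    # map must live at the http level, outside any server block
    map "$request_method:$request_uri" $is_allowed {
        ~^GET:/api/data(?:\?.+)?$ 1;   # allow GET /api/data, query string optional
        default 0;                     # deny everything else
    }

    server {
        listen 80;
        server_name example.com;

        location /api {
            if ($is_allowed = 0) {
                return 404;   # blocked requests look like missing endpoints
            }
            proxy_pass http://localhost:5000;
        }
    }
}
```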
Below is the map block from my NGINX configuration, where I implemented whitelisting for a specific endpoint.
The Logic Behind map
map "$request_method:$request_uri" $is_allowed {
    ~^GET:/api/PathQueryparameter(?:/[^?]+(?:/[^?]+)*)?\?.+$ 1;
    # Add more whitelist entries here
    default 0;
}
What’s happening here:
The map directive inspects the incoming request’s method and URI.
If the method and path combination matches a whitelisted pattern, $is_allowed is set to 1.
Otherwise, it stays at the default of 0 → resulting in a 404.
Pattern Explanation
To support a wide range of API structures, I used the regex pattern:
(?:/[^?]+(?:/[^?]+)*)?\?.+$
This helps match these formats:
/api/PathQueryparameter?tag=value
/api/PathQueryparameter/device/123?type=A
/api/PathQueryparameter/a/b/c?x=1
But crucially, it does not allow:
Static endpoints like /api/PathQueryparameter (no query string)
Other HTTP verbs unless explicitly allowed
This helps enforce fine-grained control, such as:
Allowing GET for an endpoint but denying POST unless stated
Blocking incomplete or malformed routes
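As a sketch of that per-method control, the map block can carry one entry per method (the /api/orders endpoint here is hypothetical):

```nginx
map "$request_method:$request_uri" $is_allowed {
    # GET requires a query string; POST is allowed only on the bare path
    ~^GET:/api/orders\?.+$ 1;
    ~^POST:/api/orders$    1;
    # PUT, DELETE, etc. are never listed, so they fall through to the default
    default 0;
}
```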
Real-World Use Case: Multi-Client Shared Backend
In our project, both Client A and Client B share the same backend code, but their endpoint needs differ. Instead of bloating our application logic with if-client-A conditions, we used the NGINX whitelist strategy.
This let us:
Isolate API access for each client
Avoid exposing irrelevant APIs
Deploy faster without fear of leaking internal routes
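One way to sketch this isolation is to ship a client-specific map block with each client's deployment, so each NGINX instance only knows its own routes (the route names below are hypothetical):

```nginx
# Deployed only with Client A; Client B's config lists its own routes instead
map "$request_method:$request_uri" $is_allowed {
    ~^GET:/api/client-a/reports(?:\?.+)?$ 1;
    ~^POST:/api/client-a/reports$         1;
    default 0;   # Client B's endpoints simply don't exist here
}
```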
Pro Tips and Challenges
1. Complex Endpoints Need Careful Regex
Get comfortable with regular expressions.
Test them using tools like regex101.
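Beyond regex101, a pattern can also be sanity-checked locally. Here is a small Python sketch that mimics NGINX's map lookup on "$request_method:$request_uri", using the pattern from this post translated into Python regex syntax:

```python
import re

# Whitelist pattern from the map block, in Python syntax.
# "PathQueryparameter" is the placeholder endpoint name used in this post.
WHITELIST = re.compile(r"^GET:/api/PathQueryparameter(?:/[^?]+(?:/[^?]+)*)?\?.+$")

def is_allowed(method: str, uri: str) -> bool:
    """Mimic NGINX's map lookup on "$request_method:$request_uri"."""
    return WHITELIST.match(f"{method}:{uri}") is not None

# Requests with a query string (and optional sub-paths) pass:
assert is_allowed("GET", "/api/PathQueryparameter?tag=value")
assert is_allowed("GET", "/api/PathQueryparameter/device/123?type=A")
assert is_allowed("GET", "/api/PathQueryparameter/a/b/c?x=1")
# The bare endpoint and other verbs are blocked:
assert not is_allowed("GET", "/api/PathQueryparameter")
assert not is_allowed("POST", "/api/PathQueryparameter?tag=value")
```

Note that in NGINX, $request_uri includes the query string, which is why the query portion can be matched directly in the pattern.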
2. Always Include a Default Rule
- Use default 0; in the map block to explicitly block everything else.
3. Log Blocked Requests
- For visibility, consider logging requests that hit the 404 rule:
error_log /var/log/nginx/restricted_requests.log notice;
4. Automate Whitelist Generation
- If you have many routes, consider generating this config from a JSON/YAML list of allowed endpoints + methods.
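As a rough sketch of that idea, a short script can render the map block from a JSON list (the {"method", "pattern"} entry shape is an assumption for illustration, not a fixed format):

```python
import json

# Hypothetical JSON whitelist: one entry per allowed method + URI pattern.
WHITELIST_JSON = json.dumps([
    {"method": "GET", "pattern": "/api/data(?:\\?.+)?"},
    {"method": "POST", "pattern": "/api/data"},
])

def render_map(json_text: str) -> str:
    """Render an NGINX map block from a JSON list, ending in a default-deny rule."""
    lines = ['map "$request_method:$request_uri" $is_allowed {']
    for entry in json.loads(json_text):
        lines.append(f"    ~^{entry['method']}:{entry['pattern']}$ 1;")
    lines.append("    default 0;")
    lines.append("}")
    return "\n".join(lines)

print(render_map(WHITELIST_JSON))
```

The generated block can then be written to a file and pulled into the main config with an include directive, keeping the whitelist itself in version-controlled JSON.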
Benefits Recap
Implementing an API whitelist at the NGINX level gave us:
Better access control with minimal backend changes
Audit-friendly design, everything allowed is declared in one place
Safer deployments, no accidental endpoint exposure
Multi-tenant readiness, client isolation made easy
It's a lightweight, secure, and easily maintainable enhancement to API management, especially when combined with authentication, rate limiting, and logging.
If you're managing APIs behind NGINX, I highly recommend giving this method a try. It’s clean, efficient, and gives you complete peace of mind.
Let’s Connect
Got questions about NGINX API whitelisting or want to implement it in your stack?
Whether you're just getting started or scaling for production, I’d love to help.
Reach Me At
Email: officialdeepmodak@gmail.com
LinkedIn: Deep Modak
Read More: Check out my other blog on NGINX
Building secure APIs shouldn't be complicated. Feel free to reach out, collaborate, or just say hi!
Written by

Deep Modak
I am a software developer with a passion for creating efficient and scalable solutions. Currently, I'm working with .NET and Angular, focusing on backend and frontend development, and striving to improve my problem-solving and coding skills. I enjoy learning new technologies and contributing to innovative projects.