Why NGINX Still Powers the Modern Web in 2025: Part 1


Introduction
NGINX has revolutionized modern web infrastructure, becoming the backbone of high-performance applications worldwide. In Part 1 you'll learn NGINX fundamentals and configuration, combining theoretical knowledge with real-world scenario examples.
Originally developed by Igor Sysoev in 2004, NGINX was created to solve the infamous C10K problem and has since evolved into one of the most powerful and widely adopted web servers in the world.
What is Forward Proxy and Reverse Proxy?
Understanding proxy servers is fundamental to grasping NGINX's core functionality, as it excels primarily as a reverse proxy server.
Forward Proxy:
A forward proxy acts as an intermediary between clients and the internet, sitting on the client side of the network.
Key Characteristics:
Acts on behalf of the client
Sits between client and the public internet
Forwards client requests to servers
Server doesn't know which specific client made the request
Primarily serves client needs
Real-World Example: Corporate networks use forward proxies to:
Filter employee internet access
Block social media and non-work websites
Cache frequently accessed content to save bandwidth
Provide anonymity for internal users
Monitor and log internet usage
Reverse Proxy:
A reverse proxy sits between clients and backend servers, acting on behalf of the server infrastructure.
Key Characteristics:
Acts on behalf of the server
Sits between internet clients and backend servers
Hides server implementation details from clients
Distributes incoming requests across multiple backend servers
Provides additional services like SSL termination, caching, and load balancing
Real-World Example: Netflix, Amazon, and Google use reverse proxies to:
Distribute user requests across thousands of servers worldwide
Cache popular content closer to users
Terminate SSL connections at the edge
Provide DDoS protection and security filtering
Ensure high availability and fault tolerance
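The reverse proxy pattern described above can be sketched as an NGINX configuration. This is a minimal, hypothetical example: the upstream name and backend addresses (app1.internal, app2.internal) are placeholders, not real hosts.

```nginx
http {
    # Hypothetical pool of backend application servers
    upstream backend {
        server app1.internal:8000;
        server app2.internal:8000;
    }

    server {
        listen 80;

        location / {
            # Forward client requests to the backend pool;
            # clients never see the individual servers
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

Note how the client only ever talks to port 80 on the proxy; the backend servers stay hidden behind it.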
What is a DMZ?
A DMZ (demilitarized zone) is a network segment that sits between an organization's internal network and the public internet. Publicly reachable services, such as web servers and reverse proxies, are typically placed in the DMZ so that even if one of them is compromised, attackers still cannot reach the internal network directly. NGINX is a common choice for the reverse proxy running inside a DMZ.
Understanding NGINX: Architecture & Use Cases
What is NGINX?
NGINX (pronounced "engine-x") is a widely used open-source tool that does much more than just serve web pages. Known for its speed and reliability, it also works as a reverse proxy, load balancer, and caching server. Whether you're streaming media, handling email protocols like SMTP or IMAP, or routing HTTP and TCP traffic, NGINX is built to handle it all with efficiency.
Core Capabilities:
Web Server: Serving static and dynamic content with minimal resource usage
Reverse Proxy: Forwarding client requests to backend application servers
Load Balancer: Distributing incoming traffic across multiple backend servers
HTTP Cache: Storing frequently requested content to reduce backend load
SSL/TLS Termination: Handling encryption/decryption at the network edge
Mail Proxy: Managing SMTP, POP3, and IMAP protocol connections
Stream Proxy: Handling TCP and UDP traffic for various applications
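As an illustration of that last capability, NGINX's stream context can forward raw TCP connections. A minimal sketch, assuming a hypothetical internal database host (db.internal is a placeholder):

```nginx
# Sketch only: forwards raw TCP connections on port 3306
# to a hypothetical backend (db.internal is a placeholder)
stream {
    server {
        listen 3306;
        proxy_pass db.internal:3306;
    }
}
```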
What Problem Does It Solve?
NGINX was specifically designed to solve the C10K problem - the challenge of handling 10,000 (or more) concurrent client connections on a web server efficiently.
The Traditional Problem: Before NGINX, traditional web servers like Apache used a process-per-connection or thread-per-connection model, dedicating one process or thread to each client.
This approach became unsustainable as:
Each connection consumed significant memory (8-12MB per process)
Context switching between processes became expensive
System resources were quickly exhausted
Performance degraded dramatically under high load
NGINX's Solution: NGINX was built to solve this problem using an asynchronous, event-driven model, making it lightweight and able to handle tens of thousands of simultaneous connections efficiently and reliably.
Why NGINX Became Essential:
The traditional web servers like Apache used a process-per-connection model, which became inefficient as web traffic grew exponentially. Each connection required a separate process or thread, consuming significant memory and CPU resources. NGINX's innovative approach changed this paradigm entirely.
NGINX Architecture: The Process Model
NGINX uses an event-driven, asynchronous architecture that sets it apart from traditional web servers and consists of several components:
Master process – Controls the main NGINX instance. It manages configuration, and is responsible for starting, stopping, and supervising the worker processes.
Worker processes – Handle all the actual work: managing client connections, serving static content, proxying requests, load balancing, and SSL/TLS termination.
Cache loader – Runs at startup to load cache metadata from disk into memory, making cached content immediately available after NGINX boots.
Cache manager – Runs in the background at intervals to check the cache directory, remove expired data, and ensure disk usage stays within limits.
Shared memory – Provides inter-process communication and storage for shared state, such as cache indexes, rate limiting counters, and load-balancing information.
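These components map directly onto top-level nginx.conf directives. A minimal sketch (the values are illustrative defaults, not tuned recommendations):

```nginx
# Spawn one worker per CPU core; the master process supervises them
worker_processes auto;

events {
    # Maximum simultaneous connections each worker can handle
    worker_connections 1024;
}
```

With these two directives, a 4-core machine runs one master and four workers, each capable of handling up to 1024 concurrent connections.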
Why This Architecture Matters:
Memory Efficiency: One worker can handle thousands of connections with minimal memory overhead
CPU Efficiency: No context switching between processes for each request
Scalability: Performance degrades gracefully under high load
Stability: If a worker crashes, the master spawns a new one without affecting other connections
Resource Optimization: Efficient use of system resources leads to better overall performance
Core Use Cases
NGINX's versatility makes it suitable for numerous deployment scenarios:
1. Web Server: Serving static content (HTML, CSS, JS, images) with minimal overhead
2. Reverse Proxy: Forwarding requests to backend applications
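A minimal static-file server block, for illustration (the root path /var/www/html matches the Ubuntu default, and example.com is a placeholder domain):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    # Serve static files (HTML, CSS, JS, images) from disk
    root /var/www/html;
    index index.html;

    location / {
        # Try the exact file, then a directory, else return 404
        try_files $uri $uri/ =404;
    }
}
```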
NGINX Alternatives
While NGINX is widely used, a few alternatives are worth knowing:
Apache HTTP Server – A long-standing web server with strong legacy support and a rich module ecosystem, but less efficient under heavy load compared to NGINX.
HAProxy – Specializes in high-performance load balancing and traffic distribution. Great for reliability, but not designed to serve static content.
Traefik – A modern, cloud-native reverse proxy with built-in support for containers, service discovery, and automatic SSL management.
Cloudflare (as a Service) – A managed CDN and security platform offering DDoS protection, WAF, and global content delivery, but it relies on a third-party provider.
Setting Up Nginx with Docker: Hands-On
Prerequisites
Before we begin, ensure you have Docker installed on your system. You can download it from Docker's official website.
Step 1: Setting Up the Docker Container
Let's start by creating and running an Ubuntu container with Nginx:
docker run -it --name nginx-docker -p 8080:80 ubuntu
Command Breakdown:
--name nginx-docker → Assigns a custom name to your container
-p 8080:80 → Maps your host machine's port 8080 to the container's port 80
-it ubuntu → Creates an interactive terminal session with the Ubuntu image
This command will download the Ubuntu image (if not already present) and start an interactive container session.
Step 2: Installing Nginx and Essential Tools
Once inside the container, update the package list and install Nginx:
# Update package repositories
apt update
# Install Nginx web server
apt install nginx -y
# Install vim text editor (useful for editing config files)
apt install vim -y
After running apt install vim -y, you may be prompted to configure the timezone. Choose 5 (Asia) and then 44 (Kolkata) to set the IST timezone during installation.
Pro Tip: Always run apt update first to ensure you're installing the latest versions of packages.
Step 3: Verify Nginx Installation
Check if Nginx was installed successfully:
nginx -v
or
nginx -V
You should see output similar to: nginx version: nginx/1.18.0 (Ubuntu)
Step 4: Understanding Nginx Directory Structure
After installation, all Nginx files are stored in /etc/nginx. Let's explore this directory:
cd /etc/nginx
ls -la
You'll see a structure like this:
drwxr-xr-x 8 root root 4096 Aug 19 13:38 ./
drwxr-xr-x 1 root root 4096 Aug 19 13:36 ../
drwxr-xr-x 2 root root 4096 May 27 10:28 conf.d/
-rw-r--r-- 1 root root 1125 Dec 1 2023 fastcgi.conf
-rw-r--r-- 1 root root 1055 Dec 1 2023 fastcgi_params
-rw-r--r-- 1 root root 5465 Dec 1 2023 mime.types
drwxr-xr-x 2 root root 4096 May 27 10:28 modules-available/
drwxr-xr-x 2 root root 4096 May 27 10:28 modules-enabled/
-rw-r--r-- 1 root root 1446 Aug 19 13:37 nginx.conf
-rw-r--r-- 1 root root 636 Dec 1 2023 scgi_params
drwxr-xr-x 2 root root 4096 Aug 19 13:24 sites-available/
drwxr-xr-x 2 root root 4096 Aug 19 13:24 sites-enabled/
drwxr-xr-x 2 root root 4096 Aug 19 13:24 snippets/
Key Files and Directories:
nginx.conf → Main configuration file (most important!)
sites-available/ → Contains individual site configurations
sites-enabled/ → Contains symlinks to active site configurations
conf.d/ → Additional configuration files
Step 5: Managing Nginx Service
Starting Nginx
To start the Nginx service:
service nginx start
When to use: After installation or when Nginx has been stopped.
Checking Nginx Status
To verify Nginx is running:
service nginx status
Alternative method to check running processes:
ps aux | grep nginx
You should see output showing:
nginx: master process → the main Nginx process
nginx: worker process → handles the actual requests
Stopping Nginx
When you need to stop Nginx:
service nginx stop
When to use: During maintenance, troubleshooting, or when shutting down your server.
Step 6: Testing Your Nginx Installation
Since we mapped host port 8080 to container port 80, open your web browser and navigate to:
http://localhost:8080
You should see the default Nginx Welcome Page!
Important Note: Nginx runs on port 80 by default, but since we're using Docker, we've mapped container port 80 to host port 8080. Make sure no other service is using port 8080 on your host machine.
Step 7: Creating a Custom Configuration
Now let's customize Nginx with our own configuration.
Backup the Original Configuration
Always backup before making changes:
cd /etc/nginx
mv nginx.conf nginx-backup.conf
Create New Configuration File
You can either create an empty file first or directly edit:
# Option 1: Create empty file then edit
touch nginx.conf
vim nginx.conf
# Option 2: Directly create and edit
vim nginx.conf
Add Custom Configuration
Insert the following content into your new nginx.conf:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name _;

        location / {
            return 200 "Hello from Nginx Custom Configuration via Docker\n";
            add_header Content-Type text/plain;
        }
    }
}
How to exit the vim editor: after pasting this, press Esc, then type :x and press Enter to save and quit.
Configuration Breakdown:
events → Defines connection processing parameters
worker_connections 1024 → Maximum connections per worker process
http → Main HTTP context
listen 80 → Port Nginx listens on
server_name _ → Catch-all server name
location / → Handles all requests to the root path
return 200 → Returns HTTP 200 status with a custom message
Step 8: Testing and Reloading Configuration
Test Configuration Syntax
Before applying changes, always test the configuration:
nginx -t
You should see output like this:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reload Configuration
If the test passes, reload Nginx to apply changes:
nginx -s reload
Why reload instead of restart?
reload → Applies the new configuration without dropping existing connections
restart → Stops and starts Nginx, dropping all connections
Verify Your Changes
Visit http://localhost:8080 again. You should now see your custom message: "Hello from Nginx Custom Configuration via Docker"
Common Commands Summary
Here's a quick reference of essential Nginx commands:
# Service management
service nginx start # Start Nginx
service nginx stop # Stop Nginx
service nginx status # Check status
service nginx restart # Full restart
# Configuration management
nginx -t # Test configuration
nginx -s reload # Reload configuration
nginx -s stop # Graceful stop
nginx -s quit # Graceful shutdown
# Information
nginx -v # Show version
nginx -V # Show version and compile options
Now that you have a basic Nginx setup running, in the next parts of this Nginx blog series we'll explore:
Serving static files
Setting up reverse proxy
SSL/TLS configuration
Load balancing
Custom error pages
Conclusion
You've successfully set up Nginx in a Docker container, learned how to manage the service, and created your first custom configuration. This foundation will serve you well as you continue your journey with web servers and containerization.
Remember: Always test your configurations before applying them, and keep backups of working configurations. Happy learning!
Thanks for reading…
Written by

Suraj
I'm a Developer from India with a passion for DevOps ♾️ and open-source 🧑🏻💻. I'm always striving for the best code quality and seamless workflows. Currently, I'm exploring AI/ML and blockchain technologies. Alongside coding, I write Technical blogs at Hashnode, Dev.to & Medium 📝. In my free time, I love traveling, reading books