Nginx - How to use

Commands

  • For Installing:

sudo apt update

sudo apt upgrade

sudo apt install nginx

  • For starting:

sudo systemctl start nginx

sudo systemctl enable nginx

  • For checking the status:

sudo systemctl status nginx

  • For reloading after any config file change (reload applies the new config without dropping connections):

sudo systemctl reload nginx

  • For restarting:

sudo systemctl restart nginx

  • Test the Nginx configuration for syntax errors:

sudo nginx -t
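
Since a bad config file makes a reload or restart fail, a common habit is to chain the syntax check and the reload, so the reload only runs if the check passes:

```shell
# Validate the configuration first; reload Nginx only if the check succeeds.
sudo nginx -t && sudo systemctl reload nginx
```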

Uses

Nginx serves many purposes; some of them are:

  1. Web Server

    Default Configurations

    After installing Nginx, open your web browser and navigate to http://your_server_ip. You should see the Nginx default welcome page.

    Default page configurations can be seen at → /etc/nginx/sites-available/default

     server {
         listen 80;  # Listen on port 80, the default HTTP port
         server_name localhost;  # The server name, here it is set to localhost

         # root and index are directives or keywords
         root /var/www/html;  # The root directory where files are served from
         index index.html index.htm index.nginx-debian.html;  # The default files to serve

         location / {
             try_files $uri $uri/ =404;  # Try to serve the requested URI; if not found, return a 404
         }
     }
    
    • We can serve static files using this process.

    • Use <domain_name> in server_name instead of localhost so the index.html page is served when <domain_name> is opened in the browser.

How to serve static files on different domain names

1️⃣ Create a new config file

    sudo nano /etc/nginx/sites-available/practice1

Add this:

    server {
        listen 80;
        server_name practice1.heysohail.me;

        root /var/www/html/practice1;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }
    }

2️⃣ Create the directory and add an index.html there

    sudo mkdir -p /var/www/html/practice1
    sudo vim /var/www/html/practice1/index.html

3️⃣ Enable the config

    sudo ln -s /etc/nginx/sites-available/practice1 /etc/nginx/sites-enabled/

4️⃣ Restart Nginx

    sudo systemctl restart nginx

Now, practice1.heysohail.me will serve files from /var/www/html/practice1/.
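
Before DNS for the subdomain resolves, the new vhost can be sanity-checked from the server itself by setting the Host header manually (a quick check; replace 127.0.0.1 with your server IP if testing remotely):

```shell
# Ask Nginx for the practice1 vhost directly, without relying on DNS.
curl -H "Host: practice1.heysohail.me" http://127.0.0.1/
```

If the vhost is wired up correctly, this prints the contents of /var/www/html/practice1/index.html.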

  2. Reverse Proxy

/etc/nginx/
│── nginx.conf  # Main config file
│── sites-available/
│   ├── default  # Default site config
│   ├── api.example.com  # Separate config for API
│   ├── www.example.com  # Separate config for frontend
│── sites-enabled/
│   ├── default -> ../sites-available/default (symlink)
│   ├── api.example.com -> ../sites-available/api.example.com (symlink)
│   ├── www.example.com -> ../sites-available/www.example.com (symlink)

Priority:

  • The main nginx.conf file is the first to be loaded by Nginx when it starts up. It includes general settings for Nginx, such as global configurations, event settings, and the HTTP block where you can define server-level configurations.

  • sites-enabled/ files are included inside the nginx.conf file. The nginx.conf file typically has a line like this: include /etc/nginx/sites-enabled/*;

  • This means the configuration files inside sites-enabled/ are loaded after nginx.conf is processed. They are specific to particular domains or applications and can override the general settings defined in nginx.conf.

    1. nginx.conf file

      events {
          worker_connections 1024;
      }

      http {
          server {
              listen 80;
              server_name api.example.com;

              location / {
                  proxy_pass http://localhost:3000;
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
              }
          }

          server {
              listen 80;
              server_name www.example.com;

              location / {
                  proxy_pass http://localhost:5173;
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
              }
          }
      }
    

    Summary of Nginx Configuration & How It Works

    1️⃣ DNS Configuration

Both api.example.com and www.example.com have DNS A records pointing to the same server IP, so requests for either domain land on this Nginx instance.
2️⃣ How Nginx Resolves Requests

  • The request reaches Nginx, which checks the Host header in the request.

  • Based on server_name, Nginx forwards the request to the correct backend service.

📌 Resolution Process:

  1. User requests http://api.example.com → Nginx matches server_name api.example.com and proxies the request to localhost:3000.

  2. User requests http://www.example.com → Nginx matches server_name www.example.com and proxies the request to localhost:5173.

🔹 Key Directives:

  • listen 80; → Listens for HTTP requests on port 80.

  • server_name api.example.com; → Handles API requests.

  • server_name www.example.com; → Handles frontend requests.

  • proxy_pass → Forwards requests to the correct backend service.

  • proxy_set_header Host $host; → Passes the original domain name to the backend.


3️⃣ Final Request Flow

    Client Request → DNS Resolves to Server IP → Nginx Reads Host Header
    → Matches server_name → Forwards to Correct Backend (3000 or 5173) → Response Sent Back

2. sites-available/<domain_name> file

  • Instead of defining everything inside nginx.conf, each site gets its own configuration file in sites-available/.

  • A symbolic link is created in sites-enabled/ to activate the configuration.

Example Configuration for Each Site

1️⃣ You would create separate config files in /etc/nginx/sites-available/.

  1. API Configuration (/etc/nginx/sites-available/api.example.com)
    server {
        listen 80;
        server_name api.example.com;

        location / {
            proxy_pass http://localhost:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
  2. Frontend Configuration (/etc/nginx/sites-available/www.example.com)
    server {
        listen 80;
        server_name www.example.com;

        location / {
            proxy_pass http://localhost:5173;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

2️⃣ Enabling the Sites

After creating these files, you need to enable them using symbolic links:

    sudo ln -s /etc/nginx/sites-available/api.example.com /etc/nginx/sites-enabled/
    sudo ln -s /etc/nginx/sites-available/www.example.com /etc/nginx/sites-enabled/

Then, restart Nginx for the changes to take effect:

    sudo systemctl restart nginx

3️⃣ How Requests Are Handled in This Method

  1. User requests http://api.example.com → the api.example.com config matches server_name and the request is proxied to localhost:3000.

  2. User requests http://www.example.com → the www.example.com config matches server_name and the request is proxied to localhost:5173.

🚀 Same outcome as before, but configuration is modular and easier to manage!

4️⃣ Final Request Flow

    Client Request → DNS Resolves to Server IP → Nginx Reads sites-enabled Config
    → Matches server_name → Forwards to Correct Backend (3000 or 5173) → Response Sent Back

  3. Rate Limiter

Nginx provides a simple way to add rate limiting using the limit_req_zone and limit_req directives.

1️⃣ Edit the main configuration file /etc/nginx/nginx.conf to define a rate limit zone.

2️⃣ Add the following to the http block:

    http {
        limit_req_zone $binary_remote_addr zone=mylimit:10m rate=2r/s;
        ...
    }

The limit_req_zone directive defines a shared memory zone that stores the state of rate limits for incoming requests. Here's a breakdown of the directive above:

  • $binary_remote_addr: This is a variable that holds the client’s IP address in a binary format. Using the binary format saves memory, which is important when dealing with large numbers of requests.

  • zone=mylimit:10m: This specifies the name and size of the shared memory zone used to store the state of rate limits. A 10MB zone can typically store about 160,000 states (given that each state takes about 64 bytes).

  • rate=2r/s: Each IP address is allowed to make 2 requests per second.
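
The capacity figure above is easy to sanity-check with a quick shell calculation (assuming the ~64 bytes per state mentioned above):

```shell
# Rough capacity of the 10 MB zone, assuming ~64 bytes per state.
zone_bytes=$((10 * 1024 * 1024))   # 10m
state_bytes=64
echo "$((zone_bytes / state_bytes)) states"   # prints "163840 states"
```

163,840 states, i.e. roughly the 160,000 figure quoted above.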

3️⃣ Edit your server block configuration /etc/nginx/sites-available/default to apply the rate limit:

    server {
        ...
        location / {
            limit_req zone=mylimit burst=20 nodelay;
            try_files $uri $uri/ =404;
        }
        ...
    }
  • zone=mylimit applies the rate limiting defined by the mylimit zone.

  • The burst allows temporary spikes in traffic while still enforcing the limit over time. burst=20 accepts up to 20 requests beyond the defined rate, so even at a limit of 2 requests per second, a short spike of requests is absorbed instead of rejected outright.

  • nodelay means requests within the burst allowance are served immediately instead of being queued and spaced out to match the rate; only requests beyond the burst are rejected at once (with a 503 by default).
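
As a back-of-the-envelope illustration (a simplification, not Nginx internals): with burst=20 and nodelay, an instantaneous burst can get roughly 1 + burst requests through, and the rest are rejected:

```shell
# Illustrative arithmetic for rate=2r/s, burst=20, nodelay,
# when 25 requests arrive at the same instant.
burst=20
total=25
allowed=$((1 + burst))          # one request within the rate, plus the burst
rejected=$((total - allowed))
echo "allowed=$allowed rejected=$rejected"   # prints "allowed=21 rejected=4"
```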

4️⃣ Test the rate limiting:

    ab -n 25 -c 5 http://your_server_ip/

This simulates 25 requests with 5 concurrent connections to see rate limiting in action (ab is Apache Bench, available in the apache2-utils package).

  4. Load Balancer

Different Modes of LB

  • Round-Robin (Default): Evenly distributes requests across servers; no configuration needed.

  • Weighted Round-Robin: Assigns weights to servers to handle traffic proportionally (server 127.0.0.1 weight=3;).

  • Least Connections: Sends requests to the server with the fewest active connections (least_conn).

  • IP Hash: The server to which a request is sent is determined from the client IP address. In this case, either the first three octets of the IPv4 address or the whole IPv6 address are used to calculate the hash value. The method guarantees that requests from the same address get to the same server unless it is not available (ip_hash).

Sample Code (Round-Robin)

    http {
        upstream backend_servers {
            server 127.0.0.1:3000;
            server 127.0.0.1:3001;
            server 127.0.0.1:3002;
        }

        server {
            listen 80;
            server_name api.example.com;

            location / {
                proxy_pass http://backend_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }
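
The rotation that round-robin produces can be sketched as a toy loop (an illustration of the scheduling idea, not Nginx's implementation):

```shell
# Toy model of round-robin: six requests cycle through three upstreams in order.
servers="127.0.0.1:3000 127.0.0.1:3001 127.0.0.1:3002"
i=0
for req in 1 2 3 4 5 6; do
    set -- $servers          # $1..$3 hold the upstream addresses
    shift $((i % 3))         # rotate: request i goes to server (i mod 3)
    echo "request $req -> $1"
    i=$((i + 1))
done
```

Requests 1-3 go to :3000, :3001, :3002, and requests 4-6 repeat the cycle.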

Weighted Round Robin

    http {
        upstream backend_servers {
            server 127.0.0.1:3000;
            server 127.0.0.1:3001;
            server 127.0.0.1:3002 weight=5;
        }

        server {
            listen 80;
            server_name api.example.com;

            location / {
                proxy_pass http://backend_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }

Least Connections

    http {
        upstream backend_servers {
            least_conn;
            server 127.0.0.1:3000;
            server 127.0.0.1:3001;
            server 127.0.0.1:3002 weight=5;
        }

        server {
            listen 80;
            server_name api.example.com;

            location / {
                proxy_pass http://backend_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }

IP Hash

    http {
        upstream backend_servers {
            ip_hash;
            server 127.0.0.1:3000;
            server 127.0.0.1:3001;
            server 127.0.0.1:3002 weight=5;
        }

        server {
            listen 80;
            server_name api.example.com;

            location / {
                proxy_pass http://backend_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }

If one of the servers needs to be temporarily removed from the load‑balancing rotation, it can be marked with the down parameter in order to preserve the current hashing of client IP addresses. Requests that were to be processed by this server are automatically sent to the next server in the group:

    upstream backend_servers {
        ip_hash;
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002 down;
    }

Backup Servers

They are used only when the primary servers fail.

    upstream backend_servers {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002 backup;
    }
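
How quickly Nginx considers a primary server failed can be tuned with the standard max_fails and fail_timeout server parameters (a sketch; the values here are illustrative):

```nginx
upstream backend_servers {
    # Mark this server as unavailable after 3 failed attempts within 30s,
    # and keep it out of rotation for the next 30s.
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002 backup;  # used only while the others are down
}
```
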

  5. Cache

     proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=60m;

     server {
         listen 80;
         server_name api.example.com;

         # CACHED PATH
         location ~ /blog/(.*)+/(.*)$ {
             proxy_pass http://example.com;
             proxy_cache cache;
             proxy_cache_valid any 10m;
             proxy_cache_methods GET HEAD;
             add_header X-Proxy-Cache $upstream_cache_status;
         }

         location / {
             proxy_pass http://example.com;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
         }
     }
    

Declaration

  • proxy_cache_path sets the path where cached data is saved.

  • levels sets the number of subdirectory levels in the cache.

  • keys_zone=cache:10m defines a shared memory zone named cache with a maximum size of 10 MB.

  • inactive=60m means cached entries that are not accessed within that time are removed.

Location Block

Only requests whose path matches the regex /blog/(.*)+/(.*)$ are handled by this location block, and only those responses are cached.

  • proxy_cache enables caching using the cache zone defined above

  • proxy_cache_valid sets how long cached responses stay fresh (here, responses with any status are cached for 10 minutes)

  • proxy_cache_methods tells Nginx to cache only GET and HEAD requests

  • add_header adds an X-Proxy-Cache header to the response, exposing the cache status from $upstream_cache_status (e.g. HIT, MISS, or EXPIRED).
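
With that header in place, cache behaviour can be observed with two consecutive requests (illustrative; the path must match the /blog/ regex above):

```shell
# The first request should report MISS; an immediate repeat should report HIT.
curl -sI http://api.example.com/blog/2024/post | grep -i x-proxy-cache
curl -sI http://api.example.com/blog/2024/post | grep -i x-proxy-cache
```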


Written by

Md Sohail Ansari

Final Year Undergrad at IIIT Bhagalpur and a Full Stack Web Developer. Portfolio: https://www.heysohail.me/