Configuring Nginx web server

Nginx is one of the most popular web servers in the world. If you don’t know what a web server is, it can be simply defined as a service application, running either on your local machine or on a server, that delivers web content (HTML) and listens and responds to HTTP requests. Some other popular web servers include Apache, OpenLiteSpeed, and IIS.
We will be focusing particularly on the Nginx web server in this one. It is a very versatile web server that can also act as a reverse proxy and a load balancer. We will discuss these features in a bit.
Installation
The Nginx package is available on most platforms and in most package managers (there is an image on Docker Hub as well). Below are the instructions to install Nginx with some of the most popular package managers:
apt:
sudo apt install nginx -y
pacman:
sudo pacman -S nginx
dnf:
sudo dnf install nginx -y
yum:
sudo yum install nginx -y
Configuration
In most Linux distros, the configuration for nginx is located under /etc/nginx. You can open the nginx.conf file there and take a look at the default settings. You will see that the nginx service can be run as a particular user via the user directive, followed by a username. Likewise, there are many other settings you can modify within this file; any modifications made here affect the whole nginx web server.
Each server block loaded through nginx.conf defines one of the sites hosted on this web server, and nginx.conf itself is where you configure nginx's global settings.
Generic Web Server
Using nginx as a web server requires only the most basic setup. By default, nginx serves the index.html file located at /var/www/html, though this default may vary among Linux distributions and operating systems. You can specify which directory nginx should serve web content from in the nginx.conf file.
Alternatively, if you want to serve multiple sites, you can create separate directories the way Debian/Ubuntu based distros do:
/etc/nginx/
|
|--sites-available/
|--sites-enabled/
All the available configs for server blocks are put in the sites-available directory with a .conf extension. Then, to enable a config, you simply symlink the desired .conf file from sites-available into the sites-enabled directory.
Next, you must put the line below in the nginx.conf file in order for nginx to load the server config you have just enabled:
include /etc/nginx/sites-enabled/*.conf;
The above line includes all the .conf files inside the sites-enabled directory. We do not include the sites-available directory because that directory is for storing all our available configs, including the ones we are not currently using. You can enable any of them by symlinking it into the sites-enabled directory.
In Debian and Debian based distros, there is also a matching pair of directories for modules: modules-available and modules-enabled. They work the same way as the directory structure above: modules-enabled holds the currently enabled and used modules, while modules-available stores all the available ones. Then you just add this line to your nginx.conf file:
include /etc/nginx/modules-enabled/*.conf;
You can do the same for all sorts of configurations. For example, say you want security related configs to live in one directory. Simply make a directory called security-conf (or any name you want), put all the security related configs inside it, and finally include it in the nginx.conf file like so:
include /etc/nginx/security-conf/*.conf;
Remember to change the directory name accordingly if you do not use the same name as mine.
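As an illustration of what could live in such a directory, a file like security-conf/headers.conf (a hypothetical name and contents, not part of any default install) might hold common hardening headers:

```nginx
# /etc/nginx/security-conf/headers.conf (hypothetical example)
# Common hardening headers; adjust these to your site's needs.
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
```

Because the include pulls in every .conf file in the directory, these headers would then apply wherever the include line sits in your configuration.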
We’ll take a look at some default configs to better understand how we can customize and modify our server configurations below:
Default Nginx config:
# /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ## Basic Settings
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ## SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ## Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ## Gzip Settings
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;

    ## Virtual Host Configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
We will discuss how this basic config works and what each directive does:
user - Nginx will run its worker processes on behalf of this user.
worker_processes - This specifies the number of worker processes to be spawned by Nginx (auto spawns one per CPU core).
pid - The file where the process ID of the nginx service is stored.
include - This directive pulls other configuration files into this one so that Nginx loads them as well. (You can see that we also include configurations from other directories at the end of the http block.)
The events block controls how the Nginx service handles connections. The worker_connections setting inside this events block specifies the number of connections allowed per worker process.
You can calculate the maximum number of clients you can serve as:
max clients = worker_processes * worker_connections
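To make the arithmetic concrete, here is a small shell sketch; the figure of 4 workers is an assumption (in practice it is whatever worker_processes auto resolves to on your machine):

```shell
# Illustrative numbers: suppose 'auto' resolves to 4 worker processes,
# and each worker allows the default 768 connections.
worker_processes=4
worker_connections=768
max_clients=$((worker_processes * worker_connections))
echo "$max_clients"   # prints 3072
```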
multi_accept can be set to on or off. When set to off, a worker process accepts one new connection at a time; when set to on, a worker process accepts all pending new connections at once.
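Putting those two settings together, a tuned events block might look like the sketch below; the values here are illustrative assumptions, not recommendations:

```nginx
events {
    # Each worker process may hold up to 1024 simultaneous connections
    worker_connections 1024;
    # Accept all pending connections when a worker wakes up
    multi_accept on;
}
```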
Looking at the http block, there is a lot going on, so we will discuss what each parameter does.
NOTE: Keep in mind that this is just the example config that ships with nginx when you install it. Take a look at the Nginx Documentation for a more detailed explanation of how to configure your server.
In the http block we have sendfile and tcp_nopush both set to on; together they make nginx send the response header and the beginning of a file in one packet on Linux and FreeBSD.
server_tokens must be set to off, because otherwise nginx announces itself, including its version number, on error pages like 404. We don’t want that.
Then we specify which SSL/TLS protocols we accept using the ssl_protocols directive, and most of them are enabled by default here. Next come the logging settings: access_log specifies the path where all access data is logged, and error_log specifies where all error logs are stored.
Since this is the main Nginx config file, we also include all the entries in conf.d and sites-enabled so that everything inside those directories gets loaded as well. This is a good way of modularizing your configuration, especially if you have a lot of separate configs for plugins, modules, or servers. For example, you can create security.conf under the conf.d directory for a separate set of security rules. One important thing to know is that configurations under conf.d and sites-enabled override the main nginx.conf wherever the parameters are identical. I will explain this using a sample server config below:
Sample Server Config:
# /etc/nginx/sites-available/example.conf
server {
    listen 80 default_server;
    root /var/www/html/public;

    # Add index.php to the list if you are using PHP
    index index.php index.html;

    server_name example.com;
    client_max_body_size 50M;

    location / {
        #try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?$query_string;
    }

    # pass PHP scripts to FastCGI server
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        # With php-cgi (or other tcp sockets):
        #fastcgi_pass 127.0.0.1:9000;
    }

    access_log /var/log/nginx/example.com/access.log combined buffer=512k flush=1m;
    error_log /var/log/nginx/example.com/error.log warn;
}
In the above example server config, we have one big server {} block that contains many parameters. Let’s look through them to get an idea of what each one does:
listen tells the server to listen on a specific port, in this case 80.
root specifies the project root folder, where your index.php or index.html lives.
index is clear from the name itself, but don’t forget to add index.php if you use PHP.
server_name is the domain name you associate with this server.
client_max_body_size is the maximum request body size the server will accept.
location blocks are rules you can set for specific paths. In this case we match /, and try_files tells the server to try the given URI (and then the URI as a directory), falling back to /index.php?$query_string if nothing is found. This fallback is basically for PHP applications, which route requests through index.php with the query string passed along.
The lower location block is for PHP code. By default, nginx does not come with any PHP module, so we usually rely on PHP-FPM, which runs PHP processes in the background. We are basically telling nginx to hand PHP requests to the PHP-FPM socket.
Then, in access_log and error_log, we can see a difference from the default nginx.conf: there is a separate directory for example.com under /var/log/nginx/, with its own access.log and error.log files inside. This is done for the sake of modularity, and it also overrides the default config, which means all logs for example.com will be stored under /var/log/nginx/example.com/*.log.
Likewise, you can override any default config for a particular server using a separate config file. But remember that in order to enable this example config, you need to symlink it into the sites-enabled directory, because the main nginx.conf only includes sites-enabled.
Use the following command to symlink your new config file:
sudo ln -s /etc/nginx/sites-available/<your_filename>.conf /etc/nginx/sites-enabled/
Then, you can check whether your configuration is valid using this command:
sudo nginx -t
If it reports that the syntax is ok and the test is successful, you are good to go. You can restart the Nginx service using the following command:
sudo systemctl restart nginx
There, you have now successfully enabled your server.
Load Balancer + Reverse Proxy
If you are hosting a web application, or anything that receives a lot of traffic, and you want to set up multiple servers with a load balancer to distribute the request load for better performance, Nginx can handle this too. You just need to adjust the configuration to your requirements, like this:
Sample Server config with load balancing:
# /etc/nginx/sites-available/load-balancer-example.conf

# Define a list of your server clusters first
upstream clusters {
    server localhost:8080;
    server localhost:8081;
    server localhost:8082;
    server localhost:8083;
}

# This is the normal server block as we see in sites-available
server {
    listen 80 default_server;
    root /var/www/html/public;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    # Then you can set a specific `proxy_pass` in your location block
    location /app {
        proxy_pass http://clusters/app;
    }
}
In the above sample Nginx config, we have an upstream block, which defines the list of backend servers and the ports they listen on. In our case, we have 4 instances of our server, each listening on a different port. These are the servers our load balancer will distribute requests across.
Then, inside our server block, we define a separate endpoint using a location block (in our case, /app) to communicate with our backend servers. By using the proxy_pass directive, we are effectively reverse-proxying the /app endpoint to the services in the upstream block; nginx balances requests across them in round-robin fashion by default.
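As a sketch of how the balancing strategy can be tuned, here is a hypothetical variant of the upstream block above; the least_conn choice and the weight, max_fails, and backup parameters are illustrative assumptions, not part of the original config:

```nginx
# Hypothetical variant of the upstream block above
upstream clusters {
    # Pick the backend with the fewest active connections
    least_conn;
    # 'weight' biases traffic toward stronger machines
    server localhost:8080 weight=2;
    server localhost:8081;
    # Mark a backend as failed after 3 errors within 30s
    server localhost:8082 max_fails=3 fail_timeout=30s;
    # Only used when the other backends are unavailable
    server localhost:8083 backup;
}
```

The server block referencing http://clusters stays unchanged; only the upstream definition decides how requests are spread.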
You can define more complex configurations and reverse proxy/load balancer rules according to your specific needs, but what I have covered above spans the basic functionality of Nginx as a generic web server, load balancer, and reverse proxy.
Conclusion
Thank you for taking the time to read this article; I hope you gained some useful insights for your web server configuration. In future posts I will dive deeper into specific use cases, with a more detailed look at how to configure a load balancer and more. Nginx is a very lightweight, versatile, and useful tool, especially in the development ecosystem, for system admins and developers alike. Learn more about Nginx here.
Written by

Lalrinfela Pachuau
System Administrator for Lailen Consulting Pvt. Ltd