How to Deploy Your Dockerized Web App to a Linux VPS

Table of contents
- 1. Prepare the User and Directory for Your Application
- 2. Install Docker and Docker Compose
- 3. Install Docker Compose (Legacy Standalone Binary - If not using Docker CE plugin)
- 4. Create an SSH Key for Your GitHub Repository
- 5. Inject Environment Variables
- 6. Deploy Your Docker Compose Application
- 7. Configure Your Website for Internet Access (Nginx Reverse Proxy)
- 8. Implement SSL with Certbot (Let's Encrypt)
- 9. Fix the "Welcome to nginx!" Message or Incorrect Site Loading
- Conclusion

Deploying a web application, especially a containerized one, can seem daunting. This guide will walk you through the process of taking your Dockerized web application from your development environment and deploying it onto a Linux Virtual Private Server (VPS). If you haven't set up your VPS yet, follow the guidelines in VPS Setup Essentials for Your Web App.
This article covers everything from preparing your server environment and installing Docker to setting up Nginx as a reverse proxy and securing your application with SSL using Let's Encrypt. By the end of this article, you'll have a clear understanding of each step and why it's crucial for a secure, scalable, and maintainable deployment.
This guide assumes you have:
- A Dockerized web application ready for deployment (e.g., an ASP.NET Core application with a Dockerfile and docker-compose.yml).
- A Linux VPS (Ubuntu is used in the examples).
- Basic familiarity with the Linux command line.
- A registered domain name pointing to your VPS's IP address.
Let's get started!
1. Prepare the User and Directory for Your Application
Security and organization are paramount in a production environment. We will create a dedicated system user and a specific directory structure for your application. This isolation helps prevent unauthorized access to other parts of your system and keeps your deployments organized.
1.1 Create a Dedicated User for Your Website
It's a security best practice to run your web applications under a dedicated, unprivileged user account rather than the root user. This limits the potential damage if your application is compromised.
Let's create a user called mywebsiteuser:
sudo adduser mywebsiteuser
Explanation:
- sudo: Executes the command with superuser privileges.
- adduser: A high-level utility on Debian-based systems (like Ubuntu) for adding a new user. It handles creating the user's home directory, setting up initial permissions, and prompting for a password and other user details.
You'll be prompted to set a password for mywebsiteuser and provide some optional information. Choose a strong password.
1.2 Create a Web Root Directory for Each App
For consistent management and separation of concerns, it's recommended to place application code under a standardized web root directory, commonly /var/www. This makes it easier to locate and manage your deployments.
sudo mkdir -p /var/www/mywebsite
Explanation:
- mkdir: Creates a new directory.
- -p: The parents option ensures that any missing parent directories in the path (/var/www in this case) are created automatically.
- /var/www/mywebsite: The full path where your mywebsite application's files will reside.
1.3 Set Ownership of These Directories
After creating the directory, it's crucial to assign its ownership to the dedicated user you just created. This ensures that only mywebsiteuser (and root) can modify the contents of this directory.
sudo chown -R mywebsiteuser:mywebsiteuser /var/www/mywebsite
Explanation:
- chown: Changes the owner and group of files or directories.
- -R: The recursive option applies the ownership change to all files and subdirectories within /var/www/mywebsite.
- mywebsiteuser:mywebsiteuser: Sets both the owner and the primary group of the directory to mywebsiteuser.
Why this matters: only the dedicated application user (mywebsiteuser) can modify their code. This is a fundamental security principle: the principle of least privilege. If your application process is ever compromised, the attacker will be limited to the permissions of mywebsiteuser, preventing them from easily affecting other parts of your server or other applications.
1.4 Set Permissions for the Directory
While chown sets ownership, chmod sets file and directory permissions. These permissions dictate who can read, write, or execute files.
sudo chmod -R 755 /var/www/mywebsite
Explanation:
- chmod: Changes file mode bits (permissions).
- 755: An octal representation of the permissions:
  - 7 (Owner): Read (4) + Write (2) + Execute (1) = 7. The owner (mywebsiteuser) has full control over files and directories.
  - 5 (Group): Read (4) + Execute (1) = 5. Members of the mywebsiteuser group can read and traverse (execute) directories.
  - 5 (Others): Read (4) + Execute (1) = 5. Everyone else can read and traverse directories.
Why these permissions? This configuration is standard for web content. The owner needs full control to deploy and manage the application. Others (like the web server process) only need read and execute (traverse) permissions to serve static content or execute scripts. Write access for others is generally not needed and poses a security risk.
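To double-check the result, you can list the directory itself. The link count, size, and date will differ on your system, but the mode and owner should look like this:
ls -ld /var/www/mywebsite
# Example output (details will vary):
# drwxr-xr-x 2 mywebsiteuser mywebsiteuser 4096 Jan  1 12:00 /var/www/mywebsite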
1.5 Switch to the Application User When Deploying
When performing operations related to your specific application, such as cloning your Git repository or deploying new code, you should always do so as mywebsiteuser. This maintains the security boundaries you established.
To switch to the mywebsiteuser shell:
sudo -i -u mywebsiteuser
Explanation:
- sudo -i: Simulates an initial login, giving you a shell as the target user with their environment variables.
- -u mywebsiteuser: Specifies mywebsiteuser as the user to switch to.
Once you've switched users, navigate to your application's root directory:
cd /var/www/mywebsite
All subsequent operations that directly interact with your application's files (like git clone and docker-compose up) should be performed as this user and from this directory.
2. Install Docker and Docker Compose
Docker is the cornerstone of this deployment strategy, allowing you to package your application and its dependencies into isolated containers. Docker Compose simplifies the management of multi-container Docker applications.
2.1 Option 1: Install Docker from the Ubuntu Repository (Simple, but potentially older version)
This method uses the Docker package available in Ubuntu's default repositories. It's generally simpler but might provide an older version of Docker.
sudo apt update
sudo apt install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
Explanation:
- sudo apt update: Refreshes the list of available packages from the repositories.
- sudo apt install -y docker.io: Installs the docker.io package. The -y flag automatically answers yes to prompts.
- sudo systemctl enable --now docker: systemctl enable docker configures Docker to start automatically on system boot; --now starts the Docker service immediately without requiring a reboot.
- sudo usermod -aG docker $USER: Adds your currently logged-in user (the user you SSH into the VPS with) to the docker group. This is crucial because it allows your user to run Docker commands without needing sudo every time.
2.2 Option 2: Install Docker CE (Community Edition) from Official Docker Repository (Recommended)
This is the recommended approach for production environments as it provides the latest stable version of Docker CE, along with containerd (a core container runtime) and the Docker Compose CLI plugin.
First, update your package index and install necessary utilities:
sudo apt update
sudo apt install ca-certificates curl gnupg -y
Explanation:
- ca-certificates: Allows web browsers and other programs to check the authenticity of SSL/TLS certificates.
- curl: A command-line tool for transferring data with URLs. Used here to download Docker's GPG key.
- gnupg: GNU Privacy Guard, used for managing cryptographic keys. Needed to verify the authenticity of Docker packages.
Next, add Docker's official GPG key. This key is used to verify the authenticity of packages downloaded from Docker's repository, ensuring they haven't been tampered with.
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Explanation:
- sudo install -m 0755 -d /etc/apt/keyrings: Creates the /etc/apt/keyrings directory with appropriate permissions if it doesn't exist. This is where APT stores cryptographic keys.
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg: Downloads the Docker GPG key.
  - -f: Fail silently on HTTP errors.
  - -s: Silent mode (don't show a progress meter or error messages).
  - -S: Show error messages even with -s.
  - -L: Follow redirects.
- | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg: Pipes the downloaded key to gpg --dearmor, which converts it to a format readable by APT and saves it to the specified file.
- sudo chmod a+r /etc/apt/keyrings/docker.gpg: Sets read permission for all users on the GPG key file.
Now, add the Docker repository to your APT sources:
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
Explanation:
- echo "deb ... stable": Constructs the repository entry string.
  - $(dpkg --print-architecture): Dynamically gets your system's architecture (e.g., amd64).
  - signed-by=/etc/apt/keyrings/docker.gpg: Specifies the GPG key used to sign packages from this repository.
  - $(. /etc/os-release && echo "$VERSION_CODENAME"): Dynamically gets your Ubuntu version's codename (e.g., jammy for Ubuntu 22.04).
  - stable: Specifies that you want the stable release channel.
- | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null: Pipes the output of echo to tee, which writes to both standard output and the specified file (/etc/apt/sources.list.d/docker.list). Using sudo tee allows writing to a system file, and > /dev/null suppresses output to the console.
- sudo apt update: Updates your package index again to include packages from the newly added Docker repository.
Finally, install Docker Engine, containerd, and Docker Compose (CLI plugin):
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
Explanation:
- docker-ce: The Docker Engine Community Edition.
- docker-ce-cli: The Docker command-line interface.
- containerd.io: A high-level container runtime that Docker Engine uses internally.
- docker-buildx-plugin: A Docker CLI plugin for extended build capabilities (e.g., building multi-platform images).
- docker-compose-plugin: The new, integrated Docker Compose CLI plugin (accessible via docker compose, without a hyphen).
Add your user to the docker group (again, in case you purged it or if this is a fresh setup):
sudo usermod -aG docker ${USER}
Important Note: The $USER (or ${USER}) variable expands to the username of the currently logged-in user. This command adds your current SSH user to the docker group.
To apply group membership changes, you MUST log out and log back in to your VPS. This reloads your user's group memberships. Simply running new commands will not reflect the change until a new session is started.
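If you'd rather not end your SSH session straight away, on most systems you can also pick up the new group membership in the current shell with the following command, though a full log-out/log-in remains the most reliable option:
newgrp docker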
2.3 Test Docker Installation
After logging back in, verify that Docker is correctly installed and that your user can run Docker commands without sudo:
docker run hello-world
3. Install Docker Compose (Legacy Standalone Binary - If not using Docker CE plugin)
(Note: If you followed Option 2 and installed docker-compose-plugin, you can skip this section, as docker compose is already available as a Docker CLI subcommand. This section is for situations where you installed Docker via Option 1 or prefer the standalone docker-compose binary.)
While the docker-compose-plugin is the modern approach, some users might still prefer or require the standalone docker-compose binary.
First, define where the Docker Compose binary will be stored:
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
Explanation:
- DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}: Sets the DOCKER_CONFIG environment variable. If DOCKER_CONFIG is already set, it uses its value; otherwise, it defaults to $HOME/.docker. This is a common location for Docker CLI plugins.
- mkdir -p $DOCKER_CONFIG/cli-plugins: Creates the cli-plugins directory within your DOCKER_CONFIG path.
Download the Docker Compose binary:
curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
Explanation:
- curl -SL ...: Downloads the latest stable Docker Compose binary for Linux (x86_64 architecture).
  - -S: Show error messages.
  - -L: Follow redirects.
- -o $DOCKER_CONFIG/cli-plugins/docker-compose: Specifies the output file path and name for the downloaded binary.
3.1 Apply Executable Permissions
The downloaded binary needs to be marked as executable to be run as a command.
chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
Explanation:
- chmod +x: Adds execute permission to the specified file.
3.2 Verify the Installation
Confirm that Docker Compose is correctly installed and accessible:
docker compose version # If using the plugin (Option 2)
# OR
docker-compose version # If using the standalone binary (Option 3)
You should see the version number of Docker Compose printed to your console. Note that downloading the binary into $DOCKER_CONFIG/cli-plugins (as above) exposes it as the docker compose subcommand for your user; if you want a standalone docker-compose command instead, place the binary somewhere on your PATH (e.g., /usr/local/bin).
4. Create an SSH Key for Your GitHub Repository
When deploying your application, you'll typically pull your code from a Git repository (like GitHub, GitLab, or Bitbucket). Using SSH keys for authentication is more secure and convenient than using HTTPS with a personal access token, especially for automated deployments.
4.1 Change to Your Specific Website User
Crucially, generate the SSH key as mywebsiteuser (or whatever dedicated user you created), NOT as the root user or your primary SSH user. This ensures that the key is associated with the correct user and adheres to the principle of least privilege.
sudo -i -u mywebsiteuser
You are now operating as mywebsiteuser. Any files created will be owned by this user.
4.2 Create an SSH Key
ssh-keygen -t ed25519 -C "mywebsiteuser_github_key"
Explanation:
- ssh-keygen: The command to generate SSH key pairs.
- -t ed25519: Specifies the type of key to create. ed25519 is a modern, highly secure, and efficient elliptic curve algorithm, generally preferred over RSA for new keys.
- -C "mywebsiteuser_github_key": Adds a comment to the public key. This is helpful for identifying the key's purpose when you add it to GitHub or other services.
You will be prompted for:
- File in which to save the key: Press Enter to accept the default location (/home/mywebsiteuser/.ssh/id_ed25519).
- Passphrase: Use a passphrase if possible for better security. A passphrase adds an extra layer of protection, requiring you to enter it before the private key can be used. This protects your key if someone gains unauthorized access to your VPS. If you choose not to use one (e.g., for automated deployments where passphrase prompts are problematic), understand the increased risk.
4.3 Verify Key Generation and Permissions
After generation, verify the keys exist and have correct permissions:
ls -al ~/.ssh/
You should see two files: id_ed25519 (your private key) and id_ed25519.pub (your public key).
Permission Check:
- The private key (id_ed25519) must have permissions 600 (read/write for owner only). This is critical; if others can read your private key, your security is compromised.
- The public key (id_ed25519.pub) should have 644 (read/write for owner, read for group and others).
- The .ssh directory itself should have 700 (read/write/execute for owner only).
If permissions are not correct, fix them immediately:
chmod 600 ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_ed25519.pub
chmod 700 ~/.ssh/ # Ensure the .ssh directory itself is secure
4.4 Copy the Public Key
You need to add your public key to GitHub (or your Git hosting service). Display the public key's content:
cat ~/.ssh/id_ed25519.pub
Copy the entire output, which starts with ssh-ed25519 and ends with the comment you provided.
4.5 Add the Public Key to GitHub
- Log in to your GitHub account (the personal account that owns the repository, or the organization that manages it).
- Navigate to Settings -> SSH and GPG keys -> New SSH key.
- Title: Give it a meaningful name, like "VPS mywebsiteuser key" or "Production server mywebsite user".
- Key: Paste the public key you copied earlier into this field.
- Click "Add SSH key".
4.6 Clone the Repository Using the SSH URL
Now, with the SSH key set up and added to GitHub, you can securely clone your repository. Remember to do this as mywebsiteuser.
cd /var/www/mywebsite # Navigate to your app's web root directory
git clone git@github.com:yourgithubuser/yourrepository.git .
Explanation:
- git@github.com:yourgithubuser/yourrepository.git: The SSH URL for your GitHub repository. You can find this URL on your repository's page on GitHub (click the "Code" button and select "SSH").
- .: The dot at the end instructs Git to clone the repository into the current directory (/var/www/mywebsite), rather than creating a new subdirectory.
5. Inject Environment Variables
Docker Compose allows you to inject environment variables into your containers, which is a standard way to manage configuration that varies between environments (e.g., database connection strings, API keys). The .env file is a convenient way to manage these variables locally or on your server.
5.1 Create a .env File
Navigate to your project's root directory on the VPS (/var/www/mywebsite):
cd /var/www/mywebsite
touch .env
Explanation:
- touch .env: Creates an empty file named .env. This file should be placed at the same level as your docker-compose.yml file.
5.2 Add Your Variables to .env
Open the .env file using a text editor (e.g., nano or vim):
nano .env
Inside the .env file, add your environment variables in KEY=VALUE format, one per line. For example:
ASPNETCORE_ENVIRONMENT=Production
CONNECTION_STRINGS__DEFAULTCONNECTION=Server=my_db_server;Database=mydb;User Id=myuser;Password=mypassword;
API_KEY=your_secret_api_key_here
Security Note: The .env file on your VPS will contain sensitive information. Ensure its permissions are strict (e.g., chmod 600 .env if it's not already, though touch usually creates it with user-only write access by default). Never commit your .env file to your Git repository! Add it to your .gitignore file.
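For reference, here is a minimal, hypothetical docker-compose.yml fragment showing one common way to feed these variables into your container via env_file; the service name and build context are placeholders and your real file will differ:
# docker-compose.yml (illustrative fragment - adjust to your project)
services:
  webapp:
    build: .        # Build the image from the Dockerfile in this directory
    env_file:
      - .env        # Pass the KEY=VALUE pairs from .env into the container's environment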
6. Deploy Your Docker Compose Application
With Docker installed, the repository cloned, and environment variables configured, you're ready to bring up your application.
6.1 Navigate to Your Project Directory
cd /var/www/mywebsite
6.2 Build and Run with Docker Compose
This command will read your docker-compose.yml file, build any necessary Docker images, and start your services.
docker-compose up -d --build
Explanation of the command:
- docker-compose up: Starts the services defined in your docker-compose.yml file. Docker Compose will automatically create networks, volumes, and containers as specified.
- -d (detached mode): This is crucial for production deployments. It runs your containers in the background, freeing up your terminal immediately. Without -d, your terminal would be attached to the container logs, and closing the terminal would stop the containers.
- --build: Forces Docker Compose to rebuild your images before starting the containers. This is important if you've made changes to your Dockerfile or your application's code and want to ensure the latest version is deployed. If you're using pre-built images from a container registry (e.g., Docker Hub, Azure Container Registry), you might omit this flag and use docker-compose pull first to download the latest images (see the example after this list). For local builds from source code, --build is essential.
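For instance, a registry-based redeploy could look like this (assuming your docker-compose.yml references prebuilt images rather than building locally):
docker-compose pull    # Download the latest images referenced in docker-compose.yml
docker-compose up -d   # Recreate containers from the pulled images without a local build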
6.3 Verify Your Containers
After running docker-compose up -d --build, you should verify that your containers are running as expected.
docker ps
Explanation:
- docker ps: Lists all currently running Docker containers.
You should see your web application container (e.g., mywebsite-webapp) and any other services (like a database container) listed with a STATUS of Up and their PORTS mapping. For example:
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                  NAMES
<id>           mywebsite-webapp   "dotnet Mywebsite.dll"   2 minutes ago   Up 2 minutes   0.0.0.0:8080->80/tcp   mywebsite_webapp_1
Pay close attention to the PORTS column. It shows which port on your VPS (e.g., 8080) is mapped to which port inside your container (e.g., 80). Your Nginx reverse proxy will forward requests to the VPS host port.
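That 0.0.0.0:8080->80/tcp mapping comes from the ports section of your Compose file. As a hypothetical example (your service name and ports may differ):
services:
  webapp:
    ports:
      - "8080:80"   # Host port 8080 on the VPS forwards to port 80 inside the container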
6.4 Check Logs (Optional but Recommended for Debugging)
To ensure your application started successfully and to debug any issues, check the container logs:
docker-compose logs -f webapp
Explanation:
- docker-compose logs: Displays the logs from your services.
- -f (follow): Streams new logs in real time, similar to tail -f. This is extremely useful for monitoring startup or diagnosing runtime errors.
- webapp: Replace webapp with the service name of your web application as defined in your docker-compose.yml file.
Press Ctrl+C to exit the log stream. Look for messages indicating your application has started listening on its internal port (e.g., "Now listening on: http://[::]:80").
7. Configure Your Website for Internet Access (Nginx Reverse Proxy)
Even though your Docker container is running, it's typically only accessible on the VPS itself (e.g., via localhost:8080). To make your application available to the internet via your domain name, you need a reverse proxy. Nginx is a popular, high-performance choice for this role. It will sit in front of your Dockerized application, handle incoming web requests, and forward them to your running Docker container.
7.1 Install Nginx
If Nginx is not already installed:
sudo apt update
sudo apt install nginx -y
sudo systemctl enable --now nginx
Explanation:
- sudo systemctl enable --now nginx: Enables Nginx to start on boot and starts it immediately.
7.2 Create Nginx Configuration File
Nginx configurations for individual websites are usually placed in /etc/nginx/sites-available/. You'll create a new file for your website.
sudo nano /etc/nginx/sites-available/mywebsite
Paste the following configuration. Remember to replace your_domain.com with your actual domain name, and adjust the proxy_pass port if your Dockerized application isn't published on host port 8080.
# /etc/nginx/sites-available/mywebsite
server {
    listen 80;
    listen [::]:80; # Listen on IPv6 as well
    server_name your_domain.com www.your_domain.com; # Replace with your actual domain(s)

    location / {
        # Forward requests to your Dockerized application
        proxy_pass http://localhost:8080; # Or the IP of your Docker host if not localhost
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme; # Crucial for ASP.NET Core apps behind a proxy
        proxy_cache_bypass $http_upgrade;

        # Disable buffering for WebSockets/Server-Sent Events if your app uses them
        # proxy_buffering off;
    }

    # Optional: Add error pages or other configurations as needed
    # error_page 500 502 503 504 /50x.html;
    # location = /50x.html {
    #     root /usr/share/nginx/html;
    # }
}
Explanation of Nginx Directives:
- listen 80;: Nginx will listen for incoming HTTP requests on port 80.
- listen [::]:80;: Similar to the above, but for IPv6 traffic.
- server_name your_domain.com www.your_domain.com;: Tells Nginx which domain names this server block should respond to. It's crucial for Nginx to route requests correctly, especially when hosting multiple sites on one server.
- location / { ... }: Defines how Nginx handles requests for the root URL (/) and its subpaths.
- proxy_pass http://localhost:8080;: This is the core of the reverse proxy. It tells Nginx to forward all requests received by this server block to your Dockerized application, which is running on localhost (the VPS itself) at port 8080 (the host port you mapped in docker-compose.yml).
- proxy_http_version 1.1;: Specifies the HTTP protocol version for the proxy connection.
- proxy_set_header ...: These directives pass important client information (like the original host, real IP address, and protocol scheme) from Nginx to your backend application. This is vital for applications like ASP.NET Core that need to know the original request's scheme (HTTP vs. HTTPS) or the client's actual IP address, since they would otherwise see Nginx as the direct client.
7.3 Create a Symbolic Link
To enable your new Nginx configuration, you need to create a symbolic link from sites-available (where configurations are stored) to sites-enabled (where Nginx looks for active configurations).
sudo ln -s /etc/nginx/sites-available/mywebsite /etc/nginx/sites-enabled/
Explanation:
- ln -s: Creates a symbolic link (a shortcut).
- /etc/nginx/sites-available/mywebsite: The source file (your configuration).
- /etc/nginx/sites-enabled/: The destination directory where the link will be created.
7.4 Test Nginx Configuration
Before restarting Nginx, always test your configuration for syntax errors. This prevents Nginx from failing to start due to a simple typo.
sudo nginx -t
You should see output similar to: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok and nginx: configuration file /etc/nginx/nginx.conf test is successful. If there are any errors, fix them in your configuration file before proceeding.
7.5 Restart Nginx
Apply the new configuration by restarting Nginx. A restart is needed for Nginx to load the new server block.
sudo systemctl restart nginx
If Nginx fails to restart or you encounter issues, use sudo systemctl status nginx to view the service's status and error logs for debugging.
7.6 Set Your Domain DNS to Point to Your VPS
This is done outside of your VPS, through your domain name registrar or DNS provider.
- Open your domain settings from your domain provider. Log in to your domain registrar's website (e.g., GoDaddy, Namecheap, Cloudflare).
- Open the DNS manager. Locate the DNS management section for your domain.
- Adjust the records to point to your VPS:
  - You'll typically create an A record for your main domain (e.g., your_domain.com) pointing to your VPS's public IPv4 address.
  - You might also create a CNAME record for www.your_domain.com pointing to your_domain.com, or another A record directly to the IP (see the example below).
- Save the changes. DNS changes can take some time to propagate across the internet (anywhere from a few minutes to 48 hours), although propagation is typically much faster.
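For illustration, the finished records might look like this (hypothetical values; 203.0.113.10 stands in for your VPS's public IPv4 address):
Type     Name   Value
A        @      203.0.113.10
CNAME    www    your_domain.com.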
Once DNS has propagated, accessing http://your_domain.com in a browser should now hit your Nginx server, which then forwards the request to your Dockerized application. You might still see "Welcome to nginx!" or experience issues if the default Nginx site is interfering (see troubleshooting below).
8. Implement SSL with Certbot (Let's Encrypt)
Securing your website with HTTPS (SSL/TLS) is non-negotiable for modern web applications. It encrypts communication between your users and your server, protects data integrity, and is a strong ranking signal for search engines. Let's Encrypt provides free, automated SSL certificates, and Certbot is the tool to manage them.
8.1 Install Certbot
Certbot is often installed via Snap, a universal packaging system for Linux.
sudo snap install core
sudo snap refresh core # Ensures Snap core is up to date
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot # Create a symbolic link for easy access
Explanation:
- snap install core: Installs the Snap "core" components, which are foundational for running other snaps.
- snap refresh core: Updates the core snap.
- snap install --classic certbot: Installs Certbot. The --classic flag is needed because Certbot requires broad system access.
- sudo ln -s /snap/bin/certbot /usr/bin/certbot: Creates a symbolic link so you can run certbot directly from your PATH without needing to specify /snap/bin/certbot.
8.2 Obtain and Install SSL Certificate
Certbot has an Nginx plugin that can automatically configure Nginx for SSL.
sudo certbot --nginx
Explanation:
- certbot --nginx: Tells Certbot to use its Nginx plugin to configure SSL.
Follow the prompts:
- Certbot will ask for your email address for urgent renewal notices and security warnings.
- Agree to the Let's Encrypt Terms of Service.
- Choose whether to share your email with the Electronic Frontier Foundation (EFF).
- Certbot will detect your Nginx configurations and list the domains it found. Select the numbers corresponding to your_domain.com and www.your_domain.com.
- It will ask if you want to redirect HTTP traffic to HTTPS. Choose 2 (Redirect). This is best practice, as it ensures all traffic uses the secure HTTPS connection.
Certbot will then automatically:
- Obtain the SSL certificates from Let's Encrypt.
- Modify your /etc/nginx/sites-available/mywebsite file to include HTTPS listeners (port 443), redirect HTTP traffic (port 80) to HTTPS, and set up the correct ssl_certificate and ssl_certificate_key paths.
- Configure automatic certificate renewal. Let's Encrypt certificates are valid for 90 days, and Certbot sets up a cron job or systemd timer to renew them automatically before they expire.
After this, your website should be accessible via HTTPS! Try accessing https://your_domain.com in your browser.
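You can also verify that automatic renewal is wired up correctly with a dry run, which simulates renewal without issuing a new certificate:
sudo certbot renew --dry-run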
9. Fix the "Welcome to nginx!" Message or Incorrect Site Loading
This typically happens if the default Nginx configuration file is taking precedence over your custom site configuration. Nginx serves the "Welcome to Nginx!" page from its default site.
9.1 Inspect sites-enabled
First, let's see which Nginx site configurations are actually active:
ls -l /etc/nginx/sites-enabled/
You will likely see something like this:
default -> /etc/nginx/sites-available/default
mywebsite -> /etc/nginx/sites-available/mywebsite
If both default and mywebsite are symlinked here, Nginx has rules for determining which server block to use. If your custom server_name isn't an exact match, or if the default_server directive is used in the default config, Nginx can pick the wrong one.
9.2 Disable the Default Nginx Site (Recommended Approach)
This is the cleanest way to ensure your custom site configuration takes precedence.
sudo unlink /etc/nginx/sites-enabled/default
Explanation:
- unlink: Removes a symbolic link. This command effectively disables the default Nginx site without deleting the original configuration file (/etc/nginx/sites-available/default).
9.3 Re-verify and Correct Your mywebsite Nginx Configuration
It's crucial that your mywebsite configuration (especially the server_name directive) is perfectly accurate for your domain(s).
sudo nano /etc/nginx/sites-available/mywebsite
Ensure the server_name directives are exact matches for your domain(s). After Certbot runs, your file should look something like this, with both HTTP and HTTPS blocks:
# /etc/nginx/sites-available/mywebsite

# HTTP block - will be redirected by Certbot to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name mywebsite.id www.mywebsite.id; # <-- MUST MATCH YOUR DOMAIN EXACTLY

    # Certbot will inject the redirect here after you run it for this domain,
    # along with the location block for .well-known/acme-challenge.
    # For example, it adds:
    # return 301 https://$host$request_uri;
    # (Other Certbot-specific configurations will also be here)
}

# HTTPS block - Certbot creates/modifies this
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mywebsite.id www.mywebsite.id; # <-- MUST MATCH YOUR DOMAIN EXACTLY

    ssl_certificate /etc/letsencrypt/live/mywebsite.id/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mywebsite.id/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    include /etc/letsencrypt/ssl-dhparams.conf;

    location / {
        proxy_pass http://localhost:8080; # Points to your mywebsite.web Docker container
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme; # Crucial for ASP.NET Core apps behind a proxy
        proxy_cache_bypass $http_upgrade;
    }
}
Make sure you have both the http:// and https:// server blocks for your domain. Certbot creates the HTTPS one and modifies the HTTP one to redirect.
9.4 Test Nginx Configuration Again
sudo nginx -t
This command checks for syntax errors in all your Nginx configuration files. If there are any, it will tell you where they are. Fix them before proceeding.
9.5 Reload Nginx
After making changes to Nginx configuration files, use reload to apply them without interrupting service for other domains (if you have any). If reload doesn't work, restart is a more forceful option.
sudo systemctl reload nginx
After These Steps:
- Clear your browser cache: Sometimes browsers cache redirects or old content. A hard refresh (Ctrl+F5 or Cmd+Shift+R) or clearing your browser's cache can help ensure you're seeing the latest content from your server.
- Try accessing your website again. It should now load correctly via HTTPS.
Conclusion
You have successfully deployed your Dockerized web application to a Linux VPS! By following these steps, you've established a secure, organized, and scalable environment for your application. This setup leverages Docker for containerization, Nginx for efficient reverse proxying, and Let's Encrypt for essential SSL security, all adhering to industry best practices for a robust deployment.