Step-by-Step Guide to Deploying a Two-Tier Web App with Docker

I have forked an application developed in Python from my college assignment; feel free to use it. The link to the application is here
- This repo has stable code that runs without any hiccups; for this walkthrough, I am starting from scratch.
Let's build the Docker images from a Dockerfile. We will build the database image first, because the web container depends on the DB container being up when it starts.
Here is the snapshot of the Dockerfile and init file used:
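Since the snapshot above is an image, here is a minimal sketch of what such a Dockerfile might look like, assuming the official mysql base image and an init file named mysql.sql (the one referenced later in this post); the actual repo may differ:

```Dockerfile
# Hypothetical Dockerfile for the DB image (the file name Dockerfile_db used
# below is an assumption, not taken from the repo).
FROM mysql:8.0

# The official mysql image automatically executes any .sql file placed in
# /docker-entrypoint-initdb.d/ the first time the database is initialized.
COPY mysql.sql /docker-entrypoint-initdb.d/
```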
To build an image, we use docker build with various options or flags.
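For example, the DB image build might look like this (the tag and file name are illustrative; -t names the image and -f points docker at a non-default Dockerfile):

```bash
# Build the DB image from a hypothetical Dockerfile_db in the current directory.
docker build -t db:latest -f Dockerfile_db .
```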
If the image build is successful, you should see something like this while running docker images
- Time to build the web/app container now. Think of building the DB and web containers as building your servers, or installing packages on a server, to support the DB and web/application services.
I wonder why I did not use the -f flag here :) Docker looks for a file named Dockerfile if the -f flag is not provided. So, if your file is named Dockerfile (the default name), you don't need to specify -f.
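So the web/app image build can simply be:

```bash
# No -f needed: docker defaults to ./Dockerfile. The app:latest tag is the
# one referenced later in the post.
docker build -t app:latest .
```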
Running Containers on a Custom Network
For this instance, I am running Docker containers in a custom network of 10.100.100.0/24
- I did this to have automatic DNS resolution.
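A user-defined bridge network with that subnet can be created like this (mynetwork is the name used throughout the rest of the post):

```bash
# User-defined networks provide automatic DNS resolution between containers.
docker network create --subnet 10.100.100.0/24 mynetwork
```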
- Running a container with docker run is straightforward, but specific requirements often call for various flags. Let's start by running the DB container, naming it db, and setting the MySQL root password so we can connect.
Some images and applications require environment variables for configuration, eliminating the need for hard-coded values. Here is an example of running the DB container without environment variables first.
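A sketch of that first run, using the image and network created earlier:

```bash
# Run the DB container detached, on the custom network, with no env vars set.
docker run -d --name db --network mynetwork db:latest
```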
The --name flag assigns a name to the container, while -d runs it in detached mode (in the background). To connect the container to the previously created network (mynetwork), we use the --network flag. db:latest refers to the image name (db), with latest as the tag specifying the version.
Moving forward, docker logs <container_name/container_id> will help us debug half of our container startup issues. If you check the output of the docker logs command in the picture above, you can see the error related to the missing environment variable.
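For instance, checking the DB container's logs surfaces the failure (the message below is paraphrased from the official mysql image's startup check):

```bash
docker logs db
# error: database is uninitialized and password option is not specified
# You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD
# and MYSQL_RANDOM_ROOT_PASSWORD
```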
Let's try starting the container with MYSQL_ROOT_PASSWORD set. First, we need to remove the failed container before we can start a new one with the same name.
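A sketch of both steps (the password value is illustrative):

```bash
# Remove the failed container; names must be unique on a host.
docker rm -f db

# Re-run with the root password supplied as an environment variable.
docker run -d --name db --network mynetwork \
  -e MYSQL_ROOT_PASSWORD=password db:latest
```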
Our container is up and ready. Now we need a name or IP address to connect to the DB server; the name of our container is db.
Let's verify that our database has loaded all the entries from our mysql.sql file. For this example, we will run a one-time container just to connect to the DB server and pass it our SQL query.
--rm runs a container and removes it once it stops. First, we will check DNS name resolution; since we created this container on the same network as our DB container, we will use an alpine container.
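The DNS check might look like this (alpine's built-in busybox nslookup is enough to resolve the name):

```bash
# One-shot container on the same network; --rm deletes it when it exits.
docker run --rm --network mynetwork alpine nslookup db
```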
Now, we will confirm that the data is loaded in the DB server.
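A sketch of the one-time query container; SHOW DATABASES here is a stand-in for whatever query fits your mysql.sql schema, and the password value is illustrative:

```bash
# Run the mysql client from a throwaway container and query the server by name.
docker run --rm --network mynetwork mysql:8.0 \
  mysql -h db -uroot -ppassword -e "SHOW DATABASES;"
```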
Now, let's run our web/app container. First, let's check the image we created earlier; docker images lists all images on the system.
Let us run the web-app container using the app Docker image.
We will name the container blue and expose port 8080 from the container, meaning the container will listen on this port. Then, we will map container port 8080 to port 8081 on the host. This allows access to the container's service via localhost:8081 on the host, while the container itself listens on port 8080.
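Putting that together, the run command might look like this; the environment variables mirror the pink example shown further down, and their values are illustrative:

```bash
# Host port 8081 -> container port 8080; DB connection details passed as env vars.
docker run -d --name blue -p 8081:8080 \
  -e DBHOST=db -e DBPORT=3306 -e DBPWD=password -e APP_COLOR=blue \
  --network mynetwork app:latest
```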
After mapping the port and exposing it, we can access the service by browsing to localhost:8081 in our browser
Let us check the DB connection by retrieving data from the database server.
Now let's change the background color to pink, as our application supports multiple colors via an environment variable:
docker run -d --name pink -p 8082:8080 -e DBHOST=db -e DBPORT=3306 -e DBPWD=password -e APP_COLOR=pink --network mynetwork app:latest
Let us see the output in the browser on both exposed ports, 8081 and 8082.
Let us see the list of our running containers.
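The command is simply:

```bash
# docker ps lists running containers; add -a to include stopped ones too.
docker ps
```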
As we can see, the blue and pink containers are our frontends, while the DB container is our backend.
Let's check container-to-container communication. We'll demonstrate this by using the container names, which works because we created a custom Docker network. Since this network is isolated from the host machine's network, the host cannot resolve the container names. To put it simply, the Docker network is separate from the host network, allowing containers to communicate with each other by name, but not allowing the host to reach them by container name. We will also learn how to get an interactive terminal into the containers: docker exec -it <container_name/id> <shell (e.g. bash)>
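A sketch of getting a shell in the blue container and testing name resolution from inside it (this assumes the image ships bash and ping; fall back to sh or getent hosts if it doesn't):

```bash
# Open an interactive shell inside the running blue container.
docker exec -it blue bash

# From inside the container, the DB server resolves by its container name:
ping -c 3 db
```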
Finally, we learned how to host a two-tier web application using Docker on a Docker host machine. I will wrap this up with the exec command, which will be handy for troubleshooting.
The docker exec command is used to run a command in a running container; whatever you execute runs within that container's context.
Before wrapping up, we will also learn how to pass critical information, like a password, as an environment variable from the host machine. We will re-run the container as lime.
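One way to do this, sketched below: export the variable on the host, then pass -e DBPWD with no value, which tells Docker to copy the value from the host environment instead of embedding it in the command line. The host port 8083 and the password value are illustrative:

```bash
# Set the secret on the host (in real life, from a secrets manager or vault).
export DBPWD=password

# -e DBPWD with no "=value" copies the variable from the host environment.
docker run -d --name lime -p 8083:8080 \
  -e DBHOST=db -e DBPORT=3306 -e DBPWD -e APP_COLOR=lime \
  --network mynetwork app:latest
```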
Finally, we learned how to host a two-tier application and externalize environment variables like DBPWD, ensuring that sensitive information, such as passwords, is kept separate from the application code. In the next one, we will run a similar configuration using docker-compose.
Don’t forget to check my video demonstration at https://www.youtube.com/watch?v=T5zQEhA272M
Written by Saurav Chapagain
I am a Cybersecurity and Cloud Support Professional with over 9 years of experience in securing IT infrastructure, managing cloud environments, and ensuring compliance with industry standards. My expertise spans security operations, incident response, vulnerability management, and cloud infrastructure management across AWS and Azure platforms. Throughout my career, I've successfully:
- Reduced unauthorized access attempts by 40% through IAM best practices and security hardening.
- Improved incident response times by 30% by implementing automated SIEM alert triaging systems.
- Conducted security audits and vulnerability assessments to ensure compliance with HIPAA, HiTrust, and SOC 2 standards.
- Managed hybrid cloud environments, optimizing security policies and reducing attack surfaces by 50%.

I hold two Post Graduate Certificates in Cloud Architecture & Administration (Seneca Polytechnic) and Cybersecurity (Canadore College), along with certifications such as ISC2 Cybersecurity, CompTIA Security+, and Red Hat Certified System Administrator (RHCSA). My technical skills include expertise in tools like Microsoft Sentinel, Splunk, Palo Alto, and scripting with PowerShell and Bash. I am passionate about leveraging my skills to protect organizations from cyber threats and ensure the integrity of their systems and data. I thrive in collaborative environments, working with cross-functional teams to deliver secure and reliable IT solutions. Let's connect and discuss how I can contribute to your organization's IT operation and cybersecurity goals!