Everything You Need to Know to Start Using Docker

What is Docker
Consider creating an application that functions flawlessly on your laptop but malfunctions when sent to another person. Perhaps their environment is lacking certain packages, or they are using a different operating system. This is a typical developer headache.
The solution to this issue is Docker.
Fundamentally, Docker is a platform that enables you to bundle your application with all of its requirements (code, libraries, and dependencies) into a container. This container will always operate in the same way, regardless of where it runs.
What is a Container
Consider a container as a small, independent box that contains your application and all of its necessary components. Any server, developer, or cloud provider can receive this package and it will just work.
Docker packages everything an application needs to run into a single container: the application code, its dependencies, the packages and libraries it uses, and the runtime and environment configuration. This container can be easily shared, and it will work just as expected.
Why is Docker so useful
Imagine you're building software with a team. There are multiple members, and the software has several dependencies, each with its own configuration. For the sake of simplicity, let’s say your software depends on three things: Python, PostgreSQL, and Redis.
Now, every team member has to install the correct version of all three dependencies on their local machines and set up the proper configurations. The installation process varies across operating systems, so each developer has to follow a different guide or documentation based on their OS.
Not to mention, installing and configuring these dependencies usually involves multiple steps, which increases the chances of something going wrong. Often, multiple services also need to be running simultaneously on a developer’s machine for development to work properly.
Docker solves many of these problems by providing a container that includes the required service (like Redis), along with its configuration and dependencies. So, you don’t have to install and configure each service manually.
Instead, you simply run a Docker command that pulls the required container and starts it on your local machine inside an isolated environment. The best part? The command is the same, regardless of your operating system or which dependency you’re installing.
For example, if your software depends on five services, you can run the same kind of Docker command for each one:
docker run postgres
docker run redis
# and so on
Difference Between Virtual Machine and Docker
Before Docker became popular, Virtual Machines (VMs) were the standard way to run applications in isolated environments. But while VMs and Docker containers might seem similar at first glance, they are quite different under the hood.
Here’s a simple way to think about it:
A Virtual Machine emulates an entire physical computer, including its operating system (OS).
A Docker container shares the host machine’s operating system but keeps the application and its dependencies isolated.
Let’s break it down:
| Feature | Virtual Machine | Docker Container |
| --- | --- | --- |
| Isolation | Full machine (includes OS) | Process-level isolation (shares OS) |
| Startup Time | Comparatively longer | Comparatively shorter |
| Resource Usage | Heavy | Lightweight |
| Performance | Slightly slower (because of hypervisor) | Faster |
| Portability | Less portable (large image sizes) | Highly portable (small image sizes) |
| Management | More complex (needs hypervisor software) | Simpler (Docker CLI/Compose/Kubernetes) |
With a VM, you would install a full operating system (like Ubuntu), then install Python, install PostgreSQL, install your app, and configure everything, which takes a lot of time and disk space. With Docker, you just pull small images that have only what you need. Your app is up and running in seconds, using fewer resources.
How Docker works on Linux and Mac
Docker was originally created to run on Linux, because it relies on features of the Linux kernel to isolate applications and manage resources. In Linux, Docker containers can directly share the host's kernel, making them fast and lightweight.
But here’s the catch:
Most popular Docker images (like those for PostgreSQL, Redis, Nginx, etc.) are built for Linux. So what happens when you want to run a Linux-based container on a Windows or Mac machine?
You can’t simply run a Linux container directly on a Windows or Mac OS, because the Linux container expects to communicate with a Linux kernel, not a Windows or Mac kernel.
This is where Docker Desktop comes in.
Docker Desktop solves this problem by running a tiny, lightweight virtual machine (VM) behind the scenes.
On Windows, it uses a LinuxKit-based VM through Hyper-V or WSL2 (Windows Subsystem for Linux 2).
On Mac, it runs a similar lightweight Linux VM using Apple Hypervisor or QEMU under the hood.
When you run a Docker container on Windows or Mac, you’re actually running it inside this hidden Linux VM.
From your point of view as a developer, it feels exactly the same as running Docker natively — but technically, there's a very thin VM translating Linux system calls in the background.
Installing Docker
You can follow the instructions provided by Docker on its website for installing Docker according to your OS and hardware. I am using Linux Mint, so I will be installing the Docker Engine, the CLI client, and the Docker Compose plugin. I will be using Docker through the command line only, not Docker Desktop.
If you are on Windows or Mac, you can simply install Docker Desktop, and it will install all the requirements along with it.
What is an Image
Now that we have successfully installed Docker, we can start using it. But first, we need to understand what images are.
Docker images are templates that contain instructions on how to create and run a container for an application. We can create our own Docker images for our own applications, or we can pull images that others have created from public repositories like Docker Hub.
An image contains everything needed to run an application: the application’s source code, the environment configuration, the operating system layer (usually Linux), and all the dependencies required to start the container.
Difference Between Image and Container
You might have noticed that images sound a lot like containers — and that’s because a container is just a running instance of an image. When you pull an image from a repository and run it, Docker creates a container using the instructions provided in the image. You can even create multiple containers from the same image, each with its own isolated data and state.
| Feature | Docker Image | Docker Container |
| --- | --- | --- |
| Definition | A snapshot or blueprint of an application and its environment | A running instance of a Docker image |
| State | Static (does not change when executed) | Dynamic (can be started, stopped, modified) |
| Role | Template used to create containers | Actual running process created from an image |
| Persistence | Read-only | Read-write (can store data and logs) |
| Usage | Build once and share | Create, run, and manage instances from the image |
| Lifecycle | Exists as a file or set of files in a registry | Has a lifecycle: create → run → stop → remove |
| Analogy | Like a class in programming | Like an object (instance) created from the class |
For example, if the image is for a JavaScript application, it would already have Node.js and npm installed, because they’re required to run the app. You can also set up environment variables inside the Docker image, create directories in the operating system layer, and install any other packages your app needs. In short, you can configure everything inside the image, so that when you create a container from it, it’s already set up and ready to run your app without any extra steps.
Listing Images and Containers
Now that Docker is installed, we can start running Docker commands from the terminal.
To list all available Docker images, use:
sudo docker images
To list all running containers, use:
sudo docker ps
You can try running these commands in your terminal. If Docker is correctly installed, you should see an output similar to the one below:
mycomputer:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mycomputer:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
mycomputer:~$
As shown above, the results are currently empty. That’s because we haven’t pulled or run any images yet. Don’t worry! We’ll get to that soon.
If you encounter any errors or get command not found, it's likely Docker wasn’t installed properly. In that case, go back and reinstall Docker before proceeding.
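Before moving on, you can also confirm the installation from the terminal with Docker's built-in version and info commands (shown with sudo to match the Linux setup used in this article):

```shell
# Print the client version; this works even if the Docker daemon is not running
sudo docker --version

# Print detailed client and server information; this fails if the daemon is not up
sudo docker info
```

If `docker --version` works but `docker info` reports it cannot connect to the daemon, the Docker service likely isn't started yet.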
Docker Registries and Docker Hub
Docker registries are centralized online repositories that store and manage Docker images.
Think of them like app stores for Docker images, or like GitHub, but specifically for containers.
You can push your own images to these registries, and also pull images that others have created. Registries not only store Docker images, they also:
Provide version control
Support image sharing
Integrate with CI/CD pipelines
Offer access control and security scanning
There are two main types of registries:
Public registries (like Docker Hub): open to everyone
Private registries: hosted and managed by teams or organizations for internal use
Some popular Docker registries include:
Docker Hub (most widely used)
Amazon Elastic Container Registry (ECR)
Azure Container Registry (ACR)
Google Artifact Registry (formerly GCR)
Exploring Docker Hub
If you go to Docker Hub and search for redis, you'll find many Redis-related Docker images.
At the top of the results, you’ll usually see the official Docker image for Redis, marked with a green badge and labeled Docker Official Image.
Official images are maintained by Docker itself, in collaboration with the creators of the technology (in this case, Redis) and security experts.
They’re carefully reviewed, regularly updated, and follow best practices for security and reliability.
Using official images is a good idea, especially when you're just getting started, because they’re stable and trusted.
Pulling and Running a Docker Image
Now that we know how Docker Hub works, let’s try pulling an image from it and running a container. We’ll use Nginx as an example because it comes with a web UI, which we can easily view in a browser to confirm everything is working.
Step 1: Pull the Nginx Image
Go to Docker Hub and search for nginx. Click on the official Nginx repository; you should land on a page showing a list of supported tags (i.e. image versions). There's also a special latest tag that always points to the newest version. However, it's generally a good idea to use a specific version to avoid unexpected changes.
For this example, we'll pull version 1.29.0:
mycomputer:~$ sudo docker pull nginx:1.29.0
1.29.0: Pulling from library/nginx
3da95a905ed5: Pull complete
6c8e51cf0087: Pull complete
9bbbd7ee45b7: Pull complete
48670a58a68f: Pull complete
ce7132063a56: Pull complete
23e05839d684: Pull complete
ee95256df030: Pull complete
Digest: sha256:c8a44136afa900a94ac7a07c4d333afc749e8808c94c81d29541d84e091fb615
Status: Downloaded newer image for nginx:1.29.0
docker.io/library/nginx:1.29.0
Since Docker Hub is the default registry, we don’t need to explicitly tell Docker where to look.
You can also pull the latest version by omitting the tag:
mycomputer:~$ sudo docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
Digest: sha256:c8a44136afa900a94ac7a07c4d333afc749e8808c94c81d29541d84e091fb615
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
Step 2: View Downloaded Images
To verify the downloaded images, run:
mycomputer:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx 1.29.0 9592f5595f2b 6 days ago 192MB
nginx latest 9592f5595f2b 6 days ago 192MB
Step 3: Run the Image as a Container
Let’s run the Nginx container using the image we just pulled:
mycomputer:~$ sudo docker run nginx:1.29.0
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2025/07/01 07:54:50 [notice] 1#1: using the "epoll" event method
2025/07/01 07:54:50 [notice] 1#1: nginx/1.29.0
2025/07/01 07:54:50 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14+deb12u1)
2025/07/01 07:54:50 [notice] 1#1: OS: Linux 6.8.0-60-generic
2025/07/01 07:54:50 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2025/07/01 07:54:50 [notice] 1#1: start worker processes
2025/07/01 07:54:50 [notice] 1#1: start worker process 29
2025/07/01 07:54:50 [notice] 1#1: start worker process 30
2025/07/01 07:54:50 [notice] 1#1: start worker process 31
2025/07/01 07:54:50 [notice] 1#1: start worker process 32
2025/07/01 07:54:50 [notice] 1#1: start worker process 33
2025/07/01 07:54:50 [notice] 1#1: start worker process 34
2025/07/01 07:54:50 [notice] 1#1: start worker process 35
2025/07/01 07:54:50 [notice] 1#1: start worker process 36
2025/07/01 07:54:50 [notice] 1#1: start worker process 37
2025/07/01 07:54:50 [notice] 1#1: start worker process 38
2025/07/01 07:54:50 [notice] 1#1: start worker process 39
2025/07/01 07:54:50 [notice] 1#1: start worker process 40
You’ll see logs from Nginx booting up inside the container. This means it’s running in the foreground, and your terminal is now occupied by the container’s process.
To check running containers from another terminal, run:
mycomputer:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d5d340342806 nginx:1.29.0 "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp xenodochial_elbakyan
Docker even gives the container a randomly generated name.
Step 4: Stop the Container
Since the container is running in the foreground, you can stop it by pressing Ctrl+C in the same terminal.
Now, run docker ps again, and you'll see no running containers.
To run the container in the background (detached mode), use the -d flag.
mycomputer:~$ sudo docker run -d nginx:1.29.0
1980e386eee23b9b60dbee38b3350bfb583d9a45141ca9158aa8ab8ab21461a3
mycomputer:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1980e386eee2 nginx:1.29.0 "/docker-entrypoint.…" 9 minutes ago Up 9 minutes 80/tcp exciting_hugle
The output this time is just the full ID of the running container. If you run the docker ps command again, you will see the running container, this time with a different name, since we stopped the previous container and started a new one.
Running the container in the background does not show us the container logs like it did before. So, if we want to see the logs from the container, we execute the command docker logs <container_id>.
mycomputer:~$ sudo docker logs 1980e386eee2
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2025/07/01 08:08:23 [notice] 1#1: using the "epoll" event method
2025/07/01 08:08:23 [notice] 1#1: nginx/1.29.0
2025/07/01 08:08:23 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14+deb12u1)
2025/07/01 08:08:23 [notice] 1#1: OS: Linux 6.8.0-60-generic
2025/07/01 08:08:23 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2025/07/01 08:08:23 [notice] 1#1: start worker processes
2025/07/01 08:08:23 [notice] 1#1: start worker process 29
2025/07/01 08:08:23 [notice] 1#1: start worker process 30
2025/07/01 08:08:23 [notice] 1#1: start worker process 31
2025/07/01 08:08:23 [notice] 1#1: start worker process 32
2025/07/01 08:08:23 [notice] 1#1: start worker process 33
2025/07/01 08:08:23 [notice] 1#1: start worker process 34
2025/07/01 08:08:23 [notice] 1#1: start worker process 35
2025/07/01 08:08:23 [notice] 1#1: start worker process 36
2025/07/01 08:08:23 [notice] 1#1: start worker process 37
2025/07/01 08:08:23 [notice] 1#1: start worker process 38
2025/07/01 08:08:23 [notice] 1#1: start worker process 39
2025/07/01 08:08:23 [notice] 1#1: start worker process 40
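Two docker logs flags worth knowing at this point: -f streams new log lines as they arrive, and --tail limits the output to the last N lines. A quick sketch, reusing the container ID from above:

```shell
# Stream logs live; Ctrl+C stops following (the container keeps running)
sudo docker logs -f 1980e386eee2

# Show only the last 20 log lines
sudo docker logs --tail 20 1980e386eee2
```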
We can also use the docker run command without actually having the specific image in our local repository. If we do that, Docker first searches for the image locally, and when it does not find it, it automatically pulls the image from Docker Hub and runs it. Let's take an example. Currently, we only have two nginx images, i.e. latest and 1.29.0. Let us run the mainline nginx image without actually pulling it.
mycomputer:~$ sudo docker run nginx:mainline
Unable to find image 'nginx:mainline' locally
mainline: Pulling from library/nginx
Digest: sha256:c8a44136afa900a94ac7a07c4d333afc749e8808c94c81d29541d84e091fb615
Status: Downloaded newer image for nginx:mainline
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2025/07/01 08:27:55 [notice] 1#1: using the "epoll" event method
2025/07/01 08:27:55 [notice] 1#1: nginx/1.29.0
2025/07/01 08:27:55 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14+deb12u1)
2025/07/01 08:27:55 [notice] 1#1: OS: Linux 6.8.0-60-generic
2025/07/01 08:27:55 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2025/07/01 08:27:55 [notice] 1#1: start worker processes
2025/07/01 08:27:55 [notice] 1#1: start worker process 29
2025/07/01 08:27:55 [notice] 1#1: start worker process 30
2025/07/01 08:27:55 [notice] 1#1: start worker process 31
2025/07/01 08:27:55 [notice] 1#1: start worker process 32
2025/07/01 08:27:55 [notice] 1#1: start worker process 33
2025/07/01 08:27:55 [notice] 1#1: start worker process 34
2025/07/01 08:27:55 [notice] 1#1: start worker process 35
2025/07/01 08:27:55 [notice] 1#1: start worker process 36
2025/07/01 08:27:55 [notice] 1#1: start worker process 37
2025/07/01 08:27:55 [notice] 1#1: start worker process 38
2025/07/01 08:27:55 [notice] 1#1: start worker process 39
2025/07/01 08:27:55 [notice] 1#1: start worker process 40
It started the container even though we did not have the mainline version of the nginx image in our local repository.
Now if we run the docker ps command, we can see two nginx containers, started from two different image tags.
mycomputer:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e0ec7def9b79 nginx:mainline "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp beautiful_ellis
1980e386eee2 nginx:1.29.0 "/docker-entrypoint.…" 21 minutes ago Up 21 minutes 80/tcp exciting_hugle
This is how easy it is to run two instances of the same application, with the same or different versions, in Docker.
Let us now quit the most recent nginx container by pressing Ctrl+C in the terminal where it is running, so that only the background container remains.
Port Binding
Now that the Nginx container is running, how do we access it? If we run the command docker ps as shown before, it also shows the port the container is listening on. In this case the port is 80. So, let us try going to the URL localhost:80.
As we can see, we can't connect to localhost:80. This is because the container runs on its own isolated network and has its own ports. The port 80 shown by docker ps is the container's own port, not a port on our computer's local network. To access the site, we first need to map a port on our computer's local network to port 80 of the container.
To do that, let's first stop the currently running container by executing docker stop <container_id>.
mycomputer:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1980e386eee2 nginx:1.29.0 "/docker-entrypoint.…" About an hour ago Up About an hour 80/tcp exciting_hugle
mycomputer:~$ sudo docker stop 1980e386eee2
1980e386eee2
mycomputer:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
As we can see, there are no more running containers. So, let us now bind a port, say 8123, to port 80 of the container. This can be easily done by adding the -p flag to the docker run command as shown below.
mycomputer:~$ sudo docker run -d -p 8123:80 nginx:1.29.0
e9abbe74cf007fc125106ee040934e34f8a6c67eb1082a49bce6e25151dcbeab
mycomputer:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9abbe74cf00 nginx:1.29.0 "/docker-entrypoint.…" 43 seconds ago Up 43 seconds 0.0.0.0:8123->80/tcp, [::]:8123->80/tcp busy_black
Now when we run docker ps, we see a different value under PORTS, i.e. 0.0.0.0:8123->80/tcp, [::]:8123->80/tcp. So, if you forget which local port each container is accessible from, you can just look it up with docker ps.
Now, if we go to http://localhost:8123/ instead of port 80, we will see the following page.
Nginx is running.
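You can also verify the port mapping from the terminal. Requesting the published port with curl should return Nginx's default welcome page (assuming the container started above is still running):

```shell
# Fetch only the response headers from the mapped port;
# expect a status line like "HTTP/1.1 200 OK" and a "Server: nginx/..." header
curl -I http://localhost:8123/
```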
Start and Stop Containers
The docker run command creates a new container every time we run it. And when we stop the container using the docker stop <container_id> command, the container doesn't just get deleted; it stays there in the stopped state. Since we have already executed the docker run command a few times, there should be a few containers in the stopped state. We can view all running and stopped containers using the command docker ps -a.
mycomputer:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9abbe74cf00 nginx:1.29.0 "/docker-entrypoint.…" 14 minutes ago Up 14 minutes 0.0.0.0:8123->80/tcp, [::]:8123->80/tcp busy_black
e0ec7def9b79 nginx:mainline "/docker-entrypoint.…" About an hour ago Exited (0) About an hour ago beautiful_ellis
1980e386eee2 nginx:1.29.0 "/docker-entrypoint.…" 2 hours ago Exited (0) 16 minutes ago exciting_hugle
d5d340342806 nginx:1.29.0 "/docker-entrypoint.…" 2 hours ago Exited (0) 2 hours ago xenodochial_elbakyan
As we can see, there are three containers that have been stopped and one that is running. We can run the docker stop <container_id> command to stop the running container as well.
mycomputer:~$ sudo docker stop e9abbe74cf00
e9abbe74cf00
mycomputer:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9abbe74cf00 nginx:1.29.0 "/docker-entrypoint.…" 16 minutes ago Exited (0) 1 second ago busy_black
e0ec7def9b79 nginx:mainline "/docker-entrypoint.…" About an hour ago Exited (0) About an hour ago beautiful_ellis
1980e386eee2 nginx:1.29.0 "/docker-entrypoint.…" 2 hours ago Exited (0) 18 minutes ago exciting_hugle
d5d340342806 nginx:1.29.0 "/docker-entrypoint.…" 2 hours ago Exited (0) 2 hours ago xenodochial_elbakyan
Now all the containers have been stopped.
We can also use the docker start <container_id> command to start an existing stopped container instead of creating a new container every time. And we can use the names of the containers in place of the container IDs in any Docker command. Let us try starting two existing containers using their names.
mycomputer:~$ sudo docker start busy_black exciting_hugle
busy_black
exciting_hugle
mycomputer:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9abbe74cf00 nginx:1.29.0 "/docker-entrypoint.…" 20 minutes ago Up 9 seconds 0.0.0.0:8123->80/tcp, [::]:8123->80/tcp busy_black
e0ec7def9b79 nginx:mainline "/docker-entrypoint.…" About an hour ago Exited (0) About an hour ago beautiful_ellis
1980e386eee2 nginx:1.29.0 "/docker-entrypoint.…" 2 hours ago Up 8 seconds 80/tcp exciting_hugle
d5d340342806 nginx:1.29.0 "/docker-entrypoint.…" 2 hours ago Exited (0) 2 hours ago xenodochial_elbakyan
As we can see, two of the four existing containers are now running.
By the way, we can also give our own names to the containers when creating them by using the --name flag in the docker run command. For example: docker run --name web-app -d -p 8123:80 nginx:1.29.0.
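Named containers make the day-to-day commands much easier to read, since the name works wherever a container ID does. A short sketch (the name web-app is just an example, and port 8123 must be free):

```shell
# Create and run a container with a name of our choosing
sudo docker run --name web-app -d -p 8123:80 nginx:1.29.0

# The name now replaces the container ID in every command
sudo docker stop web-app
sudo docker start web-app
sudo docker logs web-app
```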
Deleting Images and Containers
Before proceeding, let us first delete all the existing images and containers. Use the docker stop <container_id> command to stop all running containers and docker rm <container_id> to delete all stopped containers.
sudo docker stop e9abbe74cf00 e0ec7def9b79 1980e386eee2 d5d340342806
sudo docker rm e9abbe74cf00 e0ec7def9b79 1980e386eee2 d5d340342806
To delete the images, use the following command. The -f flag forces the deletion, which is needed here because all three nginx tags (1.29.0, latest, and mainline) share the same image ID.
sudo docker rmi -f 9592f5595f2b
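If you don't want to list IDs by hand, Docker also provides prune commands for bulk cleanup; each one asks for confirmation before deleting anything:

```shell
# Remove all stopped containers
sudo docker container prune

# Remove dangling (untagged) images
sudo docker image prune

# Remove all images not used by any container, not just dangling ones
sudo docker image prune -a
```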
Creating Our Own Images
When we have finished building our application, we want to create a Docker image of it, bundled with all the services and packages the application uses, to make it easier to deploy and maintain. In order to take our deployment-ready application code and package it into a Docker image, we need to create a Dockerfile.
A Dockerfile is just a text file containing the definition or instructions of how to create the docker image for our application. Docker can then build an image by reading those instructions.
Let us create a Dockerfile for a simple FastAPI application.
Setting Up a Simple FastAPI Application
The setup is pretty simple.
Create a new folder named Docker-blog-fastapi anywhere in your system.
Open the folder in VS Code or any editor of your choice.
In the terminal in VS Code, execute the following commands:
# The commands are for Linux
python3 -m venv venv
source venv/bin/activate
pip install "fastapi[standard]"
pip freeze -l > requirements.txt
After executing all those commands in sequence, we have a folder structure like the following:
-Docker-blog-fastapi
-venv
-requirements.txt
The requirements.txt file contains the following, in case someone is reading this blog at a future date and wants the exact versions:
annotated-types==0.7.0
anyio==4.9.0
certifi==2025.6.15
click==8.2.1
dnspython==2.7.0
email_validator==2.2.0
fastapi==0.115.14
fastapi-cli==0.0.7
h11==0.16.0
httpcore==1.0.9
httptools==0.6.4
httpx==0.28.1
idna==3.10
Jinja2==3.1.6
markdown-it-py==3.0.0
MarkupSafe==3.0.2
mdurl==0.1.2
pydantic==2.11.7
pydantic_core==2.33.2
Pygments==2.19.2
python-dotenv==1.1.1
python-multipart==0.0.20
PyYAML==6.0.2
rich==14.0.0
rich-toolkit==0.14.8
shellingham==1.5.4
sniffio==1.3.1
starlette==0.46.2
typer==0.16.0
typing-inspection==0.4.1
typing_extensions==4.14.0
uvicorn==0.35.0
uvloop==0.21.0
watchfiles==1.1.0
websockets==15.0.1
Now, create a file named main.py in the directory Docker-blog-fastapi, and copy the following Python code into the main.py file.
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
async def root():
return {"message": "Hello World"}
Then in the terminal, execute the following:
fastapi dev main.py
Now the FastAPI server should be running at http://127.0.0.1:8000/, and you should see {"message":"Hello World"} when you go to that URL.
Now, our simple FastAPI app has been set up.
Creating Dockerfile
In the same root directory, alongside the main.py file, create another file and name it Dockerfile. In this file, we are going to write the instructions for how the image should be built for our application. Copy the following into the file.
# Use an official Python runtime as a base image
FROM python:3.12.3-slim
# Set the working directory in the container
WORKDIR /app
# Install any needed packages specified in requirements.txt
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the current directory contents into the container at /app
COPY . .
# Expose the port that FastAPI will run on
EXPOSE 8000
# Run uvicorn to start the FastAPI application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Here is an explanation of each line of the Dockerfile:
Line → FROM python:3.12.3-slim:
What it does: Sets the base image for our Docker image. Everything we build will be layered on top of this.
Why: We need Python installed to run our FastAPI app, and this image already includes Python 3.12.3.
Line → WORKDIR /app:
What it does: Changes the current working directory inside the container to /app.
Why: All subsequent instructions like COPY, RUN, and CMD will be relative to this path. It's like running cd /app inside the container.
Line → COPY requirements.txt .:
What it does: Copies the requirements.txt file from our host machine's current directory into the /app directory inside the container.
Why: We need this file in the container to install dependencies before copying all our code (to take advantage of Docker layer caching).
Line → RUN pip install --no-cache-dir -r requirements.txt:
What it does: Installs the Python dependencies listed in requirements.txt.
Why: Our FastAPI app depends on packages like fastapi and uvicorn, which must be installed.
Why --no-cache-dir: Prevents pip from caching downloaded packages, reducing image size.
Line → COPY . .:
What it does: Copies everything in our current host directory (where the Dockerfile is) into /app inside the container.
Why: We want the app source code (e.g. main.py) and any other files (.env, static/, templates/) to be part of the container.
Note: If we use a .dockerignore file, we can prevent unnecessary files (e.g. .git, __pycache__) from being copied.
Line → EXPOSE 8000:
What it does: Declares that our application will run on port 8000 inside the container.
Why: This is informational only; it does not actually open the port. We must still publish it using docker run -p 8000:8000.
Line → CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]:
What it does: Sets the default command to run when a container starts. Here, it runs Uvicorn with our FastAPI app. There can be only one CMD instruction in a Dockerfile.
What's main:app? main refers to the main.py module, and app is the FastAPI instance created inside it with app = FastAPI().
Why --host 0.0.0.0? It tells Uvicorn to listen on all network interfaces inside the container. If you use 127.0.0.1, the server will only be accessible from inside the container, not from our host.
That’s it. This is the complete Dockerfile that will build an image for our FastAPI application that we can run as a container.
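One practical addition before building: the COPY . . step would also copy the venv folder we created earlier into the image, which is wasteful. A minimal .dockerignore (written here via a shell heredoc) keeps such files out of the build context:

```shell
# Write a minimal .dockerignore next to the Dockerfile
cat > .dockerignore <<'EOF'
venv/
__pycache__/
.git/
EOF
```

The entries are one pattern per line; anything matching them is excluded from what Docker sends to the build.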
Building the Image
We now have our Dockerfile that contains instructions for Docker on how to build the image for our FastAPI application. To actually build the image, we need to execute the command docker build -t <app_name>:<tag> <path>, where <path> is the build context (usually the directory containing the Dockerfile).
Execute the following command in the same directory that contains the Dockerfile:
docker build -t fastapi-app:1.0 .
This will build an image named fastapi-app with the tag 1.0. The . specifies the build context, i.e. the current directory; Docker sends its contents to the build and, by default, looks for the Dockerfile there. After this command finishes executing, run the docker images command and you will see the newly created image named fastapi-app.
mycomputer:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
fastapi-app 1.0 dacbe1fb9b51 5 minutes ago 265MB
Now we can use this image like any other image pulled from Docker Hub. We can also run it as a container.
mycomputer:~$ sudo docker run -d -p 8000:8000 fastapi-app:1.0
e8ae69c67782ebbcdbf3c6a6be0017db051cac6602db05dde9c55fda93647ddb
Now when we go to http://localhost:8000/, we will see our FastAPI application running.
We can also view the logs in the container of our app:
mycomputer:~$ sudo docker logs e8ae69c67782ebbcdbf3c6a6be0017db051cac6602db05dde9c55fda93647ddb
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: 172.17.0.1:56488 - "GET / HTTP/1.1" 200 OK
INFO: 172.17.0.1:56498 - "GET /favicon.ico HTTP/1.1" 404 Not Found
This is how we create a Docker image of our own application.
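If you later want to share this image through Docker Hub, as described in the registries section, you would tag it with your Docker Hub username and push it. A sketch; your-username is a placeholder for your actual Docker Hub account:

```shell
# Give the local image a registry-qualified name (your-username is hypothetical)
sudo docker tag fastapi-app:1.0 your-username/fastapi-app:1.0

# Authenticate with Docker Hub, then upload the image
sudo docker login
sudo docker push your-username/fastapi-app:1.0
```

Anyone could then pull and run it with docker run your-username/fastapi-app:1.0.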
Finally…
What we’ve covered so far is just the beginning of what Docker can do.
Docker really shines when you start working with multiple containers that can talk to each other. For instance, if your FastAPI application uses a PostgreSQL database, you could run PostgreSQL in its own container and connect it seamlessly with your app — all without installing anything on your host machine.
You can also use volumes to persist data, so it doesn’t vanish when a container stops. And with tools like Docker Compose, you can manage complex, multi-container applications using just a simple YAML file.
There’s a lot more to explore — networking, secrets management, custom networks, health checks, and deploying to production. But now that you’ve got the fundamentals down, you’ll find the rest much easier to pick up.