Complete Guide to Understanding Docker


Docker solves the classic IT problem: "It runs on my machine but not on the client's machine." The problem: if you develop a web application on a Windows machine (let's take a Node app as an example), it requires node modules, which are basically the dependencies for that project. Those dependencies are installed and configured according to the Windows machine, so when you transfer the files to a Mac user, the application may fail to run. How does Docker solve this? At a high level, Docker gives you an environment; you set up and run your application inside that environment, and when you want to share the application, you share it along with the whole environment. The end user does not need to set up anything, and the application runs on the client's machine exactly as it does on yours. Don't worry, I have not used any technical terms like images or containers yet; we will understand them later. For now I am just giving you the high-level overview.
🔸 Q: Why do we use Docker?
A: To eliminate the "It works on my machine" problem by packaging applications with their complete environment to ensure consistent behavior across systems.
🔸 Q: What if we just share the whole zip file of the application?
A: A zip only carries the code. Project dependencies, OS-level libraries, the runtime (Node, Python, etc.), port configuration, and environment variables can all differ from OS to OS, so the zip alone does not guarantee the app will run.
It was developed by dotCloud for an in-house problem, but they later made it public at the PyCon conference; it then went to the CNCF (2017) and became very popular. Now it is open source and available to all.
Is Docker Like a VM (Virtual Machine)?
No, Docker is not a virtualization tool; it is a containerization tool. Now let's understand the difference: virtualization vs. containerization.
| Feature | Virtual Machine (VM) | Docker (Container) |
| --- | --- | --- |
| What it creates | A full computer inside your computer (with its own OS) | A small isolated space just for your app |
| Startup time | Slow (takes minutes because it boots an entire OS) | Fast (starts in seconds, no full OS boot) |
| Storage usage | Heavy (needs GBs of space for OS and app) | Light (only needs the app and its dependencies, usually MBs) |
| Resource usage | Uses a lot of CPU, RAM, and disk (because it runs a full OS) | Uses less CPU and RAM (shares the main system's OS) |
| Isolation | Strong isolation (like separate computers) | App-level isolation (shares the OS but keeps apps separate) |
| OS required inside | Yes, every VM has its own full OS (like Windows, Linux) | No extra OS needed; uses a base image and the system kernel |
| Example | VirtualBox, VMware, Hyper-V | Docker, Podman |
| Real-life example | Renting a full apartment (your own kitchen, bathroom, etc.) | Renting a PG room (just your room, but sharing kitchen and bathroom) |
🔸 Q: How is Docker different from a Virtual Machine?
A: Virtual Machines emulate hardware and run a full OS for each app, while Docker uses the host OS kernel and provides lightweight containers that are faster and more efficient.
🔸 Q: Can you run Docker on Windows directly?
A: No. Docker needs a Linux kernel, so on Windows it runs through WSL2 or Hyper-V, which is managed by Docker Desktop. Many people are confused about how we can run Linux containers on a Windows machine if Docker shares the host OS; the answer is the same: it uses Hyper-V or WSL2 to provide that Linux kernel.
Docker Architecture
Docker CLI: Used to interact with the Docker daemon.
Docker Daemon: dockerd -> containerd. Under the hood, Docker uses containerd, which creates and manages the containers.
Docker Engine: The full runtime that powers Docker: CLI + dockerd + containerd.
Docker Client/Desktop: GUI for Docker; it uses the API to interact directly with the Docker Engine.
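To see this client-server split for yourself, you can run docker version; it prints a Client section (the CLI) and a Server section (dockerd and containerd), matching the architecture described above.
# The CLI is just a client; the Server block is the daemon it talks to over the Docker API.
docker version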
Installation
Windows: search for the Docker Desktop website and install Docker Desktop. Link: (https://www.docker.com/products/docker-desktop/)
Linux: From Ubuntu's APT repo:
sudo apt update
sudo apt-get install docker.io
This command installs Docker on your Linux system. You can verify it by running the command below:
docker --version
Output:
root@18e64990cafd:/# docker --version
Docker version 27.5.1, build 27.5.1-0ubuntu3~24.04.2
root@18e64990cafd:/#
This installs Docker from the Ubuntu APT repo, which receives updates very slowly, so you don't get the latest version. Above, it installed version 27.5.1, but if you google the latest version you get: "The latest stable release of Docker Engine is 28.2.2, released on May 30, 2025. Docker Desktop, which includes the Docker Engine, is on version 4.41, released on April 29, 2025." For the latest version, you have to install from the Docker repo; let's see how we do that.
If you installed from Ubuntu's repo and want to uninstall, use this command:
sudo apt-get remove --purge docker.io -y
From the Docker repo: You can go through the Linux docs for the multi-step Docker installation, but we will install via the convenience script. You can also find the script commands in Docker's installation docs.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Now you can verify:
docker --version
Output:
thenameisshivam@1sk:~$ docker --version
Docker version 28.2.2, build e6534b4
thenameisshivam@1sk:~$
You can clearly see that it installed the latest version of Docker currently available in Docker's official repo.
Check the status:
systemctl status docker
output:
thenameisshivam@1sk:~$ systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2025-06-18 08:37:39 UTC; 3min 2s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 207416 (dockerd)
Tasks: 8
Memory: 100.2M
CPU: 377ms
CGroup: /system.slice/docker.service
└─207416 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
You can clearly see that the status is running.
Use these commands to stop and start the Docker service:
systemctl stop docker ## stop docker
systemctl start docker ## start docker
Command to see the running containers
docker ps
You may see a permission denied error; in that case, add the current user to the docker group:
sudo usermod -aG docker $USER ## add current user to docker group
newgrp docker ## after this, when you run docker ps you will see the output
sudo docker ps ## run command as root user (Not recommended)
Output:
thenameisshivam@1sk:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
thenameisshivam@1sk:~$
Currently, no containers are running.
🔸 Q3. What is the difference between docker-ce and docker.io?
A: docker.io: the version available via the Ubuntu APT repo (usually outdated). docker-ce: Docker Community Edition from Docker's official repo (latest version).
🔸 Q4. What is containerd?
A: containerd is a container runtime used internally by Docker to create, start, stop, and manage containers. The Docker daemon (dockerd) interacts with it.
🔸 Q5. What does usermod -aG docker $USER do?
A: It adds the current user to the docker group so you can run docker commands without using sudo.
Docker Images
A Docker image is a blueprint from which a Docker container is created. It contains all the necessary steps to run your code, including your whole source code, configuration, and environment setup. We basically write a Dockerfile (a Dockerfile is a set of instructions to build an image), from the Dockerfile we create the Docker image, and with the help of the Docker image we build and run the container: dockerfile -> Docker Image -> Docker Container. It is not necessary to write a Dockerfile to create an image every time; there are many prebuilt images available from which we can run a container easily (see the example below).
To see the available Docker images:
docker images
docker image ls ## both commands will show you the available images
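For example, here is a quick way to try a prebuilt image before writing your own Dockerfile; hello-world is a tiny official test image on Docker Hub:
# Pulls the image on first run, then starts a container that prints a welcome message and exits.
docker run hello-world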
Now we know that a Docker image is built from a Dockerfile, so let's understand how to write one.
In your root directory, create a file named dockerfile:
myapp/
├── index.js
├── package.json
└── dockerfile
Now let's see and understand a basic Dockerfile for a simple Node application:
# 1. Base image (sets up the environment; Node is necessary to run a Node application. This pulls the Node v20 image from Docker Hub)
FROM node:20
# 2. App directory (specifies the working directory inside the container)
WORKDIR /app
# 3. Copy package.json and install deps (copies package.json and package-lock.json from your path to ./, the current working directory, in our case /app)
COPY package*.json ./
RUN npm install
# npm ci = for production (installs dependencies exactly as pinned in package-lock.json)
# 4. Copy the source code (copies the whole source code from the current path into the container's working directory, in our case /app)
COPY . .
# 5. App runs on port 3000 (exposes the port our Node server runs on; documentation purposes only, no technical effect)
EXPOSE 3000
# 6. Start the app (runs npm start to launch the Node app)
CMD ["npm", "start"]
To build the Docker image from the Dockerfile, run:
docker build -t my-node-app .
# build creates a Docker image from a Dockerfile; it looks for a file named dockerfile in the current root folder. -t gives your image a tag name so you can find it easily when you have many images.
To run a container from your image, run:
docker run -p 3000:3000 my-node-app
# run starts a container from an image. The -p flag does port binding (it binds port 3000 of your machine to port 3000 of the container). Then we give the name of the image from which we want to run the container.
Now open browser: http://localhost:3000
You will see your backend up and running.
Q: What is a Dockerfile?
A: A text file with instructions to build a Docker image: the base image, dependencies, source code, and the command to run the app.
Q: What is the difference between CMD and RUN in a Dockerfile?
A: RUN executes during the image build (e.g. installing dependencies). CMD runs when a container is started (e.g. starting the app).
Q: What does EXPOSE do in a Dockerfile?
A: It tells Docker which port the app listens on. It is informational only; the actual mapping is done with -p during docker run.
Q: What is the role of WORKDIR in a Dockerfile?
A: It sets the working directory in the container for all subsequent instructions like COPY, RUN, etc. If the directory doesn't exist, Docker creates it.
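A small sketch to make the EXPOSE vs -p distinction concrete, assuming the my-node-app image we built above (the container names here are made up for the demo):
# EXPOSE is documentation only: without -p, the app is reachable only from other containers on the same Docker network.
docker run -d --name web my-node-app
# -p does the real publishing: host port 8080 now forwards to container port 3000.
docker run -d -p 8080:3000 --name web2 my-node-app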
Caching data and layering
Now let's understand the advanced concepts of Docker layering and caching. When you build your image, note the time the build takes:
-> First build
[+] Building 52.1s (10/10) FINISHED docker:desktop-linux
=> [internal] load build definition from dockerfile 0.1s
=> => transferring dockerfile: 162B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 5.2s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/5] FROM docker.io/library/node:alpine@sha256:7aaba6b13a55a1d78411a1162c1994428ed039c6bbef7b1d98 26.4s
=> => resolve docker.io/library/node:alpine@sha256:7aaba6b13a55a1d78411a1162c1994428ed039c6bbef7b1d985 0.1s
sha256:adbc84a1bb5fe6ad644e579e8abf52bcbe18ebeba3a6fef35c7e3edcc6974f80 0.0s
=> [internal] load build context 9.2s
=> => transferring context: 23.83MB 9.2s
=> [2/5] WORKDIR /app 0.5s
=> [3/5] COPY package*.json ./ 0.1s
=> [4/5] RUN npm install 18.0s
=> [5/5] COPY . . 1.0s
=> exporting to image 0.6s
=> => exporting layers 0.5s
=> => writing image sha256:25e298ba4eaaeece0bfbadd756d4a9b48591e6cf4887be14465a4f3a3ba5e1c5 0.0s
=> => naming to docker.io/library/devsync
-> Second build
[+] Building 4.9s (10/10) FINISHED docker:desktop-linux
=> [internal] load build definition from dockerfile 0.0s
=> => transferring dockerfile: 162B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 4.3s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/5] FROM docker.io/library/node:alpine@sha256:7aaba6b13a55a1d78411a1162c1994428ed039c6bbef7b1d985 0.0s
=> [internal] load build context 0.5s
=> => transferring context: 230.40kB 0.4s
=> CACHED [2/5] WORKDIR /app 0.0s
=> CACHED [3/5] COPY package*.json ./ 0.0s
=> CACHED [4/5] RUN npm install 0.0s
=> CACHED [5/5] COPY . . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:25e298ba4eaaeece0bfbadd756d4a9b48591e6cf4887be14465a4f3a3ba5e1c5 0.0s
=> => naming to docker.io/library/devsync1
If you compare both terminal outputs, you will see that the first time, step [1/5] took 26s, but in the second build it took 0s. You can also notice the CACHED label on all the steps, all taking 0s. We did not change anything in our code, so Docker used the cached layers. Let's see what happens if we change a few lines of code.
-> build after some code changes
[+] Building 4.9s (10/10) FINISHED docker:desktop-linux
=> [internal] load build definition from dockerfile 0.0s
=> => transferring dockerfile: 162B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 4.3s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/5] FROM docker.io/library/node:alpine@sha256:7aaba6b13a55a1d78411a1162c1994428ed039c6bbef7b1d985 0.0s
=> [internal] load build context 0.5s
=> => transferring context: 230.40kB 0.4s
=> CACHED [2/5] WORKDIR /app 0.0s
=> CACHED [3/5] COPY package*.json ./ 0.0s
=> CACHED [4/5] RUN npm install 0.0s
=> [5/5] COPY . . 1.7s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:25e298ba4eaaeece0bfbadd756d4a9b48591e6cf4887be14465a4f3a3ba5e1c5 0.0s
=> => naming to docker.io/library/devsync1
Now you can clearly see that after changing the code and rebuilding the image, Docker uses the cache for steps 1 to 4, but in step 5 it copies the whole source code again because we made changes to our code. We did not install any new packages, so steps 3 and 4 were served from cache.
Now let's test again; this time we do not change anything in our code, but we install a new package.
-> build after adding new dependency
[+] Building 4.9s (10/10) FINISHED docker:desktop-linux
=> [internal] load build definition from dockerfile 0.0s
=> => transferring dockerfile: 162B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 4.3s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/5] FROM docker.io/library/node:alpine@sha256:7aaba6b13a55a1d78411a1162c1994428ed039c6bbef7b1d985 0.0s
=> [internal] load build context 0.5s
=> => transferring context: 230.40kB 0.4s
=> CACHED [2/5] WORKDIR /app 0.0s
=> [3/5] COPY package*.json ./ 15.5s
=> [4/5] RUN npm install 1.1s
=> [5/5] COPY . . 0.6s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:25e298ba4eaaeece0bfbadd756d4a9b48591e6cf4887be14465a4f3a3ba5e1c5 0.0s
=> => naming to docker.io/library/devsync1
Now you can clearly see that this time only step 2 is cached, and steps 3, 4, and 5 are all rebuilt. But wait, why does step 5 run again? We did not make any changes to the code, we only added a new dependency, so technically it should rebuild steps 3 and 4 from scratch and take step 5 from cache. This behavior is because of the layer system Docker uses to build images: it divides every step into a layer, and if any layer's input changes, Docker stops using the cache for that layer and for every layer below it.
Conclusion: How you write the Dockerfile matters a lot; the order of instructions is important to optimize build time.
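Here is a small experiment you can run to watch the cache behave, assuming the Node project and Dockerfile from above (the package name is just an example):
# First build: every layer executes.
docker build -t cache-demo .
# Touch only source code (not package.json) and rebuild: steps up to 'RUN npm install' show CACHED.
touch index.js
docker build -t cache-demo .
# Add a dependency and rebuild: the cache is invalidated from 'COPY package*.json ./' downward.
npm install express
docker build -t cache-demo .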
Q: How does Docker caching work?
A: Docker caches each layer of the Dockerfile. If the content of a layer changes, Docker invalidates that layer and all layers after it.
Q: Why is the order of instructions important in a Dockerfile?
A: Because Docker uses layered caching. If you place COPY . . before installing dependencies, any small change in the code will invalidate the dependency cache, making builds slower.
Q: What happens when you change only the package.json?
A: Docker will rebuild from the COPY package*.json step, which will also re-run RUN npm install, and everything after that will be rebuilt as well.
.dockerignore
The .dockerignore file is very important for optimization. It is the same idea as .gitignore: when we push something to GitHub, Git does not consider the files or folders mentioned in the .gitignore file; likewise, at image build time, Docker does not consider the files or folders mentioned in the .dockerignore file. In our previous Dockerfile, we copy all files from the root to /app in the container, so it would overwrite the node_modules we installed earlier in the build, which can cause issues with native binaries.
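A minimal .dockerignore for a Node project like ours might look like this (a suggested starting point, not an exhaustive list):
# Create the file next to the dockerfile; each line is a pattern Docker skips when building the image.
cat > .dockerignore <<'EOF'
node_modules
npm-debug.log
.git
.env
EOF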
Q: Why should we use .dockerignore?
A: To prevent copying unnecessary or platform-specific files (like node_modules, .env, .git) into the Docker image. This improves build speed, reduces image size, and prevents binary incompatibility issues.
Q: Can .dockerignore affect image correctness?
A: Yes. For example, if you don't ignore node_modules and copy host-installed modules into a Linux container, native modules may break.
Important Commands related to containers and images
🔹 Image Commands
| Command | Description |
| --- | --- |
| docker build -t <name> . | Build a Docker image using the Dockerfile |
| docker images / docker image ls | List all images |
| docker rmi <image_id> | Remove an image |
| docker pull <image> | Pull an image from Docker Hub |
| docker push <image> | Push an image to Docker Hub |
🔹 Container Commands
| Command | Description |
| --- | --- |
| docker run <image> | Create + start a new container |
| docker run -d <image> | Run container in background (detached) |
| docker run -it <image> | Run container interactively (terminal access) |
| docker run -p 3000:3000 <image> | Map host port to container port |
| docker run --name <name> <image> | Assign a name to the container |
| docker start <container_id/name> | Start an existing (stopped) container |
| docker stop <container_id/name> | Stop a running container |
| docker restart <container_id/name> | Restart a container |
| docker rm <container_id> | Remove a container |
| docker rm $(docker ps -aq) | Remove all containers |
| docker exec -it <container> bash | Access the terminal of a running container |
| docker logs <container> | Show container logs |
| docker ps | List running containers |
| docker ps -a | List all containers (running + stopped) |
| docker ps -aq | List all container IDs |
🔹 Flags & What They Mean
| Flag | Meaning |
| --- | --- |
| -d | Detached mode (run in background) |
| -p <host>:<container> | Port mapping |
| -it | Interactive terminal |
| --name <name> | Give the container a custom name |
| -v <host>:<container> | Mount a volume (host path to container path) |
| --rm | Remove the container after it exits |
| --env VAR=value | Pass environment variables |
| --network | Specify the network |
| --build-arg | Pass an argument while building the image |
🔹 Other Handy Commands
| Command | Description |
| --- | --- |
| docker inspect <id> | Detailed config of a container/image |
| docker logs -f <name> | Live logs (follow) |
| docker system prune -a | Remove unused data (⚠️ careful!) |
| docker compose up -d | Start a multi-container app (via docker-compose.yml) |
| docker compose down | Stop and remove containers, networks, volumes |
Docker Network
We already know Docker runs containers in an isolated environment. That means if we run 2 or more containers, they cannot interact with each other; if we want to connect containers to each other, we use a Docker network. We run the containers on the same network so that they can interact. There are 7 types of Docker networks, but we mainly use only 4:
Bridge (default): The default network created by Docker. Containers on this network can talk via IP but cannot resolve each other by name. Use a user-defined bridge for name-based communication.
Host: The container shares the host's network. No isolation; the host's port = the container's port.
User-defined bridge: A network defined by the user, who controls which container can talk to which. Cleaner DNS and control.
None: The container has no network at all. Totally isolated.
Q: If the default bridge network exists, why can't two containers talk by name?
A: The default bridge does not support DNS name resolution between containers. Only user-defined bridge networks support container-name-based DNS resolution.
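A quick demo of this DNS behavior, using the public nginx and alpine images as stand-ins:
# On a user-defined bridge, Docker's embedded DNS resolves container names.
docker network create demo-net
docker run -d --name web --network demo-net nginx
docker run --rm --network demo-net alpine ping -c 1 web ## succeeds: 'web' resolves by name
# On the default bridge, the same ping by name fails with 'bad address'.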
Command to see the Docker networks:
thenameisshivam@1sk:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
82ab39248a41 bridge bridge local
17bacdfbc84d host host local
1053c5323c66 none null local
Command to create a user-defined bridge:
thenameisshivam@1sk:~$ docker network create mynetwork -d bridge
43ecdd8763e15a31987ccb1984e6f67520a5d64eb670a98adb3d06e7d6117a2b
thenameisshivam@1sk:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
82ab39248a41 bridge bridge local
17bacdfbc84d host host local
43ecdd8763e1 mynetwork bridge local
1053c5323c66 none null local
Above, we already learned how to create an image from the Dockerfile of a Node.js application, but you may get an error if your application uses MongoDB, because the container runs in an isolated environment and no database is present there. So let's create a new bridge network, start a Mongo container, and then start your Node container on the same network.
Create a new bridge network:
PS C:\Users\sap74\Desktop\DevSync> docker network create two-tier -d bridge
986e2297784e54eba3aff3e987d134b0350e6c7599ae54a0eb317514cedfca25
PS C:\Users\sap74\Desktop\DevSync> docker network ls
NETWORK ID NAME DRIVER SCOPE
9f91eb336205 bridge bridge local
cbe66919f8aa host host local
4f43900bee13 none null local
986e2297784e two-tier bridge local
PS C:\Users\sap74\Desktop\DevSync>
Run the Mongo container on the same network:
PS C:\Users\sap74\Desktop\DevSync> docker run --name mongodb -d --network two-tier mongodb/mongodb-community-server
Unable to find image 'mongodb/mongodb-community-server:latest' locally
latest: Pulling from mongodb/mongodb-community-server
89dc6ea4eae2: Pull complete
4f4fb700ef54: Pull complete
a7c917f3d12a: Pull complete
Digest: sha256:0d5e317cf4593a8e8e23703f83449f557aa6d4c70b18240d429f63f7bed9b1b5
Status: Downloaded newer image for mongodb/mongodb-community-server:latest
3b4e935190ef518f7895a2b65a97a19bfc7b5dafdf6e6d2c286808e340365a89
PS C:\Users\sap74\Desktop\DevSync> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b4e935190ef mongodb/mongodb-community-server "python3 /usr/local/โฆ" 6 seconds ago Up 5 seconds 27017/tcp mongodb
You can see that your Mongo container is up and running on the two-tier network. For more in-depth knowledge, you can go through the documentation (https://www.mongodb.com/resources/products/compatibilities/docker).
Now run the Node container on the two-tier network:
PS C:\Users\sap74\Desktop\DevSync> docker run -d -p 3000:3000 --name devSync-backend --network two-tier -e MONGO_URI=mongodb://mongodb:27017/DevSync devsync6
# we add the -p flag (-p 3000:3000) so we can reach port 3000 from our host machine
# with -e we pass the new Mongo URI, replacing localhost with mongodb (the container name); we can pass as many env vars as we want, but each one needs its own -e flag
0ca2ce0a72575bdcb7900e8d80a6ea65300dadb7afc7f26a7279d7ff9ae33c6d
PS C:\Users\sap74\Desktop\DevSync> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0ca2ce0a7257 devsync6 "docker-entrypoint.sโฆ" 8 seconds ago Up 7 seconds 0.0.0.0:3000->3000/tcp devSync-backend
3b4e935190ef mongodb/mongodb-community-server "python3 /usr/local/โฆ" 18 minutes ago Up 18 minutes 27017/tcp mongodb
PS C:\Users\sap74\Desktop\DevSync> docker logs devSync-backend
> devsync@1.0.0 start
> node app.js
Database connected
Server is running on port 3000
PS C:\Users\sap74\Desktop\DevSync>
You can clearly see "Database connected" in the logs, which means everything is working correctly. You can also use the mongo image, which is lightweight and maintained by Docker; we are using mongodb-community-server, which is maintained by MongoDB, Inc.
Output of a quick health check from the host:
PS C:\Users\sap74\Desktop\DevSync> curl http://localhost:3000/
StatusCode : 200
StatusDescription : OK
Content : Healthy Server Response from Server : 3000
RawContent : HTTP/1.1 200 OK
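If you want to double-check that both containers really share the two-tier network, docker network inspect can list the attached containers (the Go template below just extracts their names):
# Prints the names of all containers attached to two-tier, e.g. "mongodb devSync-backend".
docker network inspect two-tier --format '{{range .Containers}}{{.Name}} {{end}}'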
Docker Volume
Why do we need Docker volumes? In the example above, we created and ran the Mongo and Node containers on the same user-defined bridge network so they could communicate with each other. But the problem is: when we stop, kill, or remove the container, the data stored in the DB is deleted too; a container cannot persist data on its own. To solve this problem, we use Docker volumes. We basically mount a path inside the container onto host storage, and when we create a new container, we attach the same volume, so we keep the older data.
Create volume:
# create named volume command
PS C:\Users\sap74\Desktop\DevSync> docker volume create devsyncV
devsyncV
# command to see all volumes
PS C:\Users\sap74\Desktop\DevSync> docker volume ls
DRIVER VOLUME NAME
local devsyncV
# command to inspect the volume; you can see the mount point (we are on Windows here, so Docker uses WSL to provide a Linux-like mount structure)
PS C:\Users\sap74\Desktop\DevSync> docker inspect devsyncV
[
{
"CreatedAt": "2025-06-19T10:40:50Z",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/devsyncV/_data",
"Name": "devsyncV",
"Options": null,
"Scope": "local"
}
]
Run MongoDB using the volume:
# command to run the mongo container with volume
PS C:\Users\sap74\Desktop\DevSync>docker run -d --name mongodb -v devsyncV:/data/db --network two-tier mongodb/mongodb-community-server
# -v source:target (the volume name, mounted at /data/db inside the container)
aee1f4078c5ac966822f72cb9ff1b32440e45b748f77c495b76b5dbcea0f774e
# restart the backend container: our server may have crashed because we killed the previous Mongo container to run the new one with the attached volume
PS C:\Users\sap74\Desktop\DevSync> docker restart devSync-backend
devSync-backend
# check the logs
PS C:\Users\sap74\Desktop\DevSync> docker logs devSync-backend
> devsync@1.0.0 start
> node app.js
Database connected
Server is running on port 3000
npm error path /app
npm error command failed
npm error signal SIGTERM
npm error command sh -c node app.js
npm error A complete log of this run can be found in: /root/.npm/_logs/2025-06-19T07_47_23_741Z-debug-0.log
> devsync@1.0.0 start
> node app.js
Database connected
Server is running on port 3000
PS C:\Users\sap74\Desktop\DevSync>
# you can clearly see that we got some errors while the old Mongo container was down, but after that our database connected successfully, and this time our data is also persisted
- We can attach one volume to multiple containers, as the sketch below shows.
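Here is a small sketch of that, using throwaway alpine containers (the file name is made up for the demo):
# One container writes a file into the shared volume...
docker run --rm -v devsyncV:/data alpine sh -c 'echo hello > /data/demo.txt'
# ...and another container sees the same file through the same volume.
docker run --rm -v devsyncV:/data alpine cat /data/demo.txt ## prints: hello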
Docker Compose
Why Docker Compose?
Running containers one by one manually is boring and repetitive.
docker-compose.yml is a config file where you define all your services (like MongoDB, Node.js, Redis, etc.) in one place. One command brings up your whole multi-container app, fully connected and ready.
Create docker-compose.yml in the root directory, where the dockerfile and .dockerignore already exist.
⚙️ Basic Docker Compose Syntax
version: "3.8"
services:
  <service-name>:
    image: <image-name>       # or build: .
    container_name: <custom-name>
    ports:
      - "<host-port>:<container-port>"
    environment:
      - VAR=value
    volumes:
      - <volume-name>:<container-path>
    networks:
      - <network-name>
    depends_on:
      - <other-service-name>
volumes:
  <volume-name>:
networks:
  <network-name>:
Example: Node.js + MongoDB Setup
version: "3.8"
services:
  mongodb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - devsyncV:/data/db
    networks:
      - two-tier
  dev-sync-backend:
    build:
      context: .
    container_name: dev-sync-backend
    ports:
      - "3000:3000"
    env_file: .env
    environment:
      - MONGO_URI=mongodb://mongodb:27017/devsync
    networks:
      - two-tier
    depends_on:
      - mongodb
volumes:
  devsyncV:
networks:
  two-tier:
Explanation:
- version: Optional now, but using 3.8 is safe and widely compatible.
- services: All container definitions go here.
- mongodb service: uses a named volume devsyncV to persist database data even if the container stops, and is connected to the custom bridge network two-tier so other containers like the backend can talk to it.
- dev-sync-backend service: build.context: . means the Dockerfile in the current directory is used. ports maps the container's port 3000 to local port 3000. environment: use MONGO_URI, not MONGO_URL; this is a common mistake. depends_on ensures Mongo starts before the backend, but it doesn't wait for the DB to be "ready".
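Since depends_on only orders startup, a hedged way to wait until Mongo actually accepts connections is a small polling loop; this assumes the container is named mongodb and that mongosh exists inside the image (it does in recent MongoDB images):
# Poll until the DB answers a ping; only then hit the backend.
until docker exec mongodb mongosh --quiet --eval 'db.runCommand({ ping: 1 })' >/dev/null 2>&1; do
  echo "waiting for mongodb..."
  sleep 1
done
echo "mongodb is ready"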
Commands to run:
# to run all containers
docker compose up
# run in detached mode
docker compose up -d
# to stop all the containers
docker compose down
# output of compose up
[+] Running 3/3
✔ dev-sync-backend Built 0.0s
✔ Container dev-sync-backend Started 1.8s
✔ Container mongodb Started 1.2s
PS C:\Users\sap74\Desktop\DevSync>
- Docker Compose will create its own user-defined bridge network; it does not use your manually created network (very, very important).
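You can verify this yourself after a compose up; unless you declare a network in the file (as we did with two-tier), Compose creates one named after the project folder, usually <project>_default:
# List networks after 'docker compose up -d' and look for the Compose-created one.
docker network ls
# Show which containers Compose is managing in this project.
docker compose ps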
Production-grade Dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci && npm install -g pm2
COPY . .
EXPOSE 3000
CMD ["pm2-runtime", "app.js"]
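Hedged usage of the image above (the tag and container names are just examples); pm2-runtime supervises the Node process inside the container, and --restart lets Docker bring the container itself back up:
# Build and run the production image.
docker build -t devsync:prod .
docker run -d -p 3000:3000 --restart unless-stopped --name devsync-prod devsync:prod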