Docker

Table of contents
- Docker Overview
- Basic Docker Commands
- 1. Running Containers
- 2. Viewing Containers
- 3. Stopping and Removing Containers
- 4. Working with Images
- 5. Running Commands Inside Containers
- 6. Attach / Detach Mode
- Goal
- Step 1: Create the Python App
- Step 2: Create a Dockerfile
- Step 3: Build the Docker Image
- Step 4: Run the Container (Non-Interactive Mode)
- Step 5: Run the Container with -it (Interactive Mode)
- Docker Port Mapping
- Docker Volume Mapping - MySQL Example
- Docker Images
- Docker Compose
- Docker Storage
- File System Structure in Docker
- Layered Architecture of Docker Images
- Understanding the Copy-on-Write (CoW) Mechanism
- Container Writable Layer
- Persisting Data Using Volumes
- Volume Mounting (Managed by Docker)
- Bind Mounting (Using Host Directory)
- Modern Way: Using the --mount Option
- Storage Drivers - The Core Behind Docker Storage
- Docker Networking
- Docker Registry
- Container Orchestration - Explained Visually
- Docker Swarm - Container Orchestration with Ease
- Kubernetes - The King of Container Orchestration
- What is Kubernetes?
- Docker vs Kubernetes
- What Can Kubernetes Do?
- Docker & Kubernetes - What's the Relationship?
- Kubernetes Architecture
- Master Node Components (Control Plane)
- Worker Node Components
- kubectl - The Kubernetes Command-Line Tool
- Kubernetes Objects Overview
- Kubernetes = Production-Grade Orchestration

Docker Overview
Why do we need Docker?
Docker is a tool that lets you package your application with everything it needs (code, libraries, settings) into a single unit called a container, so it can run anywhere - on any computer, server, or cloud - exactly the same way.
What can it do?
Docker lets you run applications in small, lightweight packages called containers, so they work the same on any computer or server.
What are containers?
Containers are lightweight, standalone, executable software packages that include everything needed to run a piece of software: the code, runtime, system tools, libraries, and settings. They are isolated from each other and from the host system, so they run consistently across different computing environments. Containers are the core building block of Docker, allowing applications to be deployed and run reliably regardless of the underlying infrastructure.
Virtual Machine vs Container

| Feature | Virtual Machine (VM) | Container |
| --- | --- | --- |
| What it is | A full computer inside your computer | A lightweight app package |
| OS included? | Has its own full operating system | Shares the host OS kernel |
| Speed | Slower to start (minutes) | Very fast (starts in seconds) |
| Size | Large (GBs) | Small and lightweight (MBs) |
| Portability | Harder to move or copy | Easy to share or move |
| Isolation | Fully isolated, like separate houses | Isolated, but shares the host kernel, like rooms in a house |
| Deployment | Not ideal for frequent updates | Great for CI/CD and microservices |
| Resource usage | Uses more CPU and RAM | Very efficient |
| Setup time | Longer setup | Quick setup |
| Example | Windows running inside Linux using VirtualBox | A Node.js app running in a Docker container |
What is a public Docker registry (Docker Hub), and how is it related to Docker?
What is a Public Docker Registry?
A public Docker registry is an online library where developers can store, share, and download Docker images.
What is Docker Hub?
Docker Hub is the official public Docker registry provided by Docker.
It's like the Play Store or App Store, but for Docker images!
How Docker Hub Relates to Docker:

| Docker | Docker Hub |
| --- | --- |
| A tool to create and run containers | A website to store and share container images |
| You build images using Docker | You upload/download those images from Docker Hub |
| You run docker pull or docker push | These commands talk to Docker Hub |
Container vs Image (with Examples)

| Feature | Docker Image | Docker Container |
| --- | --- | --- |
| What is it? | A blueprint or template for your app | A running instance of that blueprint |
| Think of it like | A recipe for a cake | The actual cake made from the recipe |
| State | Static (doesn't change) | Dynamic (runs and can change) |
| Runs or not? | Cannot run on its own | Runs as a process |
| Use case | Used to create containers | Used to run the actual app |
| Created with | docker build | docker run |
| Stored in | Docker Hub or local storage | Runs in your system's memory |
| Changes allowed? | Cannot be changed directly | Can be changed while it runs |
| Example command | docker pull nginx (gets the image) | docker run nginx (starts a container) |
Basic Docker Commands
1. Running Containers

| Command | Description |
| --- | --- |
| docker run <image> | Runs a container from an image in foreground mode |
| docker run -d <image> | Runs the container in detached/background mode |
| docker run ubuntu | Starts an Ubuntu container, but it exits immediately (no long-running process) |
| docker run ubuntu sleep 5 | Runs Ubuntu and executes the command sleep 5 |
| docker run -it ubuntu | Runs Ubuntu in interactive mode with terminal access |
| docker run -it ubuntu bash | Starts an interactive Ubuntu shell using bash |

2. Viewing Containers

| Command | Description |
| --- | --- |
| docker ps | Lists only running containers |
| docker ps -a | Lists all containers (running, stopped, exited) |

3. Stopping and Removing Containers

| Command | Description |
| --- | --- |
| docker stop <container_id/name> | Stops a running container |
| docker rm <container_id/name> | Removes a stopped container permanently |

4. Working with Images

| Command | Description |
| --- | --- |
| docker images | Lists all downloaded images with their sizes |
| docker pull <image> | Downloads an image from Docker Hub (does not run it) |
| docker rmi <image> | Deletes an image (remove its containers first) |

5. Running Commands Inside Containers

| Command | Description |
| --- | --- |
| docker exec <container_name> <command> | Runs a command inside a running container |
| docker exec distracted_mcclintock cat /etc/hosts | Example: reads a file from inside the container |

6. Attach / Detach Mode

| Command | Description |
| --- | --- |
| docker run kodekloud/simple-webapp | Runs the container in attached mode (foreground) |
| docker run -d kodekloud/simple-webapp | Runs in detached/background mode |
| docker attach <container_id> | Reattaches to a running container in foreground mode |
- Run the command docker version and look for the versions of the Client and Server Engine.
- Run a container with the nginx:1.14-alpine image and name it webapp:
  docker run -d --name webapp nginx:1.14-alpine
  Then check the status of the created container with the docker ps command.
- Delete all images on the host:
  First stop and delete all containers using those images, then run:
  docker rmi $(docker images -aq)
- docker run redis:4.0
  Here the version (4.0) is the image TAG. When no tag is specified, Docker assumes latest. All available tags can be found on https://hub.docker.com/
Goal:
We will create a simple Python application that:
- Asks the user for their name via standard input (input())
- Greets them with a welcome message: Welcome, <name>!
We'll then run it using Docker in both non-interactive and interactive (-it) modes to demonstrate the difference.
Step 1: Create the Python App
name = input("Enter your name: ")
print(f"Welcome, {name}!")
Step 2: Create a Dockerfile
Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
Step 3: Build the Docker Image
docker build -t python-input-demo .
Step 4: Run the Container (Non-Interactive Mode)
docker run python-input-demo
Output:
Enter your name:
It exits immediately without taking input (input() hits end-of-file), because STDIN is not attached.
Step 5: Run the Container with -it (Interactive Mode)
docker run -it python-input-demo
Output:
Enter your name: Arindam
Welcome, Arindam!
Success! The -it flags attach a pseudo-TTY and keep STDIN open, letting you interact with the container.
Docker Port Mapping
What Is Port Mapping?
Docker containers run in isolated environments. If a container serves content on port 80, you can't access it directly from your computer unless you map that port to your host machine.
Port mapping connects:
<your-host-machine>:<host-port> -> <container>:<container-port>
Example App: Nginx Web Server
Nginx is a lightweight web server that listens on port 80 by default.
Option 1: Run Container With Port Mapping
Command:
docker run -d --name mynginx -p 8080:80 nginx
What it does:
- Runs Nginx in the background (-d)
- Maps the host's port 8080 to the container's port 80
- Names the container mynginx
Test it:
Open your browser or run:
curl http://localhost:8080
You'll see the default Nginx page.
Option 2: Run Container Without Port Mapping
Command:
docker run -d --name mynginx2 nginx
This runs Nginx without mapping its port to the host, so:
- You cannot access it via localhost
- You must access it via the container's internal IP
How to Access the App via Container IP (No Port Mapping)
Step 1: Find the container IP:
docker exec <container_name_or_id> hostname -i
Output:
172.17.0.3
That's the internal Docker IP.
Step 2: Access the container (this only works from the host, not from other machines):
curl http://172.17.0.3
You'll see the Nginx welcome page even without port mapping.
This won't work from another device unless you use port mapping.
Docker Volume Mapping - MySQL Example
Why Use Volumes?
By default, when you remove a container, all data stored inside it is lost. Volumes allow you to:
- Persist database data
- Share files between host and container
- Back up and restore easily
Scenario 1: Running MySQL Without Volume Mapping
docker run --name mysql -e MYSQL_ROOT_PASSWORD=root -d mysql
- Starts a MySQL container.
- The root password is set to root.
- No volume is mounted.
Problem:
docker stop mysql
docker rm mysql
Now all data (tables, users, etc.) is lost forever, because /var/lib/mysql lived inside the container.
Scenario 2: Run MySQL With Volume Mapping
Step 1: Run MySQL with a volume:
docker run --name mysql -e MYSQL_ROOT_PASSWORD=root \
  -v /opt/datadir:/var/lib/mysql \
  -d mysql
-v /opt/datadir:/var/lib/mysql maps /opt/datadir on the host to /var/lib/mysql in the container (where MySQL stores its data).
Step 2: Stop and Remove the Container
docker stop mysql
docker rm mysql
The container is gone, but...
Step 3: Data is still safe on the host:
ls /opt/datadir
You'll see MySQL's database files.
Step 4: Start a New Container with the Same Volume
docker run --name mysql2 -e MYSQL_ROOT_PASSWORD=root \
  -v /opt/datadir:/var/lib/mysql \
  -d mysql
MySQL will reuse the existing data from /opt/datadir.
Common Docker Inspection & Logging Commands

| Command | What It Does |
| --- | --- |
| docker inspect <name> | Shows detailed container info (JSON) |
| docker logs <name> | Shows container stdout/stderr logs |
| docker logs -f <name> | Follows logs live (real-time tail) |
Docker Images
How do you create an image?
To create a Docker image for a Flask web application, follow these steps:
1. Create a Dockerfile. This file contains the instructions to build the image:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN pip3 install flask flask-mysql
COPY . /opt/source-code
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run
2. Build the Docker image, tagging it appropriately:
docker build -t arindam/my-custom-app .
3. Push to Docker Hub, if you want to make the image publicly available:
docker push arindam/my-custom-app
By following these steps, you will have created a Docker image for your Flask web application, which can be shared and deployed across different environments.
Layered Architecture in Docker (Explained with a Dockerfile)
When you build a Docker image, each instruction in your Dockerfile creates a new layer in the image. These layers are stacked on top of each other to form the final image.
What is a Layer?
- A layer is a read-only snapshot of the file system at a certain point during the image build.
- Docker caches layers and reuses them when possible to speed up builds.
- Only the top layer (created when a container is run) is writable.
Example Dockerfile (Layered Breakdown)
FROM ubuntu:20.04                              # Layer 1
RUN apt-get update -y                          # Layer 2
RUN apt-get install -y python3 pip             # Layer 3
COPY . /opt/source-code                        # Layer 4
WORKDIR /opt/source-code                       # Layer 5
RUN pip install -r requirements.txt            # Layer 6
ENV FLASK_APP=app.py                           # Layer 7
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]  # Layer 8
docker history <image-name>
shows a layer-by-layer breakdown of how a Docker image was built, including the commands used and layer sizes.
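Layer caching is also why instruction order matters: since a changed layer invalidates every layer after it, copying the dependency list and installing packages before copying the rest of the source means code edits don't force a reinstall. A hedged sketch of this ordering (assuming a typical Python app with a requirements.txt; filenames are illustrative):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Dependency layers first: rebuilt only when requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# Source code last: editing app code invalidates only this layer onward
COPY . .
CMD ["python", "app.py"]
```

With this order, a pure code change rebuilds only the final COPY layer and reuses everything above it from cache.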
Using Environment Variables in Applications & Docker
Application Code with a Hardcoded Value:
color = "red"
Step 1: Replace with an Environment Variable
Update your code to:
import os
color = os.environ.get('APP_COLOR', 'red')  # Default: red
Step 2: Run with a Custom Value in the Terminal
export APP_COLOR=blue
python app.py
This sets color = "blue" at runtime!
Using ENV Variables in Docker
Run a Docker container with an ENV variable:
docker run -e APP_COLOR=blue simple-webapp-color
Inspect an ENV variable inside a running container:
docker inspect <container_name> | grep APP_COLOR
Or see all environment variables:
docker inspect <container_name> | grep -i env
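The fallback logic of os.environ.get('APP_COLOR', 'red') can be exercised on its own. A toy helper for illustration (not part of the app above), which accepts any mapping so the behavior is testable without touching the real environment:

```python
import os

def get_color(env=None):
    """Read APP_COLOR from a mapping (defaults to os.environ),
    falling back to 'red' when the variable is unset."""
    env = os.environ if env is None else env
    return env.get("APP_COLOR", "red")

print(get_color({}))                     # unset -> falls back to the default
print(get_color({"APP_COLOR": "blue"}))  # set -> the override wins
```

Passing a plain dict instead of os.environ is just a convenience for testing; in the container, the -e flag populates the real environment.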
Docker: CMD vs ENTRYPOINT
What's the Difference?
ENTRYPOINT - The Main Command
Think of this as the "always run this command" part of your container.
- A fixed command
- Hard to override (unless you use --entrypoint)
- Great for containers that always run one app (e.g., nginx, python, java)
Example:
ENTRYPOINT ["python"]
CMD ["app.py"]
On container start it runs:
python app.py
If you run:
docker run myimage script.py
it runs:
python script.py
CMD - The Default Arguments
This is the default instruction used when no command is passed.
- Can be overridden easily at runtime
- Works with or without ENTRYPOINT
- Good for setting default parameters
Example:
CMD ["echo", "Hello from CMD"]
On container start it runs:
echo Hello from CMD
But if you run:
docker run myimage echo "Hi there"
it overrides CMD!
Combined Usage
ENTRYPOINT ["python"]
CMD ["app.py"]
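The override rule can be summed up in a few lines. This is a toy model of how the final command is assembled (not Docker's actual code): runtime arguments replace CMD, while ENTRYPOINT stays fixed.

```python
def final_command(entrypoint, cmd, runtime_args=None):
    """Toy model: runtime args replace CMD; ENTRYPOINT stays fixed."""
    args = runtime_args if runtime_args else cmd
    return entrypoint + args

# ENTRYPOINT ["python"], CMD ["app.py"]
print(final_command(["python"], ["app.py"]))                 # default start -> python app.py
print(final_command(["python"], ["app.py"], ["script.py"]))  # docker run myimage script.py -> python script.py
```

This is why ENTRYPOINT plus CMD is the common pattern: a fixed executable with swappable default arguments.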
Docker Compose
Setting Up Multi-Service Applications with Docker Compose
When setting up a complex application that runs multiple services, Docker Compose is the better approach. Instead of running several docker run commands manually, you define your entire application stack in a single YAML configuration file named docker-compose.yaml.
With docker-compose, you:
- Define all services and their configurations
- Run the complete stack using docker-compose up
- Maintain a version-controlled configuration in one file
Note: Docker Compose is intended for a single Docker host (not multiple nodes).
Example: Simple Voting Application
This is a basic voting application stack with the following components:

| Component | Description |
| --- | --- |
| vote | A Python web app to vote for Cats or Dogs |
| redis | An in-memory database that stores each vote |
| worker | A .NET app that processes the votes and updates PostgreSQL |
| db (Postgres) | Stores total counts of votes |
| result | A Node.js app to display the vote results |

Application Architecture:
User
  |
Vote App (Python) -> Redis
  |
Worker (.NET) -> PostgreSQL
  |
Result App (Node.js)
Running Services Individually (Using docker run)
Assuming all images are pre-built and available:
# Run Redis
docker run -d --name=redis redis
# Run PostgreSQL
docker run -d --name=db postgres:9.4
# Run Vote app with Redis link
docker run -d --name=vote -p 5000:80 --link redis:redis voting-app
# Run Result app with DB link
docker run -d --name=result -p 5001:80 --link db:db result-app
# Run Worker with links to both Redis and DB
docker run -d --name=worker --link redis:redis --link db:db worker
--link creates aliases and adds entries to /etc/hosts, allowing containers to resolve each other by name. (Note that --link is a legacy feature; user-defined networks are the modern alternative.)
Enter Docker Compose!
Instead of running each container manually, define all of them in a docker-compose.yaml file:
version: '3'
services:
  redis:
    image: redis
    container_name: redis
  db:
    image: postgres:9.4
    container_name: db
  vote:
    image: voting-app
    container_name: vote
    ports:
      - "5000:80"
    depends_on:
      - redis
  result:
    image: result-app
    container_name: result
    ports:
      - "5001:80"
    depends_on:
      - db
  worker:
    image: worker
    container_name: worker
    depends_on:
      - redis
      - db
depends_on ensures that services are started in the right order.
Launching the Stack
Once the docker-compose.yaml is ready, bring up the entire stack with:
docker-compose up -d
To stop and remove everything:
docker-compose down
Docker Engine
Definition: the core part of Docker, responsible for building, running, and managing containers.
Architecture:
docker (CLI) -> sends commands via REST API -> dockerd (daemon)
Components:
- Docker Daemon (dockerd)
- Docker Client (docker)
- Docker REST API
Edit the Docker Engine host configuration in /etc/docker/daemon.json:
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
sudo systemctl restart docker
Remote Docker Access
Insecure (no TLS):
docker -H tcp://<REMOTE_IP>:2375 ps
Secure (with TLS):
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H=tcp://<REMOTE_IP>:2376 ps
Namespaces in Docker
Namespaces isolate resources to create container environments.

| Namespace | Isolates | Purpose |
| --- | --- | --- |
| PID | Process IDs | Containers only see their own processes |
| NET | Network interfaces | Independent network stack |
| MNT | Mount points | Isolated filesystem |
| UTS | Hostname/domain name | Separate hostname |
| IPC | Inter-process communication | Separate shared memory, etc. |
| USER | UID/GID | Maps user IDs |

Namespaces make containers feel like separate systems.
cgroups (Control Groups)
Purpose: limit, monitor, and isolate the resource usage of processes.
Docker uses cgroups to control:
- CPU
- Memory
- Disk I/O
- Number of processes (PIDs)
- Devices
Example command:
docker run --memory="256m" --cpus="1" nginx
Monitor container usage:
docker stats <container_id>
cgroup location in Linux:
/sys/fs/cgroup/
Docker Storage
File System Structure in Docker
When Docker is installed on a system, it creates a specific folder structure at:
/var/lib/docker
This directory contains several subdirectories:

| Folder | Purpose |
| --- | --- |
| aufs/ or overlay2/ | Layered file system data used by containers |
| containers/ | Logs and metadata of individual containers |
| volumes/ | Data volumes for persistent storage |
| image/ | All image-related data |

Important: Docker stores images, containers, volumes, and other metadata in this location by default.
Layered Architecture of Docker Images
Docker builds images using a layered architecture. Each instruction in the Dockerfile forms a new layer, which contains only the changes made by that instruction.
Example:
- FROM ubuntu - base image layer
- RUN apt-get install - adds system packages
- RUN pip install flask - adds Python packages
- COPY . /app - adds source code
- ENTRYPOINT ["python", "app.py"] - defines the startup command
Layer Reuse (Cache Efficiency)
When building another application with the same base image and dependencies, Docker reuses the common layers from cache, creating only the new layers (e.g., source code and entrypoint).
This approach:
- Saves build time
- Reduces disk space usage
Understanding the Copy-on-Write (CoW) Mechanism
Image layers are read-only.
What happens when a file is modified inside a container?
- Docker copies the original file to the container's read-write layer.
- All future modifications happen on this copied version.
This is called the copy-on-write mechanism.
Benefits:
- Keeps image layers unchanged and reusable
- Allows multiple containers to share the same image layers
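The layered-read, copy-on-write behavior can be modeled with a tiny sketch. This is a toy model for intuition, not Docker's implementation: dicts stand in for filesystem layers, reads search top-down, and writes land only in the container's writable layer.

```python
class LayeredFS:
    """Toy model of copy-on-write: read-only image layers
    plus one writable layer per container."""

    def __init__(self, image_layers):
        self.image_layers = image_layers  # list of dicts, treated as read-only
        self.writable = {}                # the container's writable layer

    def read(self, path):
        # The writable layer wins; otherwise search image layers top-down.
        if path in self.writable:
            return self.writable[path]
        for layer in reversed(self.image_layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Copy-on-write: the modification lands in the writable layer only;
        # the image layers are never touched.
        self.writable[path] = data


base = {"/etc/hosts": "original"}
fs = LayeredFS([base])
fs.write("/etc/hosts", "modified")
print(fs.read("/etc/hosts"))  # the container sees its modified copy
print(base["/etc/hosts"])     # the shared image layer is untouched
```

Because the image layer never changes, any number of such containers can share it while each keeps its own divergent writable layer.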
Container Writable Layer
When a container is created (docker run), Docker adds a writable layer above the image layers.
This layer stores:
- Logs
- Temporary files
- Any data written by applications
This writable layer is:
- Ephemeral: it exists only as long as the container exists.
- Deleted when the container is removed.
Persisting Data Using Volumes
Problem: what if you want to preserve data (e.g., DB records) even after container deletion?
Solution: use volumes.
Volume Mounting (Managed by Docker)
Step 1: Create a volume
docker volume create data_volume
This creates a directory under:
/var/lib/docker/volumes/data_volume/_data
Step 2: Run a container with the volume
docker run -v data_volume:/var/lib/mysql mysql
- data_volume is mounted to /var/lib/mysql inside the container.
- All DB writes go to this volume on the host.
- Data persists even after the container is deleted.
If you skip creating the volume beforehand:
docker run -v data_volume2:/var/lib/mysql mysql
Docker will auto-create data_volume2.
Bind Mounting (Using a Host Directory)
If you already have data on the host (e.g., /data/mysql) and want to use that:
docker run -v /data/mysql:/var/lib/mysql mysql
This maps the host folder /data/mysql directly into the container.
This is called bind mounting.
Modern Way: Using the --mount Option
The -v flag is considered the old style.
Preferred: --mount
docker run \
  --mount type=bind,source=/data/mysql,target=/var/lib/mysql \
  mysql
Key-value options:

| Option | Value |
| --- | --- |
| type | bind or volume |
| source | Path on the host system (or a volume name) |
| target | Mount point inside the container |
Storage Drivers - The Core Behind Docker Storage
Docker uses storage drivers to manage:
- The layered image architecture
- Copy-on-write functionality
- The container writable layer
Common storage drivers:

| Driver | Notes |
| --- | --- |
| AUFS | Historically the default on Ubuntu systems |
| Overlay2 | Modern driver, widely used across Linux distros |
| Device Mapper | Good support on CentOS and Fedora |
| BTRFS | Advanced features, less commonly used |
| ZFS | Enterprise-grade features |

Docker auto-selects the best storage driver for the host OS.
Docker Networking
Overview
When Docker is installed, it automatically creates 3 default networks:

| Network Type | Description |
| --- | --- |
| bridge | Default network for containers |
| none | Isolates the container from all networks |
| host | Shares the host's network directly |

Basic Networking Commands

| Purpose | Command Example |
| --- | --- |
| Run container on the default bridge | docker run ubuntu |
| Run container on the none network | docker run --network=none ubuntu |
| Run container on the host network | docker run --network=host ubuntu |

Bridge Network (Default)
- A private internal network created by Docker on the host.
- Containers get an internal IP (typically from the 172.17.x.x range).
- Containers can communicate with each other using internal IPs.
- External access requires port mapping:
docker run -p 8080:5000 webapp
Example:
docker run -d --name web1 nginx
docker run -d --name web2 nginx
# web1 and web2 can talk to each other via their 172.17.x.x IPs
# (name-based resolution requires a user-defined network)
Host Network
- The container shares the same network interface as the host.
- No network isolation: it directly uses the host's IP and ports.
- Useful for performance or access simplicity (e.g., for servers).
Pros:
- No need for port mapping (the -p flag is not required).
- Fast network performance.
Cons:
- Port conflicts: you can't run multiple containers on the same port.
- Breaks isolation between host and container.
None Network
- The container is not attached to any network.
- No external access.
- No communication with other containers.
- Best for security and sandboxing.
Creating a Custom Bridge Network
If you want containers to be grouped or isolated, create a custom internal network:
docker network create \
  --driver bridge \
  --subnet 182.18.0.0/16 \
  custom-isolated-network
Benefits:
- Group containers logically.
- Provide isolation between groups.
- Assign static subnets.
Listing & Inspecting Networks

| Task | Command |
| --- | --- |
| List all networks | docker network ls |
| Inspect a network's settings | docker network inspect <network_name> |
| Check a container's IP and MAC | docker inspect <container_name> (see the NetworkSettings section) |
Container-to-Container Communication
Access by IP
- You can use a container's internal IP, like 172.17.0.3.
- Not reliable: IPs may change after restarts.
Access by Name (Best Practice)
- Containers can resolve each other by name (on user-defined networks).
- Docker runs a built-in DNS server at 127.0.0.11 inside containers.
Example:
A web app container can connect to MySQL using:
mysql://mysql_container:3306
as long as both are on the same Docker network.
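Name-based resolution is what makes Compose stacks work without hardcoded IPs. A minimal sketch (the web image and environment variable names are illustrative assumptions, not from the original):

```yaml
version: '3'
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
  web:
    image: my-webapp          # hypothetical app image
    environment:
      # the app reaches MySQL simply as "db" - Docker's embedded
      # DNS (127.0.0.11) resolves service names on this network
      DATABASE_HOST: db
      DATABASE_PORT: "3306"
    depends_on:
      - db
```

Compose puts both services on the same user-defined network by default, so the service name doubles as a stable hostname.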
Under the Hood: How Docker Implements Networking
1. Network namespaces
- Each container runs in its own network namespace.
- This provides complete isolation: interfaces, routing tables, etc.
2. Virtual Ethernet (veth) pairs
- Docker connects containers using veth pairs: one end inside the container, the other end in the bridge.
- A veth pair acts like a virtual network cable.
3. Bridge interface
- Docker creates a bridge (e.g., docker0) on the host.
- All container interfaces connect to this bridge.
4. Built-in DNS server
- Docker includes an embedded DNS server to resolve container names.
- Internal IP: 127.0.0.11 inside containers.
- Automatically updated when containers join or leave a network.
Docker Registry
What is a Docker Registry?
A Docker registry is a storage and distribution system for Docker images. It is where images are stored, managed, shared, and retrieved.
Key Concepts

| Term | Description |
| --- | --- |
| Docker Image | A packaged application including code, libraries, and environment settings |
| Registry | A system that stores Docker images and makes them available for download |
| Repository | A collection of related images (usually different versions of the same app) |
| Tag | A label used to identify image versions (e.g., latest, v1.0) |
Types of Docker Registries
1. Docker Hub (public cloud registry)
- The default registry used by Docker.
- Hosted at https://hub.docker.com
- Pull images directly:
docker pull nginx
- Or push your own:
docker push username/myapp
2. Private registry
You can host your own Docker registry within your organization:
docker run -d -p 5000:5000 --name registry registry:2
Useful for:
- Internal development
- Restricted access
- Faster access inside a local network
3. Other registries
- Amazon ECR (Elastic Container Registry)
- Google Container Registry (GCR)
- GitHub Container Registry (GHCR)
- Harbor (open-source, enterprise-grade registry)
Docker Registry vs Repository
- A registry is the server that stores images.
- A repository is a collection of images (usually different versions of the same app) within the registry.
Example:
- nginx is a repository
- nginx:latest and nginx:1.21 are image tags in that repository
- They are stored on Docker Hub, which is the registry
Common Docker Registry Commands

| Action | Command Example |
| --- | --- |
| Pull an image | docker pull ubuntu |
| Push an image | docker push myrepo/myimage:tag |
| Log in to a registry | docker login |
| Tag a local image | docker tag image myrepo/image:tag |
| Run a private registry | docker run -d -p 5000:5000 registry:2 |

Authentication & Access Control
- Docker Hub supports public and private repositories.
- Private registries can be secured using HTTPS, authentication mechanisms, and role-based access control.

| Step | Action | Example Command |
| --- | --- | --- |
| 1 | Log in to the registry | docker login myregistry.com |
| 2 | Pull an image | docker pull nginx |
| 3 | Tag it for the private registry | docker tag nginx myregistry.com/myuser/nginx:v1 |
| 4 | Push to the registry | docker push myregistry.com/myuser/nginx:v1 |
| 5 | View images on the registry | curl http://myregistry.com/v2/_catalog |
| 6 | Pull from the registry | docker pull myregistry.com/myuser/nginx:v1 |
| 7 | Run a container from the registry image | docker run -d -p 8080:80 myregistry.com/myuser/nginx:v1 |
| 8 | Stop and remove the container | docker stop nginx-test && docker rm nginx-test |
| 9 | Remove unused images | docker image prune -a |
Container Orchestration - Explained Visually
Docker and Its Limitation
Running a Single Application
You can run a Node.js application with Docker using:
docker run nodejs
- Simple and fast for local development
- But it only runs one instance on one host
The Problems with Manual Scaling

| Challenge | Description |
| --- | --- |
| Manual scaling | You must run docker run multiple times to scale the app |
| Manual monitoring | You must monitor performance and container health yourself |
| No fault recovery | If a container fails, you must restart it manually |
| Single host dependency | If the Docker host crashes, all containers go down |

Solution: Container Orchestration
A system for managing, scaling, healing, and automating containerized applications across multiple Docker hosts.
What It Does
- Automatic deployment of containers
- Auto-scaling: scale up when demand increases
- Auto down-scaling: scale down when demand drops
- Self-healing: restart failed containers
- Load balancing: distribute traffic across instances
- Multi-host support: containers can run across many nodes
- High availability: no single point of failure
Typical Setup
[ Orchestration System ]
          |
+----------------------+
|    Multiple Hosts    |
+----------------------+
| Host 1 | Host 2 | ...|
| Nodejs | Nodejs | ...|
+----------------------+
Each host runs Docker and is controlled by the orchestration tool.
Docker Swarm Example
docker service create --replicas=100 nodejs
- Deploys 100 instances of the nodejs application
- Swarm manages distribution, health, and load balancing
Popular Container Orchestration Tools

| Tool | Highlights |
| --- | --- |
| Docker Swarm | Native Docker tool, simple setup, integrated CLI |
| Kubernetes | Most popular, powerful, complex, used at scale |
| Nomad | Lightweight, by HashiCorp, easy integration |
| OpenShift | Enterprise Kubernetes by Red Hat |
๐ Docker Swarm โ Container Orchestration with Ease
๐ง What is Docker Swarm?
Docker Swarm is Docker's native container orchestration tool that allows you to combine multiple Docker hosts into a single cluster (called a Swarm).
It provides high availability, load balancing, and fault tolerance โ all while keeping Docker's simplicity.
๐งฑ Why Use Swarm?
Without Swarm | With Swarm |
Manual container deployment on each host | One centralized command to deploy services |
No built-in failover | Automatic container recovery |
No automatic load balancing | Built-in service distribution |
Difficult to scale across nodes | Easy scaling with --replicas flag |
๐ง Architecture of Docker Swarm
+----------------------------+
| Swarm Manager |
| (Leader & decision maker) |
+----------------------------+
|
-------------------------
| | |
Worker 1 Worker 2 Worker 3 โ Executes tasks/containers
๐ฏ Roles:
Manager Node:
Initializes and controls the swarm
Accepts service creation commands
Distributes tasks
Worker Nodes:
Execute containers (tasks)
Report status to manager
๐ Step-by-Step: Setting Up Docker Swarm
๐ฅ๏ธ Prerequisites:
- At least 2 or more hosts (VMs or machines) with Docker installed
โ Step 1: Initialize the Swarm (Manager Node)
docker swarm init
๐ Output includes a docker swarm join
command with a token for worker nodes.
โ Step 2: Join Worker Nodes to the Swarm
Run the command from manager output on each worker node:
docker swarm join \
--token <worker-token> \
<manager-ip>:2377
โ Once joined, these nodes become Swarm Nodes.
๐ฆ Deploying Applications with Docker Services
Instead of manually running containers on each host, you can now use Docker Services, which Swarm distributes automatically.
๐ ๏ธ Traditional Method (not ideal):
docker run my-web-server
Must be run manually on each node
No auto-scaling, recovery, or balancing
โ Swarm-Orchestrated Method (recommended):
docker service create --replicas=3 my-web-server
Run from manager node only
Creates 3 replicas of your web server
Distributes replicas across worker nodes
Monitors health & restarts if one fails
💡 Swarm Service In Action

| Feature | Benefit |
| --- | --- |
| `--replicas=3` | Define how many instances to run |
| Manager node decides | Automatically schedules containers to worker nodes |
| Auto-healing | Failed containers restart automatically |
| Load balancing | Swarm routes external traffic across all replicas |
Swarm Management Commands

| Purpose | Command |
| --- | --- |
| View swarm nodes | `docker node ls` |
| View running services | `docker service ls` |
| Inspect a service | `docker service inspect <service>` |
| Scale a service | `docker service scale <svc>=5` |
| Remove a service | `docker service rm <service>` |
| Leave the swarm (worker) | `docker swarm leave` |
| Leave the swarm (manager, forced) | `docker swarm leave --force` |
☸️ Kubernetes: The King of Container Orchestration
🧱 What is Kubernetes?
Kubernetes (a.k.a. K8s) is an open-source container orchestration platform that automates:
- 🚀 Deployment
- ⚖️ Scaling
- ♻️ Self-healing
- 🔄 Rolling Updates
Think of it as the "brain" behind managing containers at scale in production environments.
🐳 Docker vs ☸️ Kubernetes

| Docker CLI | Kubernetes CLI (kubectl) |
| --- | --- |
| `docker run my-web-server` | `kubectl create deployment my-web-server --image=web:v1 --replicas=1000` |
| Manual scaling | `kubectl scale deployment my-web-server --replicas=2000` |
| Manual update | `kubectl set image deployment/my-web-server web=web:v2` |
| Manual rollback | `kubectl rollout undo deployment/my-web-server` |
| No auto-scaling | Built-in auto-scaling capabilities |

(Note: the older `kubectl run --replicas` and `kubectl rolling-update` commands have been removed from modern kubectl; `kubectl create deployment`, `kubectl set image`, and `kubectl rollout undo` are their current equivalents.)
🌍 What Can Kubernetes Do?
- ✅ Deploy thousands of app instances with one command
- 🔁 Automatically scale up/down based on load
- 🔄 Rolling upgrades to update versions without downtime
- ↩️ Roll back to previous versions if something goes wrong
- 🧪 Perform A/B testing (canary deployments)
- 🧠 Monitor app health and restart failed containers
- 🌍 Spread containers across multiple machines
Docker & Kubernetes: What's the Relationship?
Kubernetes runs containers, and Docker was the original container runtime it used.
But Kubernetes now supports multiple runtimes:
- 🔹 Docker (via cri-dockerd; built-in dockershim support was removed in Kubernetes 1.24)
- 🔹 containerd
- 🔹 CRI-O
- 🔹 rkt (deprecated)
📌 Kubernetes manages containers; the container runtime runs them.
🏗️ Kubernetes Architecture
⚙️ Cluster = Master + Worker Nodes

```
            +----------------------------+
            |        Master Node         |
            | (Control Plane Components) |
            +----------------------------+
              /           |           \
+------------+    +------------+    +------------+
| Worker Node|    | Worker Node|    | Worker Node|
+------------+    +------------+    +------------+
      |                 |                 |
  [Docker]        [containerd]        [CRI-O]
      |                 |                 |
[Pods/Containers] [Pods/Containers] [Pods/Containers]
```
🧠 Master Node Components (Control Plane)

| Component | Description |
| --- | --- |
| API Server | Exposes the Kubernetes API; all commands from kubectl go through this |
| etcd | Distributed key-value store that holds all cluster state |
| Controller Manager | Watches for changes (e.g., failed pods) and takes corrective action |
| Scheduler | Decides which node runs a new pod (based on resources, affinity, etc.) |
| Cloud Controller Manager | Integrates Kubernetes with cloud provider services (optional) |
🧠 Worker Node Components

| Component | Description |
| --- | --- |
| kubelet | Agent that runs on each node; takes instructions from the API server |
| Container Runtime | Runs the actual containers (e.g., containerd, CRI-O, Docker) |
| kube-proxy | Maintains network rules and service discovery inside the cluster |
🛠️ kubectl: The Kubernetes Command-Line Tool
You interact with Kubernetes using kubectl (Kube Control).
🧠 Examples:

```
kubectl create deployment my-web-server --image=web:v1 --replicas=1000
kubectl scale deployment my-web-server --replicas=2000
kubectl set image deployment/my-web-server web=web:v2
kubectl rollout undo deployment/my-web-server
kubectl get pods
kubectl describe deployment my-web-server
```

🧠 kubectl communicates with the API Server, which updates the cluster state in etcd and schedules changes through the scheduler.
📦 Kubernetes Objects Overview

| Object | Purpose |
| --- | --- |
| Pod | Smallest deployable unit in Kubernetes (runs 1+ containers) |
| Deployment | Manages ReplicaSets and rolling updates |
| Service | Exposes pods as a single endpoint (with load balancing) |
| ReplicaSet | Ensures the desired number of pod replicas |
| Namespace | Logical partition within the cluster |
| ConfigMap & Secret | Store configuration and sensitive data |
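As a sketch of how these objects fit together, here is a minimal Deployment plus Service manifest. The app name `my-web-server` and image `web:v1` mirror the kubectl examples above; the labels and ports are illustrative assumptions:

```yaml
# deployment-and-service.yaml: illustrative manifest (labels/ports are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-server
spec:
  replicas: 3                  # Deployment creates a ReplicaSet to maintain these
  selector:
    matchLabels:
      app: my-web-server
  template:
    metadata:
      labels:
        app: my-web-server
    spec:
      containers:
        - name: web
          image: web:v1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-server
spec:
  selector:
    app: my-web-server         # Service load-balances across all matching pods
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f deployment-and-service.yaml`; the Deployment manages the pods while the Service gives them one stable, load-balanced endpoint.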
⚙️ Kubernetes = Production-Grade Orchestration

| Feature | Available in Kubernetes? |
| --- | --- |
| Auto-scaling | ✅ Horizontal Pod Autoscaler |
| Load balancing | ✅ Internal and external |
| Rolling updates | ✅ Built-in |
| Rollbacks | ✅ Instant |
| Health checks | ✅ Liveness & readiness probes |
| Self-healing | ✅ Pod restarts & replacement |
| Resource management | ✅ CPU & memory limits |
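The auto-scaling row refers to the Horizontal Pod Autoscaler. A minimal sketch of an HPA targeting the `my-web-server` Deployment used in the examples above (the replica bounds and CPU target are illustrative assumptions):

```yaml
# hpa.yaml: illustrative HorizontalPodAutoscaler (bounds and target are assumptions)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web-server
  minReplicas: 3                   # never scale below this
  maxReplicas: 20                  # cap for scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

With `kubectl apply -f hpa.yaml`, Kubernetes continuously adjusts the replica count between the bounds based on observed CPU usage.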
Written by Arindam Baidya