2nd week

Table of contents
- CI/CD pipeline
- Docker and DockerHub
- What is Docker?
- What is DockerHub?
- How Docker Works (In Simple Steps)
- Important Docker Commands with Examples
- Why Docker is Important in DevOps
- Sample DockerHub Workflow
- What is a Docker Image?
- What is a Docker Container?
- Real-World Analogy: Movie DVD
- Common Commands
- What is Port Mapping in Docker?
- Why is Port Mapping Important?
- Simple Analogy
- Syntax of Docker Port Mapping
- Real Example with a Node.js App
- What Does EXPOSE Do?
- Mapping to a Different Host Port
- Multiple Port Mappings
- What if You Don't Map a Port?
- Common Use Cases
- What is a Dockerfile?
- Why is a Dockerfile Important?
- Sample Dockerfile for a Node.js App
- Let’s Understand Each Line
- Bonus: Extended Example (React + Node + Mongo)
- Example Build & Run
- docker push — Upload an Image to DockerHub
- docker pull — Download an Image from DockerHub
- What is a Docker Layer?
- Example Dockerfile with Layers
- Key Benefits of Layers
- Optimization Tips Using Layers
- Real Example (Optimized)
- What is a Volume in Docker?
- Docker Networks
CI/CD pipeline
What is CI/CD?
CI/CD stands for Continuous Integration and Continuous Delivery/Deployment. It is a process that automates the steps from writing code to deploying it live on servers.
Why is CI/CD Important?
- Automation: No need to manually build, test, or deploy — CI/CD automates it all.
- Fewer Bugs: Tests run automatically to catch errors early.
- Faster Development: Developers can push features to users faster.
- Consistency: The same process every time = fewer mistakes.
How CI/CD Works (With Real-Life Example)
Imagine you're building a To-Do Web App.
Developer Workflow:
1. Developer writes a new feature → pushes code to GitHub.
2. GitHub triggers the CI pipeline.
3. CI:
   - Installs dependencies (`npm install`)
   - Runs tests (`npm test`)
4. If successful, CD:
   - Deploys code to a staging server for testing.
   - After approval, deploys to the production server.
CI/CD Pipeline Stages Explained
| Stage | What Happens | Tool Examples |
| --- | --- | --- |
| Build | App is packaged/bundled. | Webpack, Docker |
| Test | Automated tests check if everything works. | Jest, Mocha, Cypress |
| Release | App is sent to an environment (e.g., staging). | GitHub Actions, Jenkins |
| Deploy | App is deployed to production automatically or after approval. | Vercel, AWS, Netlify |
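To make the table concrete, here's roughly what each stage boils down to if you ran it by hand — an illustrative sketch, not the exact commands any one tool uses:

```bash
# Illustrative stage-by-stage commands for the To-Do app
npm install && npm run build    # Build: bundle/package the app
npm test                        # Test: run the automated checks
docker build -t todo-app .      # Release: produce a deployable artifact
docker push lavkumar/todo-app   # Deploy: publish it where servers can pull it
```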
Sample CI/CD YAML (GitHub Actions Example for Node.js App)
```yaml
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: 18

      - name: Install dependencies
        run: npm install

      - name: Run tests
        run: npm test

      - name: Build App
        run: npm run build

      - name: Deploy to Staging (optional)
        if: github.ref == 'refs/heads/main'
        run: echo "Deploying to staging..."
```
Application Environments
Different environments are used to separate development, testing, and production.
| Environment | Purpose | Example |
| --- | --- | --- |
| Development | Local machine, for writing code. | `npm run dev` |
| Testing | Automatically runs tests for every update. | `npm test` |
| Staging | A copy of production for manual testing. | https://staging.todoapp.com |
| Production | The real app used by users. | https://todoapp.com |
Sample Environment Config
```bash
# .env.development
DATABASE_URL=mongodb://localhost/dev-db
API_URL=http://localhost:3000/api

# .env.staging
DATABASE_URL=mongodb://staging-db-url
API_URL=https://staging.todoapp.com/api

# .env.production
DATABASE_URL=mongodb://prod-db-url
API_URL=https://todoapp.com/api
```
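How the right file gets loaded depends on your app; assuming a dotenv-style setup where the app reads `.env.$NODE_ENV` at startup (an assumption, not shown above), switching environments is just:

```bash
# Hypothetical: the app is assumed to load .env.$NODE_ENV at startup
NODE_ENV=development npm run dev   # uses .env.development
NODE_ENV=staging node app.js       # uses .env.staging
NODE_ENV=production node app.js    # uses .env.production
```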
How CI/CD Improves Your Application
- Faster Feedback: Bugs are caught immediately when pushing code.
- Confidence in Code: If tests pass, your code is stable.
- Safe Deployments: You test everything in staging before going live.
- Team Efficiency: Devs can focus on features, not fixing deploy issues.
Example: To-Do App Flow with CI/CD
Let’s say you add a new feature: "Mark as Important" to your React To-Do app.
Process:
1. Code added and pushed to GitHub.
2. GitHub Actions runs tests — all pass.
3. Code is deployed to Staging.
4. Team tests on staging.
5. After approval, the same code goes to Production automatically.
Final Summary
- CI/CD automates build, test, and deploy processes.
- It ensures clean, tested, and reliable releases.
- Staging/testing environments catch bugs before real users see them.
- Great for both solo developers and large teams.
CI/CD, which stands for Continuous Integration and Continuous Delivery/Deployment, automates the process from coding to server deployment, enhancing development speed and reliability. It reduces manual errors, catches bugs early through automated testing, and ensures consistent, safe deployments by using various environments like development, testing, staging, and production. This approach accelerates feature delivery, boosts team efficiency, and maintains code stability, making it beneficial for both individual developers and large teams.
Docker and DockerHub
What is Docker?
Docker is an open-source platform that lets you build, package, and run applications in a container.
A container is a lightweight, isolated environment that has everything your application needs to run — like code, libraries, settings, and dependencies — all bundled together.
- Traditional way: You install Node.js, Python, MongoDB manually on your machine.
- Docker way: You just run a container with everything pre-installed — no setup required.
What is DockerHub?
DockerHub is like GitHub, but for Docker images.
- GitHub stores code.
- DockerHub stores container images (pre-built apps, environments).
On DockerHub you can:
- Push your app images to share with others
- Pull official images (like `node`, `mongo`, `nginx`) to run them locally — for example:
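Trying an official image takes two commands (a minimal sketch using the real `node` image):

```bash
docker pull node:18                 # download the official Node.js 18 image
docker run node:18 node --version   # run it and print the Node version inside
```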
DockerHub vs GitHub

| Feature | DockerHub | GitHub |
| --- | --- | --- |
| Stores | Docker images | Source code |
| Used For | Running apps in containers | Version control of code |
| Example | `docker pull node` | `git clone https://...` |
| CI/CD Role | Provides pre-built app environments | Hosts code and triggers CI/CD |
How Docker Works (In Simple Steps)
Let’s say you build a Node.js To-Do App. Here's what happens:
1. You write a `Dockerfile` — this describes how to set up your app in a container.
2. You run `docker build` — Docker creates an image from your code.
3. You run `docker run` — Docker launches a container from that image.
4. You can push the image to DockerHub, and anyone can pull and run it anywhere.
Important Docker Commands with Examples
Here’s a list of commonly used Docker commands and what they do:
| Command | Description | Example |
| --- | --- | --- |
| `docker --version` | Check Docker version | `docker --version` |
| `docker build -t image-name .` | Create a Docker image from a Dockerfile | `docker build -t todo-app .` |
| `docker images` | List all Docker images | `docker images` |
| `docker run image-name` | Run a container from an image | `docker run todo-app` |
| `docker ps` | See running containers | `docker ps` |
| `docker stop container-id` | Stop a running container | `docker stop abc123` |
| `docker rm container-id` | Remove a container | `docker rm abc123` |
| `docker rmi image-name` | Remove an image | `docker rmi todo-app` |
| `docker pull image-name` | Download an image from DockerHub | `docker pull node` |
| `docker push image-name` | Upload an image to DockerHub | `docker push lavkumar/todo-app` |
| `docker login` | Log in to DockerHub | `docker login` |
| `docker-compose up` | Start multiple containers defined in a file | `docker-compose up` |
Why Docker is Important in DevOps
In DevOps, automation and consistency are key. Docker plays a huge role:
Real-World DevOps Benefits:
| Benefit | Explanation |
| --- | --- |
| ✅ Consistency | App runs the same on every machine |
| 🚀 Faster Deployment | Apps are ready-to-run in seconds |
| 🔁 Easy Rollbacks | Revert to a previous image if the new version fails |
| 🔒 Isolated Testing | Test in clean environments without affecting your system |
| 🔁 Automation | Perfect for CI/CD pipelines in GitHub Actions, Jenkins, GitLab CI, etc. |
Sample DockerHub Workflow
1. Build your Docker image:

```bash
docker build -t lavkumar/todo-app .
```

2. Push to DockerHub:

```bash
docker login
docker push lavkumar/todo-app
```

3. Pull and run from DockerHub (on any machine):

```bash
docker pull lavkumar/todo-app
docker run lavkumar/todo-app
```
What is a Docker Image?
A Docker image is like a blueprint or template for creating containers.
Think of it as:
🧁 A recipe for making cupcakes.
The recipe (Docker image) tells you:
- What ingredients to use (code, dependencies)
- How to cook them (configuration, environment variables)
- What the final cupcake should look like (application behavior)

🔍 Key Properties of Images:
- Read-only: You can’t change an image once it’s built.
- Portable: You can move or share it via DockerHub or a registry.
- Reusable: One image can create many containers (see the sketch after the example below).
Example:
You build an image for a Node.js app:
```dockerfile
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "app.js"]
```

Then you build it:

```bash
docker build -t my-node-app .
```

This `my-node-app` is your Docker image.
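And because images are read-only templates, that one image can back any number of containers — a quick sketch (container names are arbitrary):

```bash
# Two independent containers from the same image
docker run -d --name app1 my-node-app
docker run -d --name app2 my-node-app
docker ps   # both containers appear, each created from my-node-app
```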
What is a Docker Container?
A Docker container is a running instance of a Docker image.
Using our cupcake analogy:
🍰 A container is an actual cupcake made using the recipe (image).
You can run, stop, restart, or delete containers — they are:
- Mutable: You can change things while it’s running.
- Isolated: Each container runs in its own environment.
- Ephemeral: You can stop and destroy containers without affecting the image.
🔍 Key Properties of Containers:
- Created from images using `docker run`
- Can be started and stopped
- Have their own filesystem and network
🚀 Example:
You run a container from your image:
```bash
docker run -p 3000:3000 my-node-app
```

Now:
- The image is `my-node-app`
- The container is the running version of that image, accessible on port 3000
Real-World Analogy: Movie DVD
🎬 Image = The DVD of a movie
- You can keep the DVD, share it, and make copies.
📺 Container = Watching the movie on your TV
- You press play (run), watch the movie (app), and when you stop it, the DVD is still there.
Common Commands
For Docker Images:
```bash
docker build -t myapp .   # Build an image
docker images             # List all images
docker rmi myapp          # Delete an image
```
For Docker Containers:
```bash
docker run myapp             # Run a container from an image
docker ps                    # List running containers
docker stop <container_id>   # Stop a container
docker rm <container_id>     # Remove a container
```
What is Port Mapping in Docker?
When you run a Docker container, the application inside the container usually listens on a port (like 3000 or 8080). But your computer (host system) doesn't automatically know about that port.
That’s where port mapping comes in.
It’s a way to link a port on your local machine (host) to a port inside the Docker container.
Why is Port Mapping Important?
Without port mapping:
Your containerized app might be running fine.
But you won’t be able to access it from your browser or tools.
With port mapping:
- You can open your browser and go to
localhost:3000
to access your app running inside Docker.
Simple Analogy
Imagine a shipping container (Docker container) has a door inside labeled "Port 3000".
But unless you map it to a door on your building (your host machine), no one can go inside.
Mapping port 3000 in the container to port 3000 on your computer is like aligning both doors so visitors (like your browser) can enter.
Syntax of Docker Port Mapping
```bash
docker run -p <host-port>:<container-port> image-name
```

Example:

```bash
docker run -p 3000:3000 my-todo-app
```

This tells Docker:
- Open port 3000 on my local machine (host)
- Forward all traffic to port 3000 inside the container
Real Example with a Node.js App
Let’s say you have an app that runs on port 3000 in `app.js`:

```js
app.listen(3000, () => {
  console.log('Server running on port 3000');
});
```
Your Dockerfile might look like:
```dockerfile
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "app.js"]
```
Now build and run:
```bash
docker build -t my-todo-app .
docker run -p 3000:3000 my-todo-app
```

Now go to `http://localhost:3000` — your app is live!
What Does EXPOSE Do?
In the Dockerfile:
```dockerfile
EXPOSE 3000
```

- It documents which port the container is listening on.
- It does not publish the port to the host.
- You still need `-p` or `--publish` during `docker run` (see the sketch below).
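One thing EXPOSE does enable: `docker run -P` (capital P) publishes every exposed port to a random free host port — a quick sketch:

```bash
docker run -d -P my-todo-app   # -P maps each EXPOSEd port to a random host port
docker port <container-id>     # shows which host port got mapped to 3000
```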
Mapping to a Different Host Port
You can even map to a different port on your machine:
```bash
docker run -p 8080:3000 my-todo-app
```

This means:
- Your app inside the container runs on port `3000`
- But on your machine, it’s accessible at `http://localhost:8080`
Multiple Port Mappings
You can map multiple ports:
```bash
docker run -p 8080:80 -p 8443:443 my-nginx-app
```

This example:
- Maps HTTP (port 80) to 8080
- Maps HTTPS (port 443) to 8443
What if You Don't Map a Port?
If you don’t use `-p`, your app will still run inside the container, but:
- You won’t be able to reach it from your browser or API tools.
- Only other containers on the same Docker network can talk to it (demonstrated below).
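A small sketch of that container-only access, assuming the `my-todo-app` image from earlier and the public `curlimages/curl` image (whose entrypoint is `curl`):

```bash
docker network create demo-net
docker run -d --name api --network demo-net my-todo-app   # note: no -p flag
# not reachable from localhost, but a sibling container can reach it by name:
docker run --rm --network demo-net curlimages/curl http://api:3000
```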
Common Use Cases
| Use Case | Command | Access URL |
| --- | --- | --- |
| Run a Node app on the same port | `docker run -p 3000:3000 myapp` | localhost:3000 |
| Run a React app on a different port | `docker run -p 5000:3000 myreactapp` | localhost:5000 |
| Run NGINX with HTTPS | `docker run -p 443:443 nginx` | https://localhost |
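To check which mappings a running container actually has, `docker ps` output can be trimmed with a Go-template `--format` string:

```bash
docker ps --format '{{.Names}} -> {{.Ports}}'
# e.g. myapp -> 0.0.0.0:3000->3000/tcp
```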
What is a Dockerfile?
A Dockerfile is a plain text file that contains a set of instructions used to build a Docker image.
Think of it like a recipe: each line tells Docker what to do — like "install this", "copy that", "run this command".
Why is a Dockerfile Important?
With a Dockerfile, you can:
- Automate the setup of your application
- Ensure consistency across environments
- Create portable containers that work the same everywhere
It removes the “it works on my machine” problem.
Sample Dockerfile for a Node.js App
```dockerfile
# 1. Use an official base image
FROM node:18

# 2. Set working directory inside the container
WORKDIR /app

# 3. Copy dependency files first
COPY package*.json ./

# 4. Install dependencies
RUN npm install

# 5. Copy rest of the app code
COPY . .

# 6. Tell Docker what port your app listens on
EXPOSE 3000

# 7. Command to start the app
CMD ["node", "app.js"]
```
Let’s Understand Each Line
🔹 1. `FROM node:18`
Starts from a base image — here, Node.js version 18.
- Docker doesn't install Node from scratch; it uses an official pre-built image from DockerHub.
- You can use other base images too, like `python:3.10`, `ubuntu:22.04`, or `nginx:latest`.
🔹 2. `WORKDIR /app`
Sets the working directory inside the container.
- All the following commands will run in `/app`.
- If the folder doesn’t exist, Docker creates it.
- It’s like doing `cd /app` before each command.
🔹 3. `COPY package*.json ./`
Copies only `package.json` and `package-lock.json` to the container.
- These files list your dependencies.
- Copying them separately allows Docker to cache this layer.
- Why? If your app code changes but dependencies don’t, Docker won’t re-run `npm install`.
🔹 4. `RUN npm install`
Installs project dependencies inside the container.
- It runs `npm install` just like you do locally.
- The result is baked into the Docker image.
🔹 5. `COPY . .`
Copies the rest of your project files into the container.
- First `.` = current directory on your machine
- Second `.` = current directory inside the container (`/app`, from `WORKDIR`)
🔹 6. `EXPOSE 3000`
Tells Docker your app runs on port 3000.
- This is documentation for people and tools (like Docker Compose).
- It does not actually publish the port (you still need `-p 3000:3000` when running).
🔹 7. `CMD ["node", "app.js"]`
Defines the default command to run when the container starts.
- Docker runs this command when you launch the container.
- If you were running Python, it could be `CMD ["python3", "main.py"]`.
Bonus: Extended Example (React + Node + Mongo)
Here’s a breakdown of another Dockerfile structure for a React frontend:
```dockerfile
# Build phase
FROM node:18 AS build
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build

# Production phase
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
```
This uses multi-stage builds:
- First stage: build the React app
- Second stage: serve it with a lightweight NGINX server (build and run shown below)
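Assuming the Dockerfile above, the build and run would look like this (NGINX listens on port 80 inside the container):

```bash
docker build -t my-react-app .       # runs both stages; only the NGINX stage ships
docker run -p 8080:80 my-react-app   # static build served at http://localhost:8080
```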
Example Build & Run
```bash
docker build -t my-node-app .
docker run -p 3000:3000 my-node-app
```
docker push — Upload an Image to DockerHub
Example:
```bash
docker build -t my-node-app .
docker tag my-node-app lavkumar/my-node-app:1.0
docker push lavkumar/my-node-app:1.0
```
You can now find it at https://hub.docker.com/r/lavkumar/my-node-app
docker pull — Download an Image from DockerHub
Syntax:

```bash
docker pull username/image-name:tag
```

Example:

```bash
docker pull lavkumar/my-node-app:1.0
```

This will download the image, and you can then run it:

```bash
docker run -p 3000:3000 lavkumar/my-node-app:1.0
```
What is a Docker Layer?
Every Docker image is made up of layers.
A layer is a step in the Dockerfile — each instruction like `FROM`, `COPY`, and `RUN` creates a new layer.
These layers are stacked on top of each other to form a final image.
Think of it like this:
- Each layer is like a slice in a sandwich.
- Docker stacks those layers to build your app image.
- Instead of making the entire sandwich from scratch every time, Docker reuses slices (layers) that haven’t changed!
Example Dockerfile with Layers
```dockerfile
FROM node:18             # Layer 1
WORKDIR /app             # Layer 2
COPY package.json .      # Layer 3
RUN npm install          # Layer 4
COPY . .                 # Layer 5
CMD ["node", "app.js"]   # Layer 6
```
This creates 6 layers.
Key Benefits of Layers
1. Layer Caching = Faster Builds
Docker remembers unchanged layers, so it skips re-building them.
Example:
- If you only change your app code, Docker won’t re-install dependencies (`RUN npm install`), saving time (demonstrated below).
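You can watch the cache in action by building twice — a sketch assuming the layered Dockerfile above:

```bash
docker build -t myapp .   # first build: every layer executes
touch app.js              # change only app code, not package.json
docker build -t myapp .   # rebuild: layers up to RUN npm install come from cache
```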
2. Storage Efficiency
Common layers between multiple images are shared.
For example:
- If 3 images use `node:18` as a base, that layer is downloaded once and reused.
3. Faster Image Downloads
When pulling an image, Docker only downloads the layers your system doesn't already have.
If you already have base image layers, Docker skips downloading them.
4. Immutable Layers
Layers are read-only.
Docker ensures consistency because once a layer is created, it doesn’t change.
Only the top writable layer can be changed while running (called the container layer).
5. Layer Reuse in CI/CD Pipelines
CI/CD tools like GitHub Actions or GitLab CI benefit from layer caching:
- Speeds up Docker builds in pipelines
- Avoids repeating the same work (like dependency installation) — see the sketch below
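One common pattern (a sketch — exact flags vary by CI tool and builder) is seeding the cache from the last published image with `--cache-from`:

```bash
# Pull the previous image so its layers can serve as build cache
docker pull lavkumar/todo-app:latest || true
docker build --cache-from lavkumar/todo-app:latest -t lavkumar/todo-app:latest .
```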
Optimization Tips Using Layers
To get the most benefit from layers:
| Tip | Why |
| --- | --- |
| Place `COPY package*.json` and `RUN npm install` before copying all code | So Docker reuses layers if code changes but dependencies don’t |
| Combine related commands using `&&` | To reduce the number of layers |
| Use `.dockerignore` (example below) | Prevent copying unnecessary files that would create new layers |
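A minimal `.dockerignore` for a Node project might look like this — contents are illustrative, written as a heredoc so you can paste it straight into a shell:

```bash
cat > .dockerignore <<'EOF'
node_modules
.git
*.log
EOF
```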
Real Example (Optimized)
```dockerfile
FROM node:18
WORKDIR /app

# Only copy dependency files
COPY package*.json ./
RUN npm install

# Now copy the rest of the code
COPY . .

EXPOSE 3000
CMD ["node", "app.js"]
```
This Dockerfile rebuilds only the later layers (from `COPY . .` onward) when your code changes but your dependencies don’t.
What is a Volume in Docker?
Definition
A Docker volume is a special folder outside the container’s filesystem that stores data. It is used to persist data even if the container is stopped, deleted, or rebuilt.
Real-Life Analogy
Think of a container as a temporary hotel room. If you keep your important documents inside, they’ll be thrown away when you check out.
But if you put them in a locker outside the room (volume), you can reuse them later — no matter how many rooms (containers) you use.
❌ Problem Volumes Solve
By default, data inside a container is lost when:
- The container is removed
- The container crashes
- The image is rebuilt

✅ Docker volumes solve this by:
- Storing data outside the container
- Sharing data between containers
- Persisting database files, logs, user uploads, etc.
Use Case Examples
- Save user-uploaded files in a web app
- Persist database data (e.g., MongoDB, MySQL)
- Store logs or configuration files
- Share files between multiple containers
Steps to Use Docker Volumes
Step 1: Create a Volume
```bash
docker volume create mydata
```

This creates a volume named `mydata`.
Step 2: Use Volume in a Container
```bash
docker run -v mydata:/app/data my-image
```

- `mydata`: the volume (on the host, managed by Docker)
- `/app/data`: the path inside the container

Anything the container writes to `/app/data` is stored in `mydata` on your system.
Example with Node.js
Let’s say your app writes logs to `/app/logs`.

```bash
docker run -p 3000:3000 -v logs:/app/logs my-node-app
```

Now even if the container is deleted, the logs remain in the `logs` volume.
Step 3: List Volumes

```bash
docker volume ls
```

Step 4: Inspect Volume

```bash
docker volume inspect logs
```

Shows you where the volume lives on your machine.

Step 5: Remove Volume (optional)

```bash
docker volume rm logs
```

Only use this if you’re sure you don’t need the data anymore.
Example: MongoDB with Volume
```bash
docker volume create mongo-data
docker run -d -p 27017:27017 -v mongo-data:/data/db mongo
```

Now your MongoDB database is saved in `mongo-data`, even if the container is deleted.
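A quick way to convince yourself the data really survives (a sketch; container names are arbitrary):

```bash
docker run -d --name db -v mongo-data:/data/db mongo
docker rm -f db                                         # destroy the container
docker run -d --name db2 -v mongo-data:/data/db mongo   # same data is back
```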
Docker Networks
What is a Docker Network?
A Docker network allows containers to talk to each other securely, either:
- Inside the same system (host)
- Across multiple hosts (advanced setups)
Think of it like giving containers a private LAN (Local Area Network) to connect with each other.
❌ Problem Docker Networks Solve
Without Docker networks:
Containers can’t easily find or communicate with each other.
You’d have to use complex IP addresses manually.
There’s no isolation, so one container could accidentally expose services to the public.
✅ Docker Network Solves This By:
| Feature | Benefit |
| --- | --- |
| 🔐 Isolation | Containers only connect if on the same network |
| 🌐 Service Discovery | Containers can talk using names, not IPs |
| 🔁 Communication | Containers can share data/services privately |
| 🛡️ Security | Restrict access to only specific containers |
🔥 Real-World Example
You have:
- A Node.js app (container A)
- A MongoDB database (container B)

They need to talk to each other privately.

Without a Docker network:
❌ You’d struggle with IPs and expose ports publicly.

With a Docker network:
✅ They talk easily using container names like `mongo` and `app`.
Types of Docker Networks
| Type | Description |
| --- | --- |
| bridge | Default; for communication between containers on the same host |
| host | Shares the host's network (no isolation) |
| none | No networking |
| overlay | For multi-host communication (used in Docker Swarm) |
For most projects, you’ll use bridge networks.
🪜 Steps to Use Docker Networks (with Commands)
✅ Step 1: Create a Network
```bash
docker network create my-network
```

Creates a custom bridge network named `my-network`.
✅ Step 2: Run Containers on the Same Network
Example: Mongo + Node App
```bash
docker run -d --name mongo-db --network my-network mongo
docker run -d --name node-app --network my-network my-node-image
```

✅ Now `node-app` can connect to Mongo using the hostname `mongo-db`:

```js
mongoose.connect('mongodb://mongo-db:27017/mydb')
```

No IP needed — Docker resolves the name automatically.
✅ Step 3: Inspect Network (Optional)
```bash
docker network inspect my-network
```

Shows all containers connected to it and their IPs.
✅ Step 4: List All Networks
```bash
docker network ls
```

✅ Step 5: Connect Existing Container to Network

```bash
docker network connect my-network my-container
```

✅ Step 6: Disconnect a Container

```bash
docker network disconnect my-network my-container
```
🧠 Why Use a Custom Network Instead of Default?
The default `bridge` network doesn’t allow name-based container-to-container communication.
🧪 Containers can only communicate via IP on the default network, not via container names.
Custom network = container name DNS + better security + better organization.
🔄 Docker Compose Makes This Easier
Example `docker-compose.yml`:

```yaml
version: '3'
services:
  mongo:
    image: mongo
  app:
    image: my-node-app
    depends_on:
      - mongo
```

👉 Here, Docker Compose automatically creates a private network, and the `app` service can access Mongo at `mongo:27017`.
Node Dev Example (In CLI)
```bash
docker network create node-net
docker run -d --name mongo-db --network node-net mongo
docker run -d --name node-app --network node-net my-node-app
```

Inside `my-node-app`, use:

```js
mongoose.connect("mongodb://mongo-db:27017/mydb");
```

✅ Works with no IPs or public exposure.