2nd week


CI/CD Pipeline

What is CI/CD?

CI/CD stands for Continuous Integration and Continuous Delivery/Deployment. It is a process that automates the steps from writing code to deploying it live on servers.

Why is CI/CD Important?

  1. Automation: No need to manually build, test, or deploy — CI/CD automates it all.

  2. Fewer Bugs: Tests run automatically to catch errors early.

  3. Faster Development: Developers can push features to users faster.

  4. Consistency: The same process every time = fewer mistakes.

How CI/CD Works (With Real-Life Example)

Imagine you're building a To-Do Web App.

Developer Workflow:

  1. Developer writes new feature → pushes code to GitHub.

  2. GitHub triggers the CI pipeline.

  3. CI:

    • Installs dependencies (npm install)

    • Runs tests (npm test)

  4. If successful, CD:

    • Deploys code to staging server for testing.

    • After approval, deploys to production server.

CI/CD Pipeline Stages Explained

| Stage | What Happens | Tool Examples |
| --- | --- | --- |
| Build | App is packaged/bundled. | Webpack, Docker |
| Test | Automated tests check if everything works. | Jest, Mocha, Cypress |
| Release | App is sent to an environment (e.g., staging). | GitHub Actions, Jenkins |
| Deploy | App is deployed to production automatically or after approval. | Vercel, AWS, Netlify |

Sample CI/CD YAML (GitHub Actions Example for Node.js App)

# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
uses: actions/checkout@v4

      - name: Setup Node.js
uses: actions/setup-node@v4
        with:
          node-version: 18

      - name: Install dependencies
        run: npm install

      - name: Run tests
        run: npm test

      - name: Build App
        run: npm run build

      - name: Deploy to Staging (optional)
        if: github.ref == 'refs/heads/main'
        run: echo "Deploying to staging..."

Application Environments

Different environments are used to separate development, testing, and production.

| Environment | Purpose | Example |
| --- | --- | --- |
| Development | Local machine, for writing code. | npm run dev |
| Testing | Automatically runs tests for every update. | npm test |
| Staging | A copy of production for manual testing. | https://staging.todoapp.com |
| Production | The real app used by users. | https://todoapp.com |

Sample Environment Config

# .env.development
DATABASE_URL=mongodb://localhost/dev-db
API_URL=http://localhost:3000/api

# .env.staging
DATABASE_URL=mongodb://staging-db-url
API_URL=https://staging.todoapp.com/api

# .env.production
DATABASE_URL=mongodb://prod-db-url
API_URL=https://todoapp.com/api
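
How an app selects the right file depends on your tooling. One common approach (a sketch assuming the dotenv package, which the examples above don't mention) is to preload it and point it at the target environment's file:

# load .env.staging instead of the default .env
DOTENV_CONFIG_PATH=.env.staging node -r dotenv/config app.js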

How CI/CD Improves Your Application

  1. Faster Feedback: Bugs are caught immediately when pushing code.

  2. Confidence in Code: If tests pass, your code is stable.

  3. Safe Deployments: You test everything in staging before going live.

  4. Team Efficiency: Devs can focus on features, not fixing deploy issues.

Example: To-Do App Flow with CI/CD

Let’s say you add a new feature: "Mark as Important" to your React To-Do app.

Process:

  1. Code added and pushed to GitHub.

  2. GitHub Actions runs tests — All pass.

  3. Code is deployed to Staging.

  4. Team tests on staging.

  5. After approval, the same code goes to Production automatically.

Final Summary

  • CI/CD automates build, test, and deploy processes.

  • It ensures clean, tested, and reliable releases.

  • Staging/testing environments catch bugs before real users see them.

  • Great for both solo developers and large teams.

CI/CD, which stands for Continuous Integration and Continuous Delivery/Deployment, automates the process from coding to server deployment, enhancing development speed and reliability. It reduces manual errors, catches bugs early through automated testing, and ensures consistent, safe deployments by using various environments like development, testing, staging, and production. This approach accelerates feature delivery, boosts team efficiency, and maintains code stability, making it beneficial for both individual developers and large teams.


Docker and DockerHub

What is Docker?

Docker is an open-source platform that lets you build, package, and run applications in a container.

A container is a lightweight, isolated environment that has everything your application needs to run — like code, libraries, settings, and dependencies — all bundled together.

  • Traditional way: You install Node.js, Python, MongoDB manually on your machine.

  • Docker way: You just run a container with everything pre-installed — no setup required.
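
For instance, here's the Docker way in miniature (a sketch that assumes only that Docker is installed; the official node image comes from DockerHub):

# run Node 18 without installing Node locally; --rm removes the container afterwards
docker run --rm node:18 node -e "console.log('Hello from a container')"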

What is DockerHub?

DockerHub is like GitHub, but for Docker images.

  • GitHub stores code.

  • DockerHub stores container images (pre-built apps, environments).

On DockerHub you can:

  • Push your app images to share with others

  • Pull official images (like node, mongo, nginx) to run them locally

DockerHub vs GitHub

| Feature | DockerHub | GitHub |
| --- | --- | --- |
| Stores | Docker images | Source code |
| Used For | Running apps in containers | Version control of code |
| Example | docker pull node | git clone https://... |
| CI/CD Role | Provides pre-built app environments | Hosts code and triggers CI/CD |

How Docker Works (In Simple Steps)

Let’s say you build a Node.js To-Do App. Here's what happens:

  1. You write a Dockerfile — this describes how to set up your app in a container.

  2. You run docker build — Docker creates an image from your code.

  3. You run docker run — Docker launches a container from that image.

  4. You can push the image to DockerHub and anyone can pull and run it anywhere.

Important Docker Commands with Examples

Here’s a list of commonly used Docker commands and what they do:

| Command | Description | Example |
| --- | --- | --- |
| docker --version | Check Docker version | docker --version |
| docker build -t image-name . | Create a Docker image from a Dockerfile | docker build -t todo-app . |
| docker images | List all Docker images | docker images |
| docker run image-name | Run a container from an image | docker run todo-app |
| docker ps | See running containers | docker ps |
| docker stop container-id | Stop a running container | docker stop abc123 |
| docker rm container-id | Remove a container | docker rm abc123 |
| docker rmi image-name | Remove an image | docker rmi todo-app |
| docker pull image-name | Download an image from DockerHub | docker pull node |
| docker push image-name | Upload an image to DockerHub | docker push lavkumar/todo-app |
| docker login | Log in to DockerHub | docker login |
| docker-compose up | Start multiple containers defined in a file | docker-compose up |

Why Docker is Important in DevOps

In DevOps, automation and consistency are key. Docker plays a huge role:

Real-World DevOps Benefits:

| Benefit | Explanation |
| --- | --- |
| Consistency | App runs the same on every machine |
| 🚀 Faster Deployment | Apps are ready-to-run in seconds |
| 🔁 Easy Rollbacks | Revert to a previous image if the new version fails |
| 🔒 Isolated Testing | Test in clean environments without affecting your system |
| 🔁 Automation | Perfect for CI/CD pipelines in GitHub Actions, Jenkins, GitLab CI, etc. |

Sample DockerHub Workflow

1. Build your Docker image:

docker build -t lavkumar/todo-app .

2. Push to DockerHub:

docker login
docker push lavkumar/todo-app

3. Pull and Run from DockerHub (on any machine):

docker pull lavkumar/todo-app
docker run lavkumar/todo-app

What is a Docker Image?

A Docker image is like a blueprint or template for creating containers.

Think of it as:

🧁 A recipe for making cupcakes.

The recipe (Docker image) tells you:

  • What ingredients to use (code, dependencies)

  • How to cook them (configuration, environment variables)

  • What the final cupcake should look like (application behavior)

🔍 Key Properties of Images:

  • Read-only: You can’t change an image once it’s built.

  • Portable: You can move or share it via DockerHub or a registry.

  • Reusable: One image can create many containers.

Example:

You build an image for a Node.js app:

FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "app.js"]

Then you build it:

docker build -t my-node-app .

Here, my-node-app is your Docker image.

What is a Docker Container?

A Docker container is a running instance of a Docker image.

Using our cupcake analogy:

🍰 A container is an actual cupcake made using the recipe (image).

You can run, stop, restart, or delete containers — they are:

  • Mutable: You can change things while it’s running.

  • Isolated: Each container runs in its own environment.

  • Ephemeral: You can stop and destroy containers without affecting the image.

🔍 Key Properties of Containers:

  • Created from images using docker run

  • Can be started and stopped

  • Have their own filesystem and network

🚀 Example:

You run a container from your image:

docker run -p 3000:3000 my-node-app

Now:

  • The image is my-node-app

  • The container is the running version of that image, accessible on port 3000
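
To see that isolation for yourself, you can open a shell inside the running container (a quick sketch; get the ID from docker ps):

docker ps                           # find the container ID
docker exec -it <container_id> sh   # a shell inside the container's own filesystem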

Real-World Analogy: Movie DVD

  • 🎬 Image = The DVD of a movie

    • You can keep the DVD, share it, and make copies.
  • 📺 Container = Watching the movie on your TV

    • You press play (run), watch the movie (app), and when you stop it, the DVD is still there.

Common Commands

For Docker Images:

docker build -t myapp .       # Build an image
docker images                 # List all images
docker rmi myapp              # Delete an image

For Docker Containers:

docker run myapp              # Run a container from image
docker ps                     # List running containers
docker stop <container_id>    # Stop a container
docker rm <container_id>      # Remove a container

What is Port Mapping in Docker?

When you run a Docker container, the application inside the container usually listens on a port (like 3000 or 8080). But your computer (host system) doesn't automatically know about that port.

That’s where port mapping comes in.

It’s a way to link a port on your local machine (host) to a port inside the Docker container.

Why is Port Mapping Important?

Without port mapping:

  • Your containerized app might be running fine.

  • But you won’t be able to access it from your browser or tools.

With port mapping:

  • You can open your browser and go to localhost:3000 to access your app running inside Docker.

Simple Analogy

Imagine a shipping container (Docker container) has a door inside labeled "Port 3000".

But unless you map it to a door on your building (your host machine), no one can go inside.

Mapping port 3000 in the container to port 3000 on your computer is like aligning both doors so visitors (like your browser) can enter.

Syntax of Docker Port Mapping

docker run -p <host-port>:<container-port> image-name

Example:

docker run -p 3000:3000 my-todo-app

This tells Docker:

  • Open port 3000 on my local machine (host)

  • Forward all traffic to port 3000 inside the container


Real Example with a Node.js App

Let’s say you have an app that runs on port 3000 in app.js:

app.listen(3000, () => {
  console.log('Server running on port 3000');
});

Your Dockerfile might look like:

FROM node:18
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "app.js"]

Now build and run:

docker build -t my-todo-app .
docker run -p 3000:3000 my-todo-app

Now go to http://localhost:3000 — your app is live!

What Does EXPOSE Do?

In the Dockerfile:

EXPOSE 3000

  • It documents which port the container is listening on.

  • It does not publish the port to the host.

  • You still need -p or --publish during docker run.
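
There is also a shortcut: the -P flag (capital P) publishes every EXPOSEd port to a random free port on the host (a quick sketch):

docker run -P my-todo-app   # maps EXPOSEd port 3000 to a random host port
docker ps                   # the PORTS column shows the chosen mapping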


Mapping to a Different Host Port

You can even map to a different port on your machine:

docker run -p 8080:3000 my-todo-app

This means:

  • Your app inside the container runs on port 3000

  • But on your machine, it’s accessible at http://localhost:8080


Multiple Port Mappings

You can map multiple ports:

docker run -p 8080:80 -p 8443:443 my-nginx-app

This example:

  • Maps HTTP (port 80) to 8080

  • Maps HTTPS (port 443) to 8443


What if You Don't Map a Port?

If you don’t use -p, your app will still run inside the container, but:

  • You won’t be able to reach it from your browser or API tools.

  • Only other containers on the same Docker network can talk to it.
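
Here's a small sketch of that second point, reusing the my-todo-app image from above (the network and container names are illustrative):

docker network create demo-net
docker run -d --name web --network demo-net my-todo-app              # note: no -p flag
docker run --rm --network demo-net curlimages/curl http://web:3000   # works: same network
curl http://localhost:3000                                           # fails: nothing is published on the host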


Common Use Cases

| Use Case | Command | Access URL |
| --- | --- | --- |
| Run a Node app on the same port | docker run -p 3000:3000 myapp | localhost:3000 |
| Run a React app on a different port | docker run -p 5000:3000 myreactapp | localhost:5000 |
| Run NGINX with HTTPS | docker run -p 443:443 nginx | https://localhost |

What is a Dockerfile?

A Dockerfile is a plain text file that contains a set of instructions used to build a Docker image.

Think of it like a recipe: each line tells Docker what to do — like "install this", "copy that", "run this command".


Why is a Dockerfile Important?

With a Dockerfile, you can:

  • Automate the setup of your application

  • Ensure consistency across environments

  • Create portable containers that work the same everywhere

It removes the “it works on my machine” problem.

Sample Dockerfile for a Node.js App

# 1. Use an official base image
FROM node:18

# 2. Set working directory inside the container
WORKDIR /app

# 3. Copy dependency files first
COPY package*.json ./

# 4. Install dependencies
RUN npm install

# 5. Copy rest of the app code
COPY . .

# 6. Tell Docker what port your app listens on
EXPOSE 3000

# 7. Command to start the app
CMD ["node", "app.js"]

Let’s Understand Each Line


🔹 1. FROM node:18

Starts from a base image — here, Node.js version 18

  • Docker doesn't install Node from scratch; it uses an official pre-built image from DockerHub.

  • You can use other base images too, like:

    • python:3.10

    • ubuntu:22.04

    • nginx:latest


🔹 2. WORKDIR /app

Sets the working directory inside the container

  • All the following commands will run in /app.

  • If the folder doesn’t exist, Docker creates it.

Like doing cd /app before each command.


🔹 3. COPY package*.json ./

Copies only package.json and package-lock.json to the container

  • These files contain dependencies.

  • Copying them separately allows Docker to cache this layer.

Why? If your app code changes but dependencies don’t, Docker won’t re-run npm install.


🔹 4. RUN npm install

Installs project dependencies inside the container

  • It runs npm install just like you do locally.

  • This is baked into the Docker image.


🔹 5. COPY . .

Copies the rest of your project files into the container

  • . = current directory on your machine

  • Second . = current directory inside the container (/app from WORKDIR)


🔹 6. EXPOSE 3000

Tells Docker your app runs on port 3000

  • This is documentation for people and tools (like Docker Compose).

  • Does not actually publish the port (you still need -p 3000:3000 when running).


🔹 7. CMD ["node", "app.js"]

Defines the command to run when the container starts

  • Docker runs this command when you launch the container.

  • If you were running Python, it could be: CMD ["python3", "main.py"]


Bonus: Extended Example (Multi-Stage React Build)

Here’s a breakdown of another Dockerfile structure for a React frontend:

# Build phase
FROM node:18 AS build
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build

# Production phase
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html

This uses multi-stage builds:

  • First stage: build the React app

  • Second stage: serve it with a lightweight NGINX server
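
To try it out, build the image and map nginx's port 80 to the host (the image name is illustrative):

docker build -t my-react-app .
docker run -p 8080:80 my-react-app   # the built React app is served at http://localhost:8080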

Example Build & Run

docker build -t my-node-app .
docker run -p 3000:3000 my-node-app

docker push — Upload an Image to DockerHub

Example:

docker build -t my-node-app .
docker tag my-node-app lavkumar/my-node-app:1.0
docker push lavkumar/my-node-app:1.0

You can now find it at https://hub.docker.com/r/lavkumar/my-node-app

docker pull — Download an Image from DockerHub

Syntax:

docker pull username/image-name:tag

Example:

docker pull lavkumar/my-node-app:1.0

This will download the image and you can then run it:

docker run -p 3000:3000 lavkumar/my-node-app:1.0

What is a Docker Layer?

Every Docker image is made up of layers.

A layer is a step in the Dockerfile — each instruction like FROM, COPY, RUN creates a new layer.

These layers are stacked on top of each other to form a final image.

Think of it like this:

  • Each layer is like a slice in a sandwich.

  • Docker stacks those layers to build your app image.

  • Instead of making the entire sandwich from scratch every time, Docker reuses slices (layers) that haven’t changed!


Example Dockerfile with Layers

FROM node:18            # Layer 1
WORKDIR /app            # Layer 2
COPY package.json .     # Layer 3
RUN npm install         # Layer 4
COPY . .                # Layer 5
CMD ["node", "app.js"]  # Layer 6

This creates 6 layers.
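
You can list those layers yourself with docker history, which shows each layer's size and the instruction that created it:

docker history my-node-app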


Key Benefits of Layers


1. Layer Caching = Faster Builds

Docker remembers unchanged layers, so it skips re-building them.

Example:

  • If you only change your app code, Docker won’t re-install dependencies (RUN npm install), saving time.

2. Storage Efficiency

Common layers between multiple images are shared.

For example:

  • If 3 images use node:18 as base, that layer is downloaded once and reused.

3. Faster Image Downloads

When pulling an image, Docker only downloads the layers your system doesn't already have.

If you already have base image layers, Docker skips downloading them.


4. Immutable Layers

Layers are read-only.

  • Docker ensures consistency because once a layer is created, it doesn’t change.

  • Only the top writable layer can be changed while running (called the container layer).


5. Layer Reuse in CI/CD Pipelines

CI/CD tools like GitHub Actions or GitLab CI benefit from layer caching:

  • Speeds up Docker builds in pipelines

  • Avoids repeating the same work (like dependency installation)
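
One common pattern (a sketch; the image name reuses the earlier examples, and exact caching behavior differs between the classic builder and BuildKit) is to pull the last published image and reuse its layers as a build cache:

docker pull lavkumar/todo-app:latest || true    # tolerate a missing image on the first run
docker build --cache-from lavkumar/todo-app:latest -t lavkumar/todo-app:latest .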


Optimization Tips Using Layers

To get the most benefit from layers:

| Tip | Why |
| --- | --- |
| Place COPY package*.json and RUN npm install before copying all code | So Docker reuses layers if code changes but dependencies don't |
| Combine related commands using && | To reduce the number of layers |
| Use .dockerignore | Prevent copying unnecessary files that would invalidate layers |
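
For that last tip, a minimal .dockerignore for the Node app above might look like this (typical contents; adjust for your project):

# .dockerignore
node_modules
npm-debug.log
.git
.env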

Real Example (Optimized)

FROM node:18
WORKDIR /app

# Only copy dependency files
COPY package*.json ./
RUN npm install

# Now copy the rest of the code
COPY . .

EXPOSE 3000
CMD ["node", "app.js"]

With this Dockerfile, if your code changes but your dependencies don't, only the final COPY . . layer (and everything after it) is rebuilt; the npm install layer is reused from cache.


What is a Volume in Docker?


Definition

A Docker volume is a special folder outside the container’s filesystem that stores data. It is used to persist data even if the container is stopped, deleted, or rebuilt.


Real-Life Analogy

Think of a container as a temporary hotel room. If you keep your important documents inside, they’ll be thrown away when you check out.
But if you put them in a locker outside the room (volume), you can reuse them later — no matter how many rooms (containers) you use.


❌ Problem Volumes Solve

By default, data inside a container is lost when:

  • The container is removed

  • The container crashes

  • The image is rebuilt

✅ Docker volumes solve this by:

  • Storing data outside the container

  • Sharing data between containers

  • Persisting database files, logs, user uploads, etc.


Use Case Examples

  • Save user-uploaded files in a web app

  • Persist database data (e.g. MongoDB, MySQL)

  • Store logs or configuration files

  • Share files between multiple containers


Steps to Use Docker Volumes


Step 1: Create a Volume

docker volume create mydata

This creates a volume named mydata.


Step 2: Use Volume in a Container

docker run -v mydata:/app/data my-image

  • mydata: the volume (on host)

  • /app/data: the path inside the container

Anything the container writes to /app/data is stored in mydata on your system.
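
If you would rather choose the host folder yourself, a bind mount maps a host directory instead of a named volume (a sketch):

docker run -v "$(pwd)/data":/app/data my-image   # ./data on the host backs /app/data in the container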


Example with Node.js

Let’s say your app writes logs to /app/logs.

docker run -p 3000:3000 -v logs:/app/logs my-node-app

Now even if the container is deleted, the logs remain in the logs volume.


Step 3: List Volumes

docker volume ls

Step 4: Inspect Volume

docker volume inspect logs

Shows you where the volume lives on your machine.


Step 5: Remove Volume (optional)

docker volume rm logs

Only use this if you're sure you don't need the data anymore.

Example: MongoDB with Volume

docker volume create mongo-data
docker run -d -p 27017:27017 -v mongo-data:/data/db mongo

Now your MongoDB database is saved in mongo-data, even if the container is deleted.
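
You can prove the persistence to yourself (container names here are illustrative):

docker run -d --name db1 -p 27017:27017 -v mongo-data:/data/db mongo
# ...write some data, then destroy the container:
docker rm -f db1
# a brand-new container starts with the same data:
docker run -d --name db2 -p 27017:27017 -v mongo-data:/data/db mongo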


Docker Networks


What is a Docker Network?

A Docker network allows containers to talk to each other securely, either:

  • Inside the same system (host)

  • Across multiple hosts (advanced setups)

Think of it like giving containers a private LAN (Local Area Network) to connect with each other.


❌ Problem Docker Networks Solve

Without Docker networks:

  • Containers can’t easily find or communicate with each other.

  • You’d have to use complex IP addresses manually.

  • There’s no isolation, so one container could accidentally expose services to the public.


✅ Docker Network Solves This By:

| Feature | Benefit |
| --- | --- |
| 🔐 Isolation | Containers only connect if on the same network |
| 🌐 Service Discovery | Containers can talk using names, not IPs |
| 🔁 Communication | Containers can share data/services privately |
| 🛡️ Security | Restrict access to only specific containers |

🔥 Real-World Example

You have:

  • A Node.js app (container A)

  • A MongoDB database (container B)

They need to talk to each other privately.

Without a Docker network:
❌ You'd struggle with IPs and expose ports publicly.

With a Docker network:
✅ They talk easily using container names like mongo and app.


Types of Docker Networks

| Type | Description |
| --- | --- |
| bridge | Default, for communication between containers on the same host |
| host | Shares the host's network (no isolation) |
| none | No networking |
| overlay | For multi-host communication (used in Docker Swarm) |

For most projects, you’ll use bridge networks.
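
For the other types, you choose at run time with the --network flag (a quick sketch):

docker run --network host nginx    # shares the host's network stack directly
docker run --network none alpine   # no network access at all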


🪜 Steps to Use Docker Networks (with Commands)


✅ Step 1: Create a Network

docker network create my-network

Creates a custom bridge network named my-network.


✅ Step 2: Run Containers on the Same Network

Example: Mongo + Node App

docker run -d --name mongo-db --network my-network mongo
docker run -d --name node-app --network my-network my-node-image

✅ Now node-app can connect to Mongo using hostname mongo-db:

mongoose.connect('mongodb://mongo-db:27017/mydb')

No IP needed — Docker resolves the name automatically.


✅ Step 3: Inspect Network (Optional)

docker network inspect my-network

Shows all containers connected to it and IPs.


✅ Step 4: List All Networks

docker network ls

✅ Step 5: Connect Existing Container to Network

docker network connect my-network my-container

✅ Step 6: Disconnect a Container

docker network disconnect my-network my-container

🧠 Why Use a Custom Network Instead of Default?

The default bridge network doesn’t allow name-based container-to-container communication.

🧪 Containers can only communicate via IP on the default network, not via container names.

Custom network = container name DNS + better security + better organization.
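
You can verify this behavior yourself (a sketch using the official alpine image; the names are illustrative):

# default bridge: resolving another container by name fails
docker run -d --name c1 alpine sleep 300
docker run --rm alpine ping -c 1 c1                     # error: bad address 'c1'

# custom network: name resolution works
docker network create testnet
docker run -d --name c2 --network testnet alpine sleep 300
docker run --rm --network testnet alpine ping -c 1 c2   # gets a reply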


🔄 Docker Compose Makes This Easier

Example docker-compose.yml:

version: '3'
services:
  mongo:
    image: mongo
  app:
    image: my-node-app
    depends_on:
      - mongo

👉 Here, Docker Compose automatically creates a private network, and the app service can access Mongo at mongo:27017.

Node Dev Example (In CLI)

docker network create node-net

docker run -d --name mongo-db --network node-net mongo

docker run -d --name node-app --network node-net my-node-app

Inside my-node-app, use:

mongoose.connect("mongodb://mongo-db:27017/mydb");

✅ Works with no IPs or public exposure.
