Optimizing Docker Images with Multi-Stage Docker Builds

Introduction
Docker is a popular containerization platform that allows developers to package applications and their dependencies into lightweight, portable containers. These containers ensure consistent performance across different environments, making application development, testing, and deployment more streamlined and efficient.
Despite its advantages, Docker containers can become unnecessarily large if not managed properly. Bloated images lead to higher storage use, slower deployments, and increased network transfer times—especially in cloud environments. This inefficiency undermines the very benefits Docker is meant to provide.
Docker Architecture
● Docker Client:
The user interface that communicates with the Docker daemon using commands like docker build, docker run, etc. It can interact with remote or local Docker hosts.
● Docker Daemon (dockerd):
A background process that manages Docker objects like images, containers, networks, and volumes. It listens for client requests and handles container lifecycle operations.
● Docker Images:
Read-only templates used to create containers. Images are built using a Dockerfile and can be stored locally or pulled from registries like Docker Hub.
● Docker Containers:
Lightweight, executable units that run applications. They are created from images and run in isolated environments with their own filesystem, CPU, memory, and processes.
● Docker Registries:
Centralized locations where Docker images are stored and shared. Public registries like Docker Hub or private registries are used to pull/push images.
● Docker Engine:
The core component that combines the client, daemon, and container runtime to build and run containers on the host OS.
Docker Image - Single Stage
Single-stage builds are the traditional way of building Docker images. In this approach, all steps, such as installing dependencies, compiling code, and packaging the application, are performed in a single stage of the Dockerfile.
While simple and easy to implement, this method results in larger images because build tools and temporary files that are unnecessary at runtime are included in the final image. This hurts efficiency, resource usage, and deployment speed.
This leads to:
● Slower deployment times
● Higher storage and bandwidth usage
● Increased attack surface (more files = more vulnerabilities)
● Poor scalability in production environments
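
To make this concrete, here is a minimal sketch of what a single-stage Dockerfile for a Node.js web app might look like. The base image tag, port, and serve command are illustrative assumptions, not taken from the article:

```dockerfile
# Hypothetical single-stage build: everything happens in one image.
# Node, npm, devDependencies, and the full source tree all remain
# in the final image, inflating its size.
FROM node:18

WORKDIR /app

# Install all dependencies, including devDependencies needed only to build
COPY package*.json ./
RUN npm install

# Copy the source and produce the production build
COPY . .
RUN npm run build

# Serve the app from the same heavyweight image that built it
EXPOSE 3000
CMD ["npx", "serve", "-s", "build", "-l", "3000"]
```

Everything above the CMD line is only needed at build time, yet it all ships in the image that runs in production.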
Docker Image - Multi Stage
In a multi-stage Docker setup, we split the process into two distinct stages: the build stage and the production stage. This approach allows us to create a smaller and more efficient final Docker image by separating the tasks of building the application and serving it in production.
1. Build Stage (Node.js)
● Install dependencies: Start from the Node base image, copy package.json, and run npm install to fetch all required packages.
● Build the app: Run npm run build to create a production-ready build folder with optimized static files.
● Why it's useful: Keeps dev dependencies and tools out of the final image, making it clean and lightweight.
2. Production Stage (Nginx)
● Use Nginx image: A minimal image to serve static files efficiently.
● Copy build output: Bring only the build folder from the first stage, excluding all unnecessary files.
● Expose port 80: Enables access to the web application.
● Run Nginx: Uses CMD ["nginx", "-g", "daemon off;"] to serve the app continuously.
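
The two stages described above can be sketched in a single Dockerfile as follows. The base image tags and the build output directory (build, as produced by a typical npm run build) are assumptions for illustration:

```dockerfile
# Stage 1: build the app with Node
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve only the static build output with Nginx
FROM nginx:alpine
# COPY --from=build pulls files out of the first stage;
# nothing else from that stage reaches the final image
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Only the contents of /app/build cross the stage boundary; Node, npm, node_modules, and the source code are all discarded with the build stage.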
Benefits of Multi-Stage Docker
● Smaller Image Size: Only necessary files (like the production build output) are included in the final image.
● Faster Deployment: Lightweight images are quicker to transfer and start.
● Improved Security: No development tools or source code in the production image.
● Better Performance: Nginx serves static files efficiently with low memory and CPU usage.
● Simplified CI/CD: Cleaner Dockerfiles make automation and pipelines easier to manage.
● Separation of Concerns: Keeps build and production environments isolated within the same Dockerfile.
Results & Discussion
The single-stage build resulted in a significantly large image size of 1.43 GB, which can slow down deployment and consume more storage and bandwidth.
By contrast, using a multi-stage build reduced the image size to just 50.3 MB, achieving a reduction of approximately 96.48%.
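
The reduction figure can be verified with a quick calculation (treating 1.43 GB as 1430 MB):

```python
# Verify the image-size reduction reported above (1 GB taken as 1000 MB)
single_stage_mb = 1.43 * 1000   # 1.43 GB single-stage image
multi_stage_mb = 50.3           # 50.3 MB multi-stage image

reduction_pct = (single_stage_mb - multi_stage_mb) / single_stage_mb * 100
print(f"{reduction_pct:.2f}%")  # → 96.48%
```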
Deployment (CD part - Continuous Delivery)
1. Create an optimized Docker image: Use a multi-stage Dockerfile to build a minimal, production-ready container.
2. Push to a container registry: Push the built image to Docker Hub, GitHub Container Registry, or a private registry.
3. Pull the image on the Kubernetes node: Kubernetes automatically pulls the image from the registry to the node when deploying the pod.
4. Define Kubernetes manifests: Create deployment.yaml and service.yaml files that describe how your application should run in the cluster.
5. Apply to the Kubernetes cluster: Run kubectl apply -f deployment.yaml to deploy the application using the pulled image.
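
A minimal manifest for these steps might look like the sketch below. The application name, labels, replica count, and image repository are placeholders you would replace with your own:

```yaml
# deployment.yaml — illustrative manifest; names and image path are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: your-dockerhub-user/web-app:latest  # image pushed in step 2
          ports:
            - containerPort: 80   # Nginx listens on 80 in the multi-stage image
---
# Service exposing the Deployment
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```

With both resources in one file, kubectl apply -f deployment.yaml creates the Deployment and the Service together.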
Written by Tuhin Saha