Reduce your Docker image size by 90%

Prabhat Chouhan
5 min read

6 Ways to Reduce Your Docker Image Size (From Bloated to Slim)

Docker is one of the best tools for packaging applications. It lets you put your app, libraries, and dependencies into a single unit called an image. You can then run that image anywhere as a container.

But there’s a common problem:
👉 Docker images can easily become too large (bloated) if we don’t optimize them.

A large image has consequences:

  • Slower builds → Every time you build in CI/CD, it takes longer.

  • Slower deployments → Uploading and downloading images takes forever.

  • Wasted storage → Your server or laptop fills up with huge images.

  • Security risks → More packages mean more chances for vulnerabilities.

So, reducing the size of Docker images is not just about saving space. It’s about speed, efficiency, and security.

Let’s go through 6 proven techniques that will help you go from bloated to slim.


1. Use a Smaller Base Image

Every Dockerfile usually starts with a base image like this:

FROM ubuntu:latest

But here’s the catch — Ubuntu is heavy (~70 MB), and you might not need most of the tools inside it.

Instead, you can choose a leaner base image such as:

  • alpine:latest → ~8 MB on disk (super light)

  • debian:bookworm-slim → ~75 MB on disk (about the same size as Ubuntu, but a more predictable middle ground when Alpine causes compatibility trouble; note that Debian's slim images are tagged per release — there is no plain debian:slim tag)

👉 Example:

# Bloated
FROM ubuntu:latest

# Slim
FROM alpine:latest

Why it matters:
A smaller base image means you start small from the beginning. Imagine building a house — if the foundation is already huge and heavy, everything else on top will also be heavy.

⚠️ Note: Alpine is very small, but some libraries don’t work properly on it because it uses a different C standard library (musl instead of glibc). If you hit such issues, a Debian slim image like debian:bookworm-slim is the safer choice.
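If you still want Alpine for a glibc-linked binary, one common workaround is the libc6-compat shim — a sketch, where the binary name is illustrative and the shim works for some prebuilt binaries but not all:

```dockerfile
FROM alpine:latest
# libc6-compat provides a thin glibc compatibility layer on musl-based Alpine;
# it helps with some glibc-linked binaries, but is not a full glibc replacement
RUN apk add --no-cache libc6-compat
# hypothetical glibc-linked binary built outside this image
COPY myapp /usr/local/bin/myapp
CMD ["myapp"]
```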


2. Use Multi-Stage Builds

When you build applications (like Go, Java, or Node.js apps), you often need extra tools like compilers, build tools, or package managers.

But here’s the question:
👉 Do you need those tools in production?
No! You only need the final binary or build output.

Multi-stage builds solve this by splitting the Dockerfile into stages — one for building, one for running — and shipping only the final stage.

👉 Example:

# Stage 1: Build
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
# Disable cgo so the binary is statically linked and runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -o myapp

# Stage 2: Run (only the binary, no Go compiler)
FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/myapp .
CMD ["./myapp"]

Here’s what happens:

  • Stage 1: Uses a large Go image to compile the app.

  • Stage 2: Uses tiny Alpine, only copies the binary.

Result: Instead of shipping a 1GB image with compilers, you get a 20MB image with just your app.
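If the binary is fully static (which CGO_ENABLED=0 typically gives you for Go), you can go even smaller than Alpine by using the empty scratch base. A sketch, with the caveat that scratch has no shell, no package manager, and no CA certificates:

```dockerfile
# Stage 1: build a statically linked binary
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o myapp

# Stage 2: scratch is an empty base image — the binary is all there is
FROM scratch
COPY --from=builder /app/myapp /myapp
CMD ["/myapp"]
```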


3. Remove Unnecessary Packages & Files

Sometimes you need to install tools temporarily, but you forget to remove them. This adds unnecessary weight.

👉 Example:

# Wrong: keeps build tools forever
RUN apk add build-base
RUN make build

Now the final image still has build-base (compilers, headers, etc.). Worse, because each RUN creates its own layer, running apk del in a separate, later RUN would not shrink the image — the files would still sit in the earlier layer.

Better approach:

RUN apk add --no-cache build-base \
  && make build \
  && apk del build-base

Here, all in a single RUN (and therefore a single layer):

  • We install the build tools (--no-cache skips apk’s package index cache, so there is nothing extra to clean up)

  • Use them to build the app

  • Remove them again before the layer is committed

Why it matters: If you don’t clean up, every container will carry junk it doesn’t need. That junk increases image size and security risks.
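As a variant of the cleanup above, Alpine’s apk can group temporary packages under a virtual name so a single apk del removes them all — a sketch, assuming an Alpine base and a Makefile in the build context:

```dockerfile
# .build-deps is an arbitrary virtual group name
RUN apk add --no-cache --virtual .build-deps build-base \
  # build the app with the temporary toolchain
  && make build \
  # remove every package in the .build-deps group in one go
  && apk del .build-deps
```

This way you can’t forget to delete an individual package from the list.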


4. Use .dockerignore to Exclude Unwanted Files

When you run docker build ., Docker sends everything in your project folder to the daemon as the build context — and anything your Dockerfile COPYs or ADDs from it ends up in the image.

This often includes:

  • .git folder (can be hundreds of MBs)

  • node_modules (it gets reinstalled inside the container anyway)

  • Local .env files

  • Temporary logs

👉 Example .dockerignore:

.git
node_modules
*.log
.env
Dockerfile
README.md

Why it matters:
Think of .dockerignore as a filter. It ensures that only the files you actually need end up inside the image. Without it, you’re carrying around a backpack full of rocks you’ll never use.
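.dockerignore supports .gitignore-style globs plus ! exceptions, which enables an allowlist approach: ignore everything, then whitelist only what the build needs. A sketch (the paths are illustrative):

```
*
!src/
!package.json
!package-lock.json
```

With this pattern, any new junk that appears in the project folder is excluded by default.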


5. Combine & Minimize Layers

Each RUN, COPY, and ADD instruction in a Dockerfile creates a new image layer, and layers are additive: a file written in one layer keeps taking up space even if a later layer deletes it.

👉 Example of bad layering:

RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git

That’s three layers.

Better way:

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    git \
  && rm -rf /var/lib/apt/lists/*

Now it’s just one layer and we clean up after installation.

Why it matters:
Layers are cached, but they also take space, and cleanup only counts if it happens in the same RUN that created the files. Keeping apt-get update and apt-get install in one instruction also prevents a stale, cached package list from breaking later installs.
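If your builds run under BuildKit (the default in recent Docker versions), a cache mount goes one step further: the package download cache persists between builds but never becomes part of any layer. A sketch for a Debian-based image:

```dockerfile
# Official Debian/Ubuntu images auto-delete downloaded .debs (docker-clean);
# remove that config if you want the cache mount to actually accumulate
RUN rm -f /etc/apt/apt.conf.d/docker-clean

# The cache mount exists only at build time and adds nothing to the image
RUN --mount=type=cache,target=/var/cache/apt \
    apt-get update && apt-get install -y --no-install-recommends \
      curl \
      git \
    && rm -rf /var/lib/apt/lists/*
```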


6. Use Slim Variants of Language Images

Most programming languages provide different image sizes. The default/full versions are very large because they include documentation, debugging tools, and extra utilities.

👉 Example:

  • python:3.12 → ~1 GB on disk

  • python:3.12-slim → ~130 MB on disk

  • node:20 → ~1.1 GB on disk

  • node:20-slim → ~200 MB on disk

(Sizes are approximate and vary by tag and platform.)

👉 Example for Python:

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]

Why it matters:
By using the slim variant, you instantly save hundreds of MBs while still running your app normally.
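These techniques combine: a sketch that pairs the slim variant with a multi-stage build, pre-building wheels in the first stage (assumes a requirements.txt in the build context and that your dependencies ship prebuilt wheels — otherwise install compilers in the builder stage only):

```dockerfile
# Stage 1: build/download wheels for all dependencies
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: install only the prebuilt wheels, then drop them
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
COPY requirements.txt .
RUN pip install --no-cache-dir --no-index --find-links=/wheels -r requirements.txt \
    && rm -rf /wheels
COPY . .
CMD ["python", "app.py"]
```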


✅ Final Thoughts

Large Docker images are not just an annoyance — they directly impact speed, security, and cost.

By following these 6 techniques:

  1. Choose smaller base images

  2. Use multi-stage builds

  3. Remove unnecessary packages/files

  4. Exclude junk with .dockerignore

  5. Combine and minimize layers

  6. Use slim language images

…you can often shrink an image by 50–90%.

👉 Bonus Tip: Run this command before and after optimization:

docker images

You’ll see the difference in MBs or GBs. Nothing is more satisfying than watching a 2GB image shrink to 150MB.

Optimizing Dockerfiles is a small effort that pays off big in CI/CD speed, cloud costs, and team productivity.


