Comprehensive Introduction to Docker: Understanding Basics, Images, and Networking

Sattyam Samania

Have you ever heard or said these things?

  • “It works on my machine, but not on the server!”

  • “Setting up the development environment is taking forever.”

  • “Deploying this app to production broke everything!”

  • “Why is it so hard to replicate the same environment for every team member?”

These were common headaches for developers, DevOps engineers, and teams working on software projects before Docker came into the picture. Managing dependencies, dealing with inconsistent environments, and deploying across different systems were frustrating and time-consuming tasks.

That’s where Docker changed everything.

From Virtual Machines to Containers: Understanding the Evolution

Before diving into Docker and containers, let’s take a step back and understand virtualization, because containers didn’t just appear out of nowhere; they evolved from earlier approaches to running software reliably across machines.

🖥️ What is Virtualization?

Virtualization is the process of running multiple virtual machines (VMs) on a single physical machine. Each VM has its own:

  • Operating System (OS)

  • CPU allocation

  • Memory

  • Storage

Tools like VMware, VirtualBox, and Hyper-V made this possible by using a hypervisor to manage and isolate each VM.

The Problem with VMs

While VMs solved some problems, they also introduced new ones:

  • Heavyweight – Each VM runs a full OS, which consumes a lot of resources

  • Slow startup – Booting a VM can take minutes

  • Difficult to scale – Spinning up multiple VMs for microservices is inefficient

  • Hard to maintain – Each VM needs its own patching and OS updates

Containers came in as the next step. Instead of virtualizing hardware like VMs, containers virtualize the operating system. That means:

  • They’re lightweight: no need for a full OS in each container

  • They start in seconds

  • They package your code + dependencies together

  • They’re perfect for CI/CD pipelines, microservices, and cloud deployment

Tools like Docker made it super easy to use containers, giving developers the ability to build, run, and ship software consistently across any environment.

🐳 What is Docker?

Imagine you're building an app, and it works perfectly on your laptop. But when you send it to your teammate, or deploy it to a server, it breaks.

Why?
Because your machine and their machine don’t have the exact same environment: maybe you're using Node.js 18, and they're using Node.js 16. Or you have certain libraries installed that they don’t.
These small differences create big problems.

That’s where Docker comes in.

🧊 Docker in Simple Words

Docker is a tool that helps you package your application and everything it needs (code, libraries, system tools, settings) into a small, portable box called a container.

This container can run anywhere, whether it's your laptop, your teammate’s system, or a cloud server, and it will behave exactly the same way. No surprises.


📦 So, What’s a Container?

A container is like a lightweight, standalone mini-computer that runs just your app and its dependencies. Unlike virtual machines, containers don’t need a full OS; they share the host system’s kernel, making them faster, smaller, and more efficient.

💡 Think of it like a shipping container: it can carry anything (your app), and no matter which ship or port it goes to (your system, a server, or the cloud), it fits and works exactly the same.


🔧 What Can You Do With Docker?

  • Run and test apps locally in isolated environments

  • Build once, deploy anywhere (no more “works on my machine”)

  • Spin up databases and services instantly

  • Create lightweight microservices

  • Use pre-built environments from Docker Hub (like Node, Python, MySQL, etc.)
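
For instance, spinning up a throwaway database is a one-liner. A quick sketch using the official postgres image (the password and container name here are just placeholders):

# Start a disposable PostgreSQL instance in the background
docker run -d --name my-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16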

Docker Architecture

1. Docker Client

🧍‍♂️ You, the user, interact with Docker using the Docker Client.

Whenever you run a Docker command in your terminal, like docker build, docker run, or docker pull, you’re using the Docker Client.


2. Docker Daemon (dockerd)

Think of this as the brain or engine of Docker.

The Docker Daemon runs in the background and listens for commands from the Docker Client. It’s responsible for building, running, and managing containers.
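
You can see the client/daemon split for yourself: docker version prints both a Client section and a Server section, and the Server part only shows up when the daemon is reachable.

# Prints Client and Server (daemon) versions; the Server section errors if dockerd isn't running
docker version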

What is containerd?

containerd is the actual container runtime used by Docker to manage the lifecycle of containers.

When you run:

docker run nginx

Here’s what happens:

  • The Docker Daemon receives your command from the Docker Client

  • Then it hands over the task to containerd, which actually:

    • Pulls the image

    • Unpacks it

    • Creates and starts the container

    • Manages the container process (start, stop, pause, resume, etc.)

💡 Analogy:
If Docker Daemon is the manager, then containerd is the worker actually doing the container tasks.


3. Docker Images

A Docker Image is like a snapshot or blueprint of your application and everything it needs.

Images contain your app, the OS libraries, and its dependencies: everything needed to run it.
You can think of it as a frozen template. When you run an image, you get a container.

Example:

docker pull node:18

This pulls a Docker image of Node.js version 18 from the registry.
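
You can confirm the image landed locally by listing images for that repository:

# List local images named "node"
docker images node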

4. Docker Registry

A place to store and share images.

Docker Hub is the default public registry, but you can also use private registries like:

  • AWS ECR

  • Azure Container Registry

  • GitHub Container Registry

When you pull an image like nginx, Docker fetches it from the registry.
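
Short names like nginx are actually shorthand: Docker expands them to the default registry and namespace. Pulling from another registry just means using the fully qualified name (the second image path below is purely an illustration):

# "docker pull nginx" is shorthand for the Docker Hub image:
docker pull docker.io/library/nginx:latest

# Pulling from another registry uses the registry's hostname (hypothetical image path):
docker pull ghcr.io/my-org/my-app:latest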


5. Docker Host

The machine where Docker is installed and running.

It could be your local system or a server in the cloud. The Docker Daemon runs on the host, and containers/images live here.


How It All Works Together:

Let’s say you want to run a Node.js app in Docker. Here's the journey:

  1. You (Client):
    Run docker build using Docker CLI or Docker Desktop → sends instruction to Daemon

  2. Docker Daemon:
    Builds image based on your Dockerfile

  3. Image is created and stored on the host

  4. Run the image:

    Run docker run <image_name>
    Docker creates a container from that image

  5. Need an existing image?
    Run docker pull → Daemon fetches it from Docker Hub (Registry)

  6. Now you can run, stop, restart the container anytime

Building Docker Images with Dockerfile: A Step-by-Step Guide

So far, we've learned what Docker is, how it works behind the scenes, and why it's a game-changer. Now let’s get practical: it’s time to build our own Docker image using something called a Dockerfile.


📄 What is a Dockerfile?

A Dockerfile is a simple text file that contains a list of instructions telling Docker how to build a custom image for your application.

💡 Analogy: Think of a Dockerfile as a recipe. It defines:

  • What ingredients to use (base image)

  • What steps to take (install packages, copy files)

  • What to serve (how your app runs)


Basic Structure of a Dockerfile

Here’s a simple Dockerfile for a Node.js app:

# Use official Node.js base image
FROM node:18

# Set working directory
WORKDIR /app

# Copy package.json and install dependencies
COPY package*.json ./
RUN npm install

# Copy the rest of the code
COPY . .

# Expose the app port
EXPOSE 3000

# Start the app
CMD ["node", "index.js"]

Let’s break that down:

  • FROM – Base image (e.g., node, python, nginx)

  • WORKDIR – Working directory inside the container

  • COPY – Copy files from your system into the image

  • RUN – Run shell commands (e.g., install dependencies)

  • EXPOSE – Tell Docker which port the app listens on

  • CMD – Default command to run when the container starts

🏗️ How Docker Builds an Image from Dockerfile

When you run docker build, Docker:

  1. Reads your Dockerfile line by line

  2. Executes each instruction in a separate layer

  3. Caches each layer to speed up future builds

  4. Bundles everything into a final image

This image is stored locally and can be used to spin up containers anytime, anywhere.

After that, run docker build -t my-node-app . to build the image from the Dockerfile; you can confirm the image exists with docker images.

Once the image is built, you can use it to start a container with docker run my-node-app. Check whether the container is running with docker ps, and list all containers, running or stopped, with docker ps -a.
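
Putting those steps together, a typical local workflow looks something like this (a sketch: my-node-app is just an example tag, and the port mapping assumes the app listens on port 3000, as in the Dockerfile above):

# Build the image from the Dockerfile in the current directory
docker build -t my-node-app .

# Confirm the image exists locally
docker images

# Start a container in the background, mapping host port 3000 to container port 3000
docker run -d -p 3000:3000 my-node-app

# Show running containers
docker ps

# Show all containers, including stopped ones
docker ps -a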

Docker Volumes – Persisting Data in Containers

Why Do We Need Volumes?

By default, when a Docker container shuts down or is deleted, all its internal data is lost, because containers are short-lived. But what if you're running a database or saving user uploads? You need persistent storage, and that's where Docker Volumes come in.

What is a Docker Volume?

A Docker Volume is a special directory stored outside the container, managed by Docker itself. Even if the container is deleted, the data in the volume persists.

How to Use Docker Volumes?

1. Creating a Volume:

docker volume create mydata

2. Using a Volume in a Container:

docker run -d -v mydata:/app/data myimage

This command maps the mydata volume to the /app/data folder inside the container.
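
You can list your volumes and inspect one to see its details, including where Docker stores its data on the host:

# List all volumes Docker manages
docker volume ls

# Show details for "mydata", including its mountpoint on the host
docker volume inspect mydata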

🔁 You can also use bind mounts (-v /local/path:/container/path), but volumes are better for portability and performance.

When to Use Volumes?

  • For databases (e.g., PostgreSQL, MongoDB)

  • For application logs

  • For user-uploaded files

  • For sharing data between containers


Docker Networking – Connecting Containers

Why Docker Networking?

Containers often need to talk to each other, like a frontend container talking to a backend or a backend connecting to a database. Docker provides built-in networking to make this easy and secure.

Docker Network Types

1. Bridge (default)

When you run a container without specifying a network, Docker attaches it to the default bridge network. Containers on the default bridge can only reach each other by IP address; to let them communicate by name, create a user-defined bridge network and connect them to it.

docker network create mynetwork
docker run -d --network=mynetwork --name db postgres
docker run -d --network=mynetwork --name app myapp

Now app can access db via its container name.
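
For example, inside the app container you could point your database client at the hostname db (the connection string below is purely hypothetical, with placeholder credentials), and you can check which containers are attached with docker network inspect:

# Hypothetical connection string used by the app; "db" resolves to the Postgres container
# postgres://user:password@db:5432/mydb

# List the containers attached to the network and their addresses
docker network inspect mynetwork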

2. Host

Shares the host machine’s network stack. Useful for performance or for applications that need direct access to host networking, but there is no network isolation.

docker run --network host myimage

3. None

No network access at all. Used for security or special cases.
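
For example, to run a container with no network interfaces apart from loopback:

docker run --network none myimage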

4. Overlay

Used in Docker Swarm mode to enable multi-host communication.
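
A minimal sketch: overlay networks require Swarm mode, so you initialize it first; the --attachable flag also lets standalone containers join (my-overlay is just an example name):

# Turn this host into a Swarm manager
docker swarm init

# Create an overlay network that standalone containers can attach to
docker network create --driver overlay --attachable my-overlay

# Run a container on the overlay network
docker run -d --network my-overlay --name web nginx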

Conclusion

Docker has completely transformed how developers build, ship, and run applications. From solving the "it works on my machine" problem to enabling consistent environments across development and production, Docker brings simplicity and power to software development.

In this blog, we covered:

  • The problems before Docker

  • How virtualization differs from containerization

  • What Docker is and how it works

  • Docker architecture, Dockerfiles, images, containers

  • Docker volumes and networking

👉 Stay tuned, and happy containerizing! 🚀


🔗 Follow My Journey

If you found this helpful and want to learn more about DevOps, Cloud, and AI in a beginner-friendly way, follow me on:

Let's learn and grow together! 💻✨


Written by

Sattyam Samania

I am a passionate frontend developer from India. I like to build attractive web pages. Here I share my journey with all the other folks.