Docker for Beginners – A Practical Approach with Real-World Cases

Pankaj Roy

In today’s software world, apps need to run smoothly everywhere—on a developer’s laptop, a test server, or in production. But here’s the common problem:
“It works on my machine, but not on yours.”

This happens because of missing dependencies, version mismatches, and different operating systems. The solution? Package the app with everything it needs so it runs the same everywhere.

That’s where Docker comes in. Docker uses containerization to make apps portable, lightweight, and easy to deploy. In this guide, we’ll explain Docker in plain English, with real-life examples and diagrams—perfect for beginners.

📦What is a Container?

A container is like a virtual machine, but it works in a slightly different way. We will discuss the differences later in this tutorial. For now, think of a container as a lightweight, portable environment that can run your application along with all its dependencies.

🐳What is Docker?

Docker is a popular tool that helps create and manage these containers.

  • You can think of Docker as a company, and the Docker Engine as the tool that runs containers.

  • Nowadays, people simply call it Docker for convenience.
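If you already have Docker installed, you can see the engine in action with two commands. This is a minimal sketch: it checks for the daemon first and skips gracefully when Docker is not available.

```shell
# Minimal sanity check, assuming Docker Engine (or Docker Desktop) is
# installed. Skipped gracefully when the daemon is not reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker --version              # version of the installed client
  docker run --rm hello-world   # pull a tiny test image and run it once
else
  echo "Docker daemon not reachable; the commands above are illustrative."
fi
echo "hello-world check finished" > docker-check.log   # marker for the demo run
```

The `hello-world` image exists purely to prove that your client, daemon, and registry access all work together.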

🔁Why Do We Use Docker?

In a typical software development lifecycle, we have three main stages:
Development → Testing → Operations

Here’s the problem:

  • The development team builds the application on their system.

  • When the testing or operations team tries to run it, they often face dependency issues like:

    • Missing DLL files

    • Missing packages

    • Version mismatches

These issues lead to conflicts between teams and sometimes even a blame game.

🔍How Did We Try to Solve This Before Docker?

Developers tried to write code that automatically downloaded the required dependencies at startup. If something was missing, the application would throw an error.

Companies even thought about bundling the Operating System (OS) with the application and software so that everything runs smoothly. But sending an entire OS to every customer or team was not practical.

🖥️The Practical Solution: Virtualization

To solve this problem, virtualization became popular. Virtualization allows you to run multiple operating systems on the same hardware using tools like:

  • Oracle VirtualBox

  • VMware hypervisors (ESXi, Workstation)

However, virtualization has its own limitations, which we will discuss later when comparing it with containers.

💻Example 1.1: Understanding Virtual Machines

Diagram showing VMware virtualization. Three virtual machines (VM1: Linux, VM2: Windows, VM3: Mac) run on top of a hypervisor (ESXi), which is on the hardware. A speech bubble labeled "Virtualization" points to the VMs.

Let’s assume you have a laptop with 16 GB RAM, 1 TB SSD, and an 8-core CPU.
Now, you create three virtual machines (VMs):

  • VM1: Linux OS → Uses 4 GB RAM and 250 GB storage

  • VM2: Windows OS → Uses 8 GB RAM and 500 GB storage

  • VM3: macOS → Uses 2 GB RAM and 150 GB storage

These VMs take resources directly from your laptop hardware. Once the VMs are ready, you can install the required software inside them.

The best part? You can share these VMs with other teams (like Testing or Operations) by creating a template or image. The testing team can boot these VMs on their systems, and all the software will already be installed. This saves time and ensures consistency.

Improved Real-Life Example

Think of it like online classes on YouTube. Your teacher is teaching virtually, just like in an offline classroom. You can see and learn everything, but you cannot physically touch or interact with the teacher. Similarly, a virtual machine is like a real computer, but it exists virtually inside your system.

🧪What Happens After Testing?

Once the testing team runs the application (for example, a game like PUBG) and finds a bug, they report it to the development team. The developers fix the issue, create a new VM image, and send it to the operations team for final verification.

This process works well, but it has limitations.

Question 1: If Everything Works, Why Do We Need Docker?

Here’s the problem:
Virtualization wastes resources. Let’s go back to Example 1.1.
Suppose you want to create VM4 with 8 GB RAM and 500 GB storage, but your laptop only has 2 GB RAM and 100 GB storage left. Even though VM1, VM2, and VM3 have unused space, they cannot share it because each VM has a fixed configuration.

This leads to:

  • Resource wastage

  • Higher costs

Even on cloud platforms like AWS or Azure, you can increase RAM and storage for VMs, but the price goes up significantly.

How Docker Solves This Problem

Docker introduces containers, which are lightweight and share the same OS kernel. Unlike VMs, containers don’t need a full operating system for each instance. This means:

  • Better resource utilization

  • Lower costs

  • Faster performance
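This difference shows up directly on the command line. In the hedged sketch below, the `--memory` and `--cpus` flags set optional caps, not fixed reservations like a VM's RAM setting:

```shell
# Sketch: container resource limits are optional caps, not fixed
# reservations. Unused capacity stays available to the host and to
# other containers. Skipped gracefully when no Docker daemon is running.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # Cap this container at 512 MB of RAM and one CPU:
  docker run --rm --memory=512m --cpus=1 alpine echo "inside a capped container"
else
  echo "Docker daemon not reachable; the flags shown are illustrative."
fi
echo "resource demo finished" > resource-demo.log   # marker for the demo run
```

A container with no limits set simply draws from the host as needed, which is exactly the "joint family" sharing described later in this guide.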

🧱Example 1.2: Virtual Machines vs Containers

Diagram illustrating containerization: three containers (CN1, CN2, CN3) run on a Docker Engine, over an OS (e.g., Linux) and hardware. A speech bubble compares Docker Engine to VMware Hypervisor.

When you create a Virtual Machine (VM) using a Hypervisor, you need to install a separate operating system on each VM. For example, in Example 1.1, VM1, VM2, and VM3 each had their own OS (Linux, Windows, macOS). This makes VMs heavy because every VM carries its own OS along with the application.

Now, let’s see what happens with containers.

When you create containers (say CN1, CN2, CN3) using Docker Engine, these containers do not have their own operating system. Instead, they share the host system’s OS kernel. For example, if your laptop runs Linux, all containers will use the Linux kernel.

🔑Real-Life Analogy

Imagine you want to travel to Delhi. Do you need to buy your own car? No! You can use a train, bus, or any other shared transport. Similarly, containers don’t carry their own OS; they use the OS of your system. This makes them lightweight and efficient.

📦Why Containers Are Better

  • Containers consume less space because they don’t include a full OS.

  • When a container finishes its task, it stops and releases the resources back to the system.

  • In the future, if you need to run another container, it will reuse the available hardware resources.

  • Unlike VMs, containers don’t hold resources permanently, which means better resource utilization.

🌐Containerization and Docker Hub

Docker Hub is like a central storage or marketplace for container images and their dependencies. Think of it as Walmart for software components—you can find everything you need in one place.

When you run an application inside a container, Docker pulls the required image layers from Docker Hub. This ensures that your application runs smoothly without missing packages or version conflicts.

⚙️How Docker Works

  • When Docker creates a container, it fetches all necessary dependencies from Docker Hub.

  • Docker acts as a deployment tool that makes application delivery fast and consistent.

  • A container can provide an environment like Ubuntu, CentOS, or Alpine Linux, and on top of that, you can run any application—similar to how you use VMware or AWS EC2.

Once your container is ready, you can create an image of it with all dependencies included and share it with other teams. For example:

  • The Testing Team can run this container image on their system using Docker Engine.

  • If any dependency is missing, Docker will automatically download it from Docker Hub.
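In practice, "pulling dependencies from Docker Hub" looks like this. A minimal sketch (skipped when no daemon is reachable); note that an image downloads as a stack of layers, and layers you already have locally are reported as "Already exists":

```shell
# Sketch: pulling an image downloads it layer by layer from Docker Hub.
# Layers already cached locally are skipped (shown as "Already exists").
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker pull alpine:latest   # fetch the image, layer by layer
  docker image ls alpine      # confirm the image is now stored locally
else
  echo "Docker daemon not reachable; commands shown are illustrative."
fi
echo "pull demo finished" > pull-demo.log   # marker for the demo run
```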

🧩OS-Level Virtualization in Docker

  • Docker is an open-source platform designed to create, deploy, and run applications.

  • Docker uses containers on the host OS to run applications. It allows apps to share the same Linux kernel as the host system instead of creating a full virtual OS like VMs in AWS EC2 or VMware Hypervisor.

  • You can install Docker on any OS, but Docker Engine runs natively on Linux. If your system uses Windows or macOS, Docker sets up a lightweight Linux environment internally (via WSL 2 or a virtual machine).

  • Docker is written in the Go programming language.

  • Docker performs OS-level virtualization, also known as containerization.

  • Before Docker, developers faced the classic problem:
    “It works on my machine, but not on yours.”
    This caused conflicts between teams. Docker solved this by providing a consistent environment everywhere.

  • Docker was first released in March 2013, developed by Solomon Hykes and Sebastien Pahl.

  • Docker is a Platform as a Service (PaaS) that uses OS-level virtualization, whereas VMware uses hardware-level virtualization.

Think of PaaS like this:

  • Jupyter Notebook is used to run Python code without worrying about the underlying OS setup.

  • VS Code allows you to write and run code in multiple languages without manually configuring everything.
    Similarly, Docker provides an environment where your application runs without worrying about OS-level differences.

🖥️VMware Virtualization (Hardware-Level Virtualization)

VMware's flagship hypervisor, ESXi, uses hardware-level virtualization and runs as a Type-1 (bare-metal) hypervisor. In this approach, the hypervisor divides the physical resources (CPU, RAM, storage) among multiple virtual machines (VMs).

However, there’s a limitation:

  • Each VM gets a fixed allocation of resources.

  • These resources cannot be shared with other VMs, even if they are idle.

Diagram illustrating VMware virtualization. It shows three virtual machines (VM1 with Linux, VM2 with Windows, VM3 with Mac) running on a hypervisor (ESXi), which operates on hardware. An arrow labeled "Hardware-based resource allocation" points from the hardware to the setup.

📖Analogy

Think of a nuclear family. Every family member has their own house, ration, and resources. They don’t share resources with other families. Similarly, in VMware virtualization, VM1 cannot share its unused resources with VM2 or VM3.

Now compare this with a joint family. In a joint family, if someone needs extra resources, they can borrow from others. Once they’re done, they return it. This is exactly how containers in Docker work—they share resources dynamically and release them when not in use.

🛠️How Docker Handles Containers and Resources

Diagram illustrating containerization. Two containers, CN1 with Ubuntu and CN2 with Kali Linux, run on a stack including Docker Engine, RHEL, and Hardware. Common files are sourced from RHEL, and remaining files are pulled from Docker Hub. Arrows indicate the flow of files and process.

Let’s take an example:
You have a laptop with 16 GB RAM and 1 TB SSD, and you install RHEL (Red Hat Enterprise Linux) as the operating system. RHEL has full access to all the hardware resources.

Next, you install Docker Engine on this RHEL system. Using Docker, you want to create two containers:

  • CN1 → Ubuntu

  • CN2 → Kali Linux

🎭What Happens Behind the Scenes

  • When you request Docker to create these containers, Docker does not install a full OS like a VM.

  • Instead, Docker shares the host OS kernel (RHEL kernel) with the containers.

  • Containers only include the user space of Ubuntu or Kali (binaries, libraries, and tools), not their own kernel.

  • Docker checks which layers of the requested image (Ubuntu/Kali) are already available locally.

  • Layers already present are reused; missing layers are downloaded from Docker Hub.

  • Docker then assembles these layers into a complete image and starts containers CN1 and CN2.
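You can verify the shared-kernel claim yourself. A small sketch (on a Linux host; on Windows or macOS the "host" in this comparison is the Linux VM that Docker runs internally):

```shell
# Sketch: containers share the host kernel instead of booting their own.
# On a Linux host the two lines below print the same kernel release.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  echo "host kernel:      $(uname -r)"
  echo "container kernel: $(docker run --rm ubuntu uname -r)"
else
  echo "Docker daemon not reachable; commands shown are illustrative."
fi
echo "kernel demo finished" > kernel-demo.log   # marker for the demo run
```

An Ubuntu container on a RHEL host reports the RHEL kernel: the container brings only its user space, never a kernel.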

🔑Key Difference from VMware

  • In VMware virtualization, each VM has its own OS and kernel and takes resources directly from hardware.

  • In Docker, containers share the host kernel and request resources from the host OS, making them lightweight and faster.

📊Resource Management

  • Containers (CN1 and CN2) request resources from RHEL, not directly from the hardware.

  • This is different from VMware virtualization, where each VM takes resources directly from the hardware and reserves them permanently.

This approach makes Docker lightweight, efficient, and resource-friendly compared to traditional virtualization.

🧠Example 1.3: How Docker Manages Resources Like a Kernel

Diagram showing three layers: green boxes representing Google Chrome, Adobe, and MS Office on top, a beige box labeled "Kernel" in the middle, and a blue box labeled "Hardware" at the bottom, with arrows indicating interaction between them.

On your laptop, when you open Google Chrome, it might need 2 GB of RAM. The kernel acts as a manager—it takes resources from the hardware and allocates them to Chrome or any other application that requests them.

Docker works in a similar way for containers. It manages resource allocation for containers without letting them directly interact with the hardware.

Question 2: Does a Container Have an Operating System?

Answer: Partially.
A container ships the user space of its base image—binaries, libraries, and configuration files—but no kernel of its own. It always uses the host's kernel.

For example, if you want to create an Ubuntu container on a Windows laptop, the flow looks like this:
Hardware → Windows → Hypervisor (or WSL 2) → Linux VM → Docker Engine → Ubuntu Container

This means Docker uses OS-level virtualization, not full hardware virtualization like VMware.

🚀Advantages of Docker

  • No Pre-Allocation of RAM
    Docker does not reserve fixed memory for containers. Resources are allocated dynamically as needed.

  • CI/CD Efficiency
    Docker enables you to build a container image once and use the same image across all stages of deployment.
    Example: The same image can be shared among development, testing, and operations teams.

  • Cost-Effective
    Docker reduces infrastructure costs because it uses fewer resources compared to virtual machines.

  • Lightweight and Fast
    Containers are smaller in size and consume fewer resources, making them faster to start and stop.

  • Runs Anywhere
    Docker containers can run on physical hardware, virtual machines, or cloud platforms without modification.

  • Quick Deployment
    Creating a container takes seconds, unlike VMs which take minutes to boot.

  • Image and Container Concept

    • A running instance of an image is called a container.

    • When the container stops, the image itself is unchanged and stays available.

    • You can reuse images multiple times.

  • Example:
    When you request an OS for a container, Docker pulls the required files from Docker Hub and creates a copy for the container. The original image remains available for future use—just like sharing a movie with a friend while keeping your own copy.
    Similarly, you can push your container image to Docker Hub for future use.

  • Modifying Containers
    You cannot modify an image directly, but you can make changes inside a container and then create a new image from it.
    Example:
    If your container has 20 software packages and 200 files, and you uninstall 10 software packages, you can create a new image with the updated configuration.
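The modify-then-commit flow from the last bullet can be sketched as follows. The container name `mybox` and the image tag `myubuntu:v2` are made up for this example:

```shell
# Sketch of the modify-then-commit flow. The container name "mybox" and
# the image tag "myubuntu:v2" are hypothetical names for this example.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run -d --name mybox ubuntu sleep 300   # start a container to modify
  docker exec mybox touch /tmp/new-file         # make a change inside it
  docker commit mybox myubuntu:v2               # save the changed state as a new image
  docker rm -f mybox                            # removing the container keeps the image
else
  echo "Docker daemon not reachable; commands shown are illustrative."
fi
echo "commit demo finished" > commit-demo.log   # marker for the demo run
```

In day-to-day work, a Dockerfile is usually preferred over `docker commit` because it records the changes as repeatable instructions.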

⚠️Disadvantages of Docker

  • Not Ideal for GUI Applications
    Docker is not the best choice for applications requiring a rich graphical interface. For such cases, use VMware or similar virtualization tools.

  • Difficult to Manage at Scale
    Managing a large number of containers can be complex without orchestration tools like Kubernetes.

  • Limited Cross-Platform Compatibility
    Docker does not provide full cross-platform support.
    Example:
    If you build a Windows-based container image, it cannot run directly on a Linux host (and vice versa).
    If the OS is different, you need to create a VM first and then run Docker inside it.

  • No Built-in Solution for Data Recovery and Backup
    Docker does not provide a native mechanism for data recovery or backup. You need to implement external solutions for persistent storage and backup strategies.

⚔️Example 1.4: When to Use Docker vs Virtual Machines

Diagram showing two container stacks, each with three green containers labeled CN1, CN2, and CN3, on top of a Docker Engine, Linux, and Hardware. An arrow between them reads, "Matching host OS allows the container to run."

  • Docker is suitable when the development OS and testing OS are the same.
    If the operating systems are different, it’s better to use a Virtual Machine (VM) for compatibility.

Diagram illustrating a container incompatibility issue. The left side shows containers (CN1, CN2, CN3) running on a Linux-based Docker Engine, while the right side shows the same containers on a Windows-based Docker Engine. An arrow with text indicates a "Host OS mismatch means the container can’t run."

🏗️Docker Architecture Explained

Diagram illustrating Docker architecture. A developer creates a Dockerfile, which is processed by a Docker Engine running on an Ubuntu-Jenkins VM to create an image. This image is stored in Docker Hub as a registry. Containers for development, QA, and operations pull the image from Docker Hub.

Imagine a developer writing a Dockerfile that contains all the instructions, dependencies, and required software to build an image. Once the image is built, the developer runs it, and a container is created.

To ensure consistency across environments, the developer pushes this image to Docker Hub—a centralized registry where all Docker images are stored. Other teams, like Testing or Operations, can then pull the same image from Docker Hub and run it on their local machines. This guarantees that everyone uses the exact same environment, eliminating the classic “works on my machine” problem.
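A minimal sketch of that pipeline: a tiny Dockerfile, then the build and push steps. The repository name `yourname/demo-app` is a placeholder, and `docker push` assumes you have already run `docker login`:

```shell
# Sketch of the Dockerfile -> image -> registry flow described above.
# "yourname/demo-app" is a placeholder Docker Hub repository.

# A tiny application to package:
echo 'print("hello from the container")' > app.py

# Each instruction below that changes the filesystem becomes one image layer.
cat > Dockerfile <<'EOF'
# Base image (first layer)
FROM ubuntu:22.04
# Install the runtime the app needs (second layer)
RUN apt-get update && apt-get install -y python3
# Add the application code (third layer)
COPY app.py /app/app.py
# Default command when a container starts from this image
CMD ["python3", "/app/app.py"]
EOF

if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker build -t yourname/demo-app:1.0 .   # Dockerfile -> image
  docker push yourname/demo-app:1.0         # image -> Docker Hub (needs 'docker login')
else
  echo "Docker daemon not reachable; the build/push steps are illustrative."
fi
```

Any team can then run `docker pull yourname/demo-app:1.0` and get byte-for-byte the same environment.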

What is Docker Hub?

Docker Hub is like GitHub for container images. It’s a registry/storage platform where images are stored and shared among teams. Anyone with access can pull these images and run them locally.

📚Layered File System in Containers

A container uses a layered file system. Each layer represents a set of changes (like installing software or adding files) made during the image creation process. When you build an image, Docker stacks these layers one on top of another.

Real-Life Analogy:
Think of it like building a burger:

  • The base bun is your base image (e.g., Ubuntu).

  • Each topping (cheese, lettuce, sauce) is a layer added on top.

  • The final burger is your container image.

When you run the container, Docker uses these layers efficiently, reusing common layers across multiple containers to save space and speed up deployment.
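You can inspect those "burger layers" directly. A hedged sketch using `docker history`, which prints one row per layer:

```shell
# Sketch: each Dockerfile instruction that changed the filesystem shows
# up as one layer. 'docker history' lists the layers, newest first.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker pull ubuntu:22.04
  docker history ubuntu:22.04
else
  echo "Docker daemon not reachable; commands shown are illustrative."
fi
echo "layers demo finished" > layers-demo.log   # marker for the demo run
```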

🌍Docker Ecosystem Explained

Diagram showing the Docker Ecosystem structure. Arrows point from "Docker Ecosystem" to components: Docker Client, Docker Hub, Docker Compose, Docker Engine/Daemon/Server, and Docker Image.

Docker is called an ecosystem because it’s not just a single tool—it’s a set of components and services that work together to build, run, and manage containers. Let’s break down the main components:

⚙️Docker Daemon (Docker Engine)

  • Runs on the host operating system.

  • Responsible for creating, running, and managing containers.

  • Can communicate with other Docker daemons for distributed container management.

Diagram showing a stack with four layers, labeled from top to bottom: Container (green), Docker Engine (peach), Host OS (blue), and Hardware (gray). An arrow points to the Docker Engine layer.

💻Docker Client (CLI)

  • The interface for users to interact with Docker.

  • Uses CLI commands and a REST API to communicate with the Docker Daemon.

  • When you run a command in the terminal, the client sends it to the daemon for execution.

  • A single Docker client can communicate with multiple Docker Engines.

  • Analogy: Think of the Docker client as a bank clerk and the Docker daemon as the bank manager. When you request a loan (run a command), the clerk forwards it to the manager for approval.
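A quick way to see that the client and the daemon are separate programs: `docker version` prints a Client section and a Server (daemon) section. The `remote-host` address below is hypothetical:

```shell
# Sketch: the client (CLI) and the daemon are separate programs.
# 'docker version' prints a Client section and a Server (daemon) section.
if command -v docker >/dev/null 2>&1; then
  docker version || true   # Server section is missing when no daemon is reachable
  # The same client can talk to a remote daemon (hypothetical host):
  # DOCKER_HOST=tcp://remote-host:2376 docker ps
else
  echo "docker CLI not installed; commands shown are illustrative."
fi
echo "client demo finished" > client-demo.log   # marker for the demo run
```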

🖥️ Docker Host

Diagram illustrating Docker architecture with four layers: Container, Docker Engine, Host OS, and Hardware. An arrow labeled "Docker Host" points to the layers.

  • The physical or virtual machine where Docker is installed.

  • Contains the Docker Engine, images, containers, networks, and storage.

  • Provides the environment to execute and run applications.

🌐 Docker Hub (Registry)

  • A registry that stores Docker images.

  • Two types of registries:

    • Public Registry: Docker Hub (accessible to everyone).

    • Private Registry: Used within an organization, similar to a private GitHub repository.

🖼️Docker Images

  • Read-only templates used to create containers.

  • Contain all dependencies and configurations required to run an application.

  • Ways to create images:

    • Pull from Docker Hub.

    • Build from a Dockerfile.

    • Commit from an existing container.

📦 Docker Containers

  • A running instance of an image.

  • Holds everything needed to run an application.

  • Image = Template, Container = Running copy of that template.

  • When an image runs on Docker Engine, it becomes a container.
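The image/container relationship is easy to see on the command line. A sketch (the container name `demo-ctr` is made up):

```shell
# Sketch: images are templates, containers are running copies.
# Stopping or removing a container never deletes the image it came from.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker images                                  # templates stored locally
  docker run -d --name demo-ctr alpine sleep 60  # a running instance of "alpine"
  docker ps                                      # list running containers
  docker stop demo-ctr                           # the container stops...
  docker rm demo-ctr
  docker images alpine                           # ...but the image is still here
else
  echo "Docker daemon not reachable; commands shown are illustrative."
fi
echo "lifecycle demo finished" > lifecycle-demo.log   # marker for the demo run
```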

🧠 Final Thoughts

Docker revolutionizes application deployment with lightweight, portable containers that ensure consistency and speed. It’s not a replacement for VMs but a smarter choice for modern DevOps and cloud-native workflows. Build once, run anywhere—that’s the power of Docker.

---

- Written by Pankaj Roy | DevOps & Cloud Enthusiast
