Understanding Containerization and Virtualisation: A Guide to Docker

JAY PATIL
3 min read

Some history for context

In the early days of IT infrastructure, companies ran one OS and one app per server, which was

  • Not very efficient

  • Wasteful of hardware (CPU, RAM, and storage often sat idle or under-utilised)

  • Costly, due to the overhead of running a full OS on every server

To solve these problems, a piece of software called a hypervisor was built.

Hypervisor

A hypervisor is a piece of software (or firmware) that allows multiple virtual machines (VMs) to run on a single physical machine (host). Each VM has its own operating system and is isolated from others. The hypervisor manages system resources (CPU, memory, storage) and distributes them across the VMs.

Analogy:

Think of a hypervisor like a big building with multiple apartments: each apartment has its own kitchen, bathroom, and so on. Each one is isolated and self-contained (like a VM).

The hypervisor is great management software, but it has some drawbacks:

  • It is heavyweight

  • It is bulky, because each VM needs a full guest OS alongside it

  • The cost is high

This is where the Docker Engine comes in, which helps virtualise systems more efficiently.

Virtual Machines vs. Containers

Virtual Machines provide full hardware-level virtualisation with stronger isolation, making them ideal for legacy systems and multi-OS environments. However, they are resource-heavy and slower to start.

Containers, like those managed by Docker, offer lightweight, OS-level virtualisation. They’re fast, efficient, portable, and ideal for microservices and cloud-native apps. In modern development, containers are the preferred choice for scalability and automation, while VMs are still valuable in specific use cases.

Analogy:

Think of a VM like renting separate houses—each with its own foundation (OS), kitchen, and electricity. More secure, but expensive and slow to build.

Containers are like apartments in the same building—they share plumbing (OS), but each unit (app) has its own space. They're cheaper, faster, and easier to manage.
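
If you want to see the "shared plumbing" for yourself, one quick check (a minimal sketch, assuming Docker is already installed on a Linux host and using the public alpine image) is to compare the kernel the host reports with the kernel a container reports:

```bash
# Kernel version on the host
uname -r

# Kernel version inside a throwaway Alpine container
# (--rm removes the container as soon as the command exits)
docker run --rm alpine uname -r
```

Both commands print the same kernel version, because containers share the host's kernel; a VM, by contrast, boots its own kernel.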

What are Docker and Containers?

Docker is an open-source platform used to build, ship, and run applications in containers.

Containers are lightweight, portable, and self-sufficient environments that include everything an application needs to run—code, runtime, libraries, and dependencies.

Docker enables developers to package applications and run them consistently across different environments, such as development, testing, and production.

Example:

Imagine you're developing a Node.js app. With Docker, you define everything in a Dockerfile and run the app in a container. Whether it runs on your laptop, a staging server, or a production cloud server, the app behaves exactly the same.
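
A minimal sketch of what that Dockerfile might look like, assuming a typical Node.js app with a package.json, an npm start script, and a server listening on port 3000 (the base image tag and the port are illustrative assumptions, not taken from a real project):

```dockerfile
# Base image with Node.js preinstalled (illustrative tag)
FROM node:20-alpine

# Do everything inside /app from here on
WORKDIR /app

# Copy the dependency manifests first so this layer is cached
# as long as package.json / package-lock.json don't change
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# The app is assumed to listen on port 3000
EXPOSE 3000

# Default command when the container starts
CMD ["npm", "start"]
```

Copying package*.json before the rest of the code is a common layer-caching trick: dependency installation only re-runs when the manifests change, not on every code edit.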

Analogy:

Think of a virtual machine like a house with its own kitchen and utilities—self-contained, but heavy and resource-intensive.

Docker containers are like roommates sharing a house. They each have their own bedrooms (apps), but share the kitchen (OS). This setup is much more efficient.

Common Docker Components

  • Dockerfile: A text file of instructions that defines how to build a container image.

  • Image: A read-only template used to create containers (think of it as a blueprint).

  • Container: A running instance of an image.

  • Docker Hub: A public registry for sharing images.

  • Docker Compose: A tool to define and run multi-container Docker apps (the command sketch below shows how these pieces fit together).
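
To make these components concrete, here is a hedged command-line sketch of how they relate (image and account names such as my-node-app and myusername are placeholders, not from this article):

```bash
# Dockerfile -> Image: build an image from the Dockerfile in the current directory
docker build -t my-node-app .

# Image -> Container: start a running instance of that image
docker run -d -p 3000:3000 --name web my-node-app

# Image -> Docker Hub: tag the image and push it to a public repository
docker tag my-node-app myusername/my-node-app:latest
docker push myusername/my-node-app:latest

# Docker Compose: start a multi-container app described in a docker-compose.yml file
docker compose up -d
```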

Part 2: Docker setup, commands, layers, and more coming soon…

