Docker - Diving deep inside a container


You’d have to be living under a rock to have never heard of “Docker” and “containerization”. This blog covers the under-the-hood components of a Docker container. It is a little more advanced than a Docker 101 getting-started guide and assumes you have beginner knowledge of Docker and containers, although I will try my best to explain things as simply as possible.
Virtualization:
So, let’s get started.
To understand Docker we need to understand virtualization and why it is important. We often need environments that contain only a specific set of dependencies and libraries, and only the required amount of resources (CPU, RAM, etc.), to develop applications, microservices, infrastructure, and many other such use cases. So instead of installing everything directly on our local machine, which can get messy and resource-heavy, we package the application and its environment into a neat, isolated unit called a container. This makes it easy to run, test, and deploy applications consistently, anywhere. This is virtualization!
Virtualization refers to the concept of creating and running virtual instances of applications and their environments, often referred to as containers, on a single physical or virtual machine.
Traversing the Stack: Starting from the Bottom
At the foundation of virtualization, we encounter hypervisors — the core technology that makes running virtual machines possible.
A hypervisor is software that lets you run multiple virtual machines on a single physical machine.
There are mainly two types of hypervisors:
Type 1 Hypervisors (Bare Metal):
These hypervisors run directly on the host’s hardware without the need for an underlying operating system like Windows or macOS. They manage hardware resources such as CPU, RAM, and storage, and allocate them to virtual machines (VMs) directly.
Examples: VMware ESXi, Hyper-V (in bare-metal mode), KVM (Kernel-based VM on Linux).
Type 2 Hypervisors (Hosted):
These hypervisors run on top of an existing operating system (like Windows, macOS, or Linux). They behave like regular software applications and are easier to install and use, especially for desktop virtualization. However, they are generally slower than Type 1 hypervisors due to the extra OS layer.
Examples: VirtualBox or VMware Workstation.
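If you are curious whether your own machine could host a Type 1 style hypervisor like KVM, a quick sketch (Linux-only, and guarded so it degrades gracefully elsewhere) is to look for the CPU's hardware virtualization flags:

```shell
# vmx = Intel VT-x, svm = AMD-V: the hardware extensions KVM relies on.
# /proc/cpuinfo only exists on Linux, so fall back to a message elsewhere.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  echo "hardware virtualization supported"
else
  echo "no virtualization flags found (or not on Linux)"
fi
```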
With hypervisors in place, we can build Virtual Machine Images (VMIs) — think of them as blueprints for full operating systems. These images are then used to launch actual Virtual Machines (VMs), ready to run in isolated environments.
Virtual Machine(VM):
A Virtual Machine (VM) is essentially made up of:
An Operating System — with its own allocated resources like RAM, CPU, and storage. It runs separately from your host system.
Libraries and Dependencies — everything your application needs to function properly inside the VM.
Binaries — the compiled files of your program that let you run your software anywhere the VM is supported.
This brings us to
Containers:
🚢 What Exactly is a Container?
Imagine you have an app you want to run, but you don’t want to worry about setting up the perfect environment every time — installing the right libraries, the right versions, and making sure it works on every machine.
That’s where containers come in.
A container in Linux is a lightweight, standalone environment that packages everything your app needs — binaries, libraries, dependencies — into a neat, isolated unit. While it runs separately from the rest of the system, it still shares the same Linux kernel underneath. This makes containers fast, efficient, and super portable.
Here’s why containers are awesome:
You can run multiple apps with different dependencies on the same system — no conflicts.
Each container includes only what’s needed — no extra baggage.
It starts up in seconds and uses fewer resources compared to traditional methods.
Tools like Docker make working with containers a breeze.
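As a minimal sketch of that ease of use (the image name here is just an example, and the commands are guarded so they no-op where Docker is not installed):

```shell
# Run a throwaway container, print a message from inside it, and clean up.
# "python:3.12-slim" is only an example image; any image would do.
if command -v docker >/dev/null 2>&1; then
  docker run --rm python:3.12-slim python -c 'print("hello from a container")'
else
  echo "docker not installed; skipping demo"
fi
```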
🥊 Containers vs Virtual Machines (VMs): What’s the Difference?
You might be wondering — how are containers different from virtual machines? Good question.
Let’s break it down:
| Feature | Containers | Virtual Machines (VMs) |
| --- | --- | --- |
| Isolation | Process-level (lighter) | Full hardware-level |
| OS Overhead | Share the host OS kernel | Each has its own full OS |
| Startup Time | Seconds | Minutes |
| Resource Usage | Minimal | Heavy — includes full OS |
| Image Size | Just the app & dependencies | Full OS + app stack |
| Best For | Microservices, CI/CD, scalability | Full OS environments, legacy apps |
Running a VM is like setting up a whole computer for each app — OS, services, everything. It’s powerful, but bulky.
Containers are more like mini-isolated apps. They bring just what they need and skip the rest, which means less overhead, faster startup, and smoother performance — especially in modern cloud-native development.
What Happens Inside the Container…
Let’s take a closer look inside a container:
A container holds the binaries and libraries needed to run your application — but how does it stay isolated from everything else on the system?
That’s where namespaces and cgroups step in.
Namespaces allow the kernel to take system resources like process trees, mount points, network interfaces, hostnames, users, and inter-process communication, and wrap each of them in a separate environment. This means a container gets its own private view of the system, with no visibility into what's outside.
cgroups (control groups) help by putting limits on how much CPU and memory a container can use. This ensures that no single container hogs all the resources — you only use what’s allowed, keeping things efficient and under control.
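Since the cgroup filesystem looks quite different between versions, here is a small sketch that tells you which one your system uses (the demo later in this post assumes cgroup v2):

```shell
# cgroup v2 mounts one unified hierarchy at /sys/fs/cgroup and exposes
# a cgroup.controllers file there; v1 mounts a directory per controller.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
  echo "cgroup v2 (unified hierarchy)"
  cat /sys/fs/cgroup/cgroup.controllers
elif [ -d /sys/fs/cgroup/memory ]; then
  echo "cgroup v1 (one hierarchy per controller)"
else
  echo "cgroup filesystem not mounted"
fi
```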
You can think of namespaces like giving each container its own little world:
"Whatever happens in the namespace, stays in the namespace."
Some of the key Linux namespaces include:
PID (process IDs)
cgroup (resource limits and accounting)
user (user and group IDs)
UTS (hostname and domain name)
IPC (inter-process communication)
net (network stack)
mnt (mount points / file systems)
This isolation applies to all namespaces inside a container — whether it's PID, mount, network or any other. And that’s what keeps containers so neatly separated from each other and the host — Even though they share the same underlying kernel.
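You can actually see these namespaces for any process on a Linux box: every entry under /proc/&lt;pid&gt;/ns is a handle to one namespace, and two processes whose entries show the same inode number share that namespace:

```shell
# List the namespaces the current shell belongs to. Each symlink name is a
# namespace type (pid, net, mnt, uts, ipc, user, cgroup) and the number in
# brackets is the namespace's inode; equal inodes mean a shared namespace.
ls -l /proc/self/ns 2>/dev/null || echo "/proc not available on this system"
```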
Let’s understand it with a demo:
Prerequisites: Make sure Docker is installed on your machine, and you’re comfortable with the basics — like creating and running containers, navigating the Linux filesystem, and using basic Linux commands.
Start by listing all the cgroups currently present on your system, especially the “memory” cgroup files:

```
ls /sys/fs/cgroup
ls /sys/fs/cgroup/memory.*
```
Now let us create a new cgroup:

```
cd /sys/fs/cgroup
mkdir chad
ls chad
```

Just by creating a new directory, the kernel automatically initialized a new cgroup (its controller files are cloned from the parent) inside /sys/fs/cgroup. Pretty cool, right? (On a cgroup v1 system you would create it under a controller directory instead, e.g. mkdir memory/chad.)
To make managing control groups easier, install the cgroup-tools package:

```
apt install -y cgroup-tools
```
Now, let’s snapshot the state of the memory cgroups before running a Docker container, writing the cgroup memory controller hierarchy to a file called before.memory.
memory controller: a Linux kernel feature that monitors and limits memory (RAM) usage.

```
lscgroup memory:/ > before.memory
```
We will use an nginx server container for this demo. Take another snapshot after creating the Docker container and write it out to an after.memory file. Comparing the two files (e.g. with diff before.memory after.memory) reveals the cgroups Docker created.

```
docker container run --detach --name nginx nginx
lscgroup memory:/ > after.memory
```
Run docker ps and copy the container ID of the container:

```
docker ps
```
Now let’s see the maximum memory allowed by the cgroup.
💡 This command is only applicable on cgroup v2 systems (run it from /sys/fs/cgroup):

```
cat system.slice/docker-<docker-container-id>.scope/memory.max
```

If it shows a number, that’s the limit in bytes. If it shows max, no limit has been set (i.e., unlimited memory allowed).
If your system is using cgroup v1, replace the command with:

```
cat memory/system.slice/docker-<docker-container-id>.scope/memory.limit_in_bytes
```
If it shows a large number like 9223372036854771712, that means no memory limit was set, and the container can use as much memory as the system allows.
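Both numbers are easy to sanity-check with shell arithmetic: 6MB is 6 × 1024 × 1024 bytes, and the big cgroup v1 default is the largest 64-bit signed integer rounded down to a 4096-byte page boundary, i.e. effectively “no limit”:

```shell
# 6 MB in bytes -- the value the -m 6MB flag will produce
echo $((6 * 1024 * 1024))                       # 6291456
# cgroup v1's "unlimited" default: LLONG_MAX rounded down to a full page
echo $(( (9223372036854775807 / 4096) * 4096 )) # 9223372036854771712
```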
Let’s now do what cgroups were made for — limiting memory! For this we can run the previous command with a -m flag to limit the memory usage:

```
docker run --detach -m 6MB --name nginx2 nginx
```
Now check out the new container that you created, with the cgroup limiting its memory to 6MB:

```
docker ps
# copy the new container-id
cat system.slice/docker-<docker-container-id>.scope/memory.max
```

Expected output:

```
6291456
```
And voila 🎉 — there it is. That’s 6MB in bytes, exactly the memory limit we set with the -m flag.
We can also set the limit directly on our custom cgroup, so that any process we place in it is capped:

```
cd /sys/fs/cgroup/chad
echo 6291456 > memory.max
```

Every process placed in this cgroup is now subject to the 6291456-byte memory limit.
Now, to add a specific process (by PID) to our custom cgroup:

```
docker container top nginx2 # copy the PID of the container
echo 38437 > /sys/fs/cgroup/chad/cgroup.procs # <PID> -> 38437 for me
```

This means: "move the process with PID 38437 into this memory cgroup."
Now we can check whether the process was added to our custom cgroup:

```
cat /proc/<PID>/cgroup
```

On a cgroup v2 system this prints a single line such as 0::/chad, confirming the process now lives in our cgroup (on v1 you could filter the longer output with | grep memory).
Bonus Exploration: Play with Namespaces Using unshare (optional)
Want to try isolating the hostname? The --uts flag tells unshare to create a new UTS namespace, which isolates the hostname and the domain name:

```
sudo unshare --uts sh
hostname            # Check original hostname
hostname experiment # Set a new one
hostname            # Output will now show: experiment
```

This doesn’t persist system-wide — it only changes the hostname within that namespace.
unshare is a Linux utility that lets you create a new namespace (or several at once) for a process. Namespaces isolate things like processes, network, mount points, etc.

```
sudo unshare --pid --fork sh
```

This command spawns a new shell inside an isolated PID namespace.
What Happens:
You're dropped into a new shell inside a new PID namespace.
Run ps or top inside that shell — you’ll see that your shell is PID 1, just like init/systemd is in your normal environment. (Add --mount-proc to the unshare command so that ps, which reads /proc, reflects the new namespace rather than the host's.)
Any process you spawn inside this shell will have its own PID space — isolated from the host.
Use Case:
This is often used to:
Simulate containers or lightweight sandboxing.
Test init systems or daemons in isolated environments.
Understand how container runtimes like Docker isolate processes.
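To watch all of this from the outside, util-linux ships an lsns command that lists every namespace visible to you, along with the processes inside each — a handy way to spot what a container runtime has created (guarded here in case the tool is missing):

```shell
# List visible namespaces: type, inode, number of processes, and owning PID.
# Run as root to also see namespaces belonging to other users' containers.
lsns 2>/dev/null || echo "lsns not available"
```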
And that’s a wrap! 🎁
I hope this blog gave you a clearer look into what actually powers containers under the hood — from namespaces and cgroups to memory limits and PID isolation. These core Linux concepts are not just theory — they’re what make containers so powerful in the real world.
Thanks for reading! If you found this helpful, share it with your dev friends, drop a comment, or reach out — I’d love to hear your thoughts.
Until next time — stay curious, keep hacking, and may your containers always run light and fast. 🐳✨
Written by

Yash Pal
Hey I am a budding developer who is passionate to learn tech and explore the domains Computer Science has to offer. I also like to contribute to Open Source, help others and also learn from seasoned engineers in the process.