Understanding Containers: From Docker to runc - A Complete Deep Dive


Containers have revolutionized how we build, ship, and run applications. But what exactly happens when you run docker run nginx? Let's journey from the user-friendly Docker interface down to the low-level container runtime runc, understanding every layer in between.
What Are Containers? (In Simple Terms)
Think of containers like shipping containers for your applications:
Traditional Way: Your app runs directly on a server (like carrying loose cargo)
Container Way: Your app runs inside a standardized "box" that includes everything it needs (like a shipping container with all contents packed)
Key Benefits:
Portability: Runs the same everywhere (your laptop, test server, production)
Isolation: One container can't mess with another
Efficiency: Lighter than virtual machines
Consistency: "It works on my machine" problems disappear
Docker Architecture Deep Dive
Docker isn't just one program - it's a complete ecosystem with multiple components working together:
Docker Components Explained
1. Docker CLI (the docker command)
What you interact with
Sends commands to Docker daemon
Examples: docker run, docker build, docker ps
2. Docker Daemon (dockerd)
The "server" that does the actual work
Runs as a background service
Manages everything: images, containers, networks, volumes
3. containerd
High-level container runtime
Manages container lifecycle
Talks to runc to actually create containers
4. runc
Low-level container runtime
Actually creates and runs containers
Implements OCI (Open Container Initiative) specification
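Before going lower, it helps to see that these components are ordinary processes on your machine. A quick way to look, assuming a Linux host that already runs Docker (process names can vary slightly between versions):
# Show the Docker/containerd processes on the host
ps -e -o pid,ppid,cmd | grep -E 'dockerd|containerd' | grep -v grep
# Start a container, then look at the shim that containerd spawned for it
docker run -d --name demo nginx:alpine
ps -ef | grep containerd-shim | grep -v grep
# runc itself exits after setting the container up, so you normally won't see it running
docker rm -f demo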
What Happens When You Run docker run nginx?
Let's trace the journey step by step:
1. You type: docker run nginx
2. Docker CLI → Docker Daemon
   "Hey dockerd, run nginx container"
3. Docker Daemon checks:
   - Is nginx image available locally?
   - If not, download from Docker Hub
4. Docker Daemon → containerd
   "Create container from nginx image"
5. containerd → runc
   "Start container with this configuration"
6. runc → Linux Kernel
   "Create namespaces, cgroups, start process"
7. Your nginx container is running!
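You can also skip Docker entirely and talk to containerd with its bundled ctr client, which makes the lower layers visible. A rough sketch, assuming containerd is installed and running (the image reference and the nginx-demo name are just examples):
# Pull an image straight into containerd's image store
sudo ctr images pull docker.io/library/nginx:alpine
# Create and start a container (containerd hands the actual setup to runc)
sudo ctr run -d docker.io/library/nginx:alpine nginx-demo
# List running tasks, then clean up
sudo ctr tasks ls
sudo ctr tasks kill nginx-demo
sudo ctr tasks delete nginx-demo
sudo ctr containers delete nginx-demo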
Container Runtimes Comparison
| Feature | Docker | Podman | nerdctl | containerd/ctr | runc |
|---|---|---|---|---|---|
| Daemon-based | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ❌ No |
| Learning Curve | Easy | Moderate | Easy | Hard | Very Hard |
| Rootless Support | Limited | ✅ Excellent | ✅ Good | ⚠️ Partial | ✅ With config |
| OCI Compliant | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Best For | Development | Security-focused | containerd users | System admins | Learning/Custom |
| Image Building | ✅ Built-in | ✅ Buildah | ✅ Built-in | ❌ External | ❌ No |
| Docker Compose | ✅ Native | ✅ podman-compose | ✅ Yes | ❌ No | ❌ No |
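Day to day, the higher-level tools look almost interchangeable; only the lowest layer changes shape. A quick side-by-side, assuming each tool is installed (the image and the "web" name are just examples):
# Docker (talks to dockerd, which talks to containerd)
docker run -d --name web nginx:alpine
# Podman (no daemon; can run rootless)
podman run -d --name web docker.io/library/nginx:alpine
# nerdctl (Docker-compatible CLI for containerd)
sudo nerdctl run -d --name web nginx:alpine
# ctr (containerd's low-level debug client: no --name flag, no port-publishing helpers)
sudo ctr run -d docker.io/library/nginx:alpine web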
Daemon vs No-Daemon: What's the Difference?
Daemon-Based (Docker, containerd)
Pros:
Centralized management
Rich API for tools
Easy networking between containers
Cons:
Single point of failure
Requires root privileges
Daemon must be running
Daemonless (Podman, runc)
Pros:
No single point of failure
Better security (rootless)
Each container is independent
Cons:
No centralized management
Complex networking setup
Less tooling integration
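You can feel this difference directly on a machine that has both tools installed. A small sketch (assuming Docker and Podman are both present; output varies by distro):
# Docker needs its daemon up before any command works
systemctl status docker --no-pager | head -n 3
docker run --rm alpine echo "hello from docker"
# Podman just forks and execs - no daemon, and it works as an ordinary user
podman run --rm docker.io/library/alpine echo "hello from rootless podman"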
Hands-On: Understanding Containers with runc
Let's build and run a container using only runc, the lowest-level tool. This will show you exactly what Docker does behind the scenes!
Prerequisites
# Install runc (Ubuntu/Debian)
sudo apt update
sudo apt install runc
# Install runc (CentOS/RHEL)
sudo yum install runc
# Verify installation
runc --version
Step 1: Create a Root Filesystem
# Create workspace
mkdir -p ~/container-lab/nginx-container/rootfs
cd ~/container-lab/nginx-container
# Method 1: Export from Docker (easier)
docker export $(docker create nginx:alpine) | tar -C rootfs -xvf -
# Method 2: Build from scratch (educational)
# mkdir -p rootfs/{bin,etc,lib,usr,var,tmp,dev,proc,sys}
# Copy nginx binary and dependencies manually
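After the export (Method 1) you should have a complete little Linux filesystem on disk. A quick sanity check; the paths reflect the nginx:alpine image and may differ for other images:
# The rootfs now looks like the / of a minimal Alpine system
ls rootfs
# bin  dev  etc  home  lib  ...  usr  var   (contents vary by image)
# The nginx binary the container will eventually run
ls -l rootfs/usr/sbin/nginx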
Step 2: Generate Container Configuration
# Generate default OCI spec
runc spec
# This creates config.json - the container blueprint
Let's understand what's in config.json. The default spec that runc spec generates starts the container with a plain shell ("args": ["sh"]); the version below has been trimmed and edited to launch nginx instead:
{
  "process": {
    "terminal": true,
    "user": {"uid": 0, "gid": 0},
    "args": ["nginx", "-g", "daemon off;"],
    "env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": false
  },
  "hostname": "nginx-container",
  "mounts": [
    {
      "destination": "/proc",
      "type": "proc",
      "source": "proc"
    }
  ],
  "linux": {
    "namespaces": [
      {"type": "pid"},
      {"type": "network"},
      {"type": "ipc"},
      {"type": "uts"},
      {"type": "mount"}
    ]
  }
}
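If you have jq installed (an extra tool, not something runc requires), it's an easy way to pull individual fields out of the spec and confirm what runc will actually do:
# Which command will PID 1 inside the container run?
jq '.process.args' config.json
# Which namespaces will runc create?
jq '[.linux.namespaces[].type]' config.json
# Where does the container's root filesystem come from?
jq '.root.path' config.json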
Step 3: Customize Configuration
runc itself has no concept of publishing ports - that convenience comes from higher layers like Docker, and raw networking is wired up by hand in Step 6. So rather than trying to "expose port 80" in config.json, let's keep things simple and run a small Python web server in place of nginx:
# Create a simple Python web server
cat > rootfs/server.py << 'EOF'
#!/usr/bin/env python3
import http.server
import socketserver
PORT = 8080
Handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print(f"Server running at port {PORT}")
    httpd.serve_forever()
EOF
chmod +x rootfs/server.py
# Update config.json so the container starts the server instead of nginx:
#   "args": ["python3", "/server.py"]
# Note: the nginx:alpine rootfs does not include Python, so for this variant
# export a Python-based image in Step 1 (e.g. python:3-alpine) or keep the nginx args.
Step 4: Run the Container
# Create unique container ID
CONTAINER_ID="my-web-server-$(date +%s)"
# Run the container (from the bundle directory - the one holding config.json and rootfs/)
sudo runc run $CONTAINER_ID
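With "terminal": true the container takes over your current shell, which is why the next step uses a second terminal. If you'd rather keep a single terminal, a possible variant (assuming you also set "terminal": false in config.json) is runc's detached mode:
# Requires "terminal": false in config.json, otherwise runc asks for a console socket
sudo runc run --detach $CONTAINER_ID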
Step 5: Inspect Running Container
In another terminal:
# List running containers
sudo runc list
# Get container state
sudo runc state $CONTAINER_ID
# List the processes running inside the container
sudo runc ps $CONTAINER_ID
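You can also jump into the running container, much like docker exec. A sketch, assuming the rootfs ships a shell (the Alpine-based images have one at /bin/sh):
# Open an interactive shell inside the container's namespaces
sudo runc exec -t $CONTAINER_ID /bin/sh
# Inside: ps shows only the container's processes, hostname shows the container's own name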
Step 6: Set Up Networking (Advanced)
# Create network namespace
sudo ip netns add container-ns
# Create veth pair
sudo ip link add veth0 type veth peer name veth1
# Move veth1 to container
sudo ip link set veth1 netns container-ns
# Configure host side
sudo ip addr add 192.168.100.1/24 dev veth0
sudo ip link set veth0 up
# Configure container side
sudo ip netns exec container-ns ip addr add 192.168.100.2/24 dev veth1
sudo ip netns exec container-ns ip link set veth1 up
sudo ip netns exec container-ns ip link set lo up
# Enable routing
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -j MASQUERADE
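Creating the namespace alone isn't enough: the container has to be told to join it. In the OCI spec that is done by giving the network namespace entry a path. A sketch of the missing pieces, assuming the container-ns created above and jq for the edit (ip netns keeps named namespaces under /run/netns/<name>):
# Allow the host to forward the container's traffic out
sudo sysctl -w net.ipv4.ip_forward=1
# Tell runc to join the prepared namespace instead of creating an empty one
jq '(.linux.namespaces[] | select(.type == "network")) += {"path": "/run/netns/container-ns"}' \
  config.json > config.json.tmp && mv config.json.tmp config.json
# Restart the container and it will come up with veth1 (192.168.100.2) inside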
Step 7: Clean Up
# Stop container
sudo runc kill $CONTAINER_ID SIGTERM
# Remove container
sudo runc delete $CONTAINER_ID
# Clean up networking
sudo ip netns delete container-ns
sudo ip link delete veth0
Understanding Linux Container Primitives
When you run a container, several Linux kernel features work together:
1. Namespaces (Isolation)
┌─────────────────────────────────────────────────────┐
│                     HOST SYSTEM                     │
│                                                     │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  │
│  │ Container 1 │  │ Container 2 │  │ Container 3 │  │
│  │             │  │             │  │             │  │
│  │ PID: 1-100  │  │ PID: 1-50   │  │ PID: 1-75   │  │
│  │ /app files  │  │ /web files  │  │ /db files   │  │
│  │ eth0 IP     │  │ eth0 IP     │  │ eth0 IP     │  │
│  └─────────────┘  └─────────────┘  └─────────────┘  │
└─────────────────────────────────────────────────────┘
Types of Namespaces:
PID: Each container sees its own process tree
NET: Each container has its own network stack
MNT: Each container has its own filesystem view
UTS: Each container can have its own hostname
IPC: Inter-process communication isolation
USER: User ID mapping
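You don't need a container runtime to play with namespaces; util-linux ships unshare, which creates them directly. A minimal sketch (run with sudo; the hostname is just an example):
# New PID, mount, and UTS namespaces, with a fresh /proc so ps only sees this tree
sudo unshare --pid --fork --mount --uts --mount-proc bash
# Inside the new namespaces:
hostname isolated-demo   # only changes the hostname in this UTS namespace
ps aux                   # shows just bash and ps, with bash as PID 1
exit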
2. Cgroups (Resource Control)
System Resources
├── CPU: 4 cores
│   ├── Container 1: 50% (2 cores max)
│   ├── Container 2: 30% (1.2 cores max)
│   └── Container 3: 20% (0.8 cores max)
├── Memory: 16GB
│   ├── Container 1: 8GB max
│   ├── Container 2: 4GB max
│   └── Container 3: 4GB max
└── Disk I/O
    ├── Container 1: 100 MB/s max
    ├── Container 2: 50 MB/s max
    └── Container 3: 25 MB/s max
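The cgroup hierarchy is exposed as plain files, so you can impose a limit by hand. A sketch for cgroup v2 (assumes a modern distro with cgroup2 mounted at /sys/fs/cgroup; the group name demo is arbitrary):
# Create a cgroup and cap its memory at 200 MB
sudo mkdir /sys/fs/cgroup/demo
echo 200M | sudo tee /sys/fs/cgroup/demo/memory.max
# Move the current shell into it - every child process now inherits the limit
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
# Inspect current usage
cat /sys/fs/cgroup/demo/memory.current
When Docker or runc applies a memory or CPU limit, it ultimately writes files like these, driven by the linux.resources section of config.json.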
3. Union File Systems
Container Image Layers:
┌─────────────────────────────────┐ ← Container Layer (Read/Write)
│ Your app changes                │
├─────────────────────────────────┤ ← App Layer (Read-Only)
│ nginx binary + config           │
├─────────────────────────────────┤ ← OS Package Layer (Read-Only)
│ curl, wget, other tools         │
├─────────────────────────────────┤ ← Base OS Layer (Read-Only)
│ Ubuntu 22.04 filesystem         │
└─────────────────────────────────┘
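Docker's default storage driver, overlay2, is built on the kernel's overlayfs, which you can drive by hand to see the copy-on-write behaviour. A rough sketch (the directory names are arbitrary):
# See the layers of a real image first
docker history nginx:alpine
# Now build a tiny union mount ourselves
mkdir -p lower upper work merged
echo "from the read-only layer" > lower/base.txt
sudo mount -t overlay overlay \
  -o lowerdir=lower,upperdir=upper,workdir=work merged
# Writing through the merged view copies the file up into upper/; lower/ stays untouched
echo "change" | sudo tee merged/base.txt
cat lower/base.txt upper/base.txt
sudo umount merged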
Why Learn These Low-Level Tools?
1. Debugging Superpowers
When containers break, you'll know exactly where to look:
Network issues? Check namespaces
Resource problems? Examine cgroups
File system errors? Understand layers
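A few commands that make those checks concrete on a live host (the <PID> below is a placeholder for a container's init process, which you'd look up first):
# List every namespace on the host and which process owns it
sudo lsns
# Enter a container's network namespace from the host
sudo nsenter -t <PID> -n ip addr
# See the cgroup hierarchy and live resource usage
systemd-cgls
systemd-cgtop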
2. Custom Solutions
Build your own:
Container runtimes
Orchestration tools
Security scanners
Development environments
3. Career Growth
Understanding the fundamentals makes you:
Better at troubleshooting
More confident in production
Able to optimize performance
Valuable for complex projects
Summary: The Container Hierarchy
Key Takeaways:
Docker is not containers - it's a user-friendly interface to container technology
Containers are just processes - with special isolation and resource controls
runc does the real work - creating namespaces and starting processes
Understanding the stack - helps you debug, optimize, and build better systems
Next Steps
Beginner: Master Docker basics, try Podman
Intermediate: Experiment with containerd and ctr
Advanced: Build custom runtimes, contribute to projects
Expert: Develop your own container orchestration tools
Happy containerizing! 🐳