Understanding Containers and Docker: A Comprehensive Guide for Beginners
In today's tech-focused world, we need software that can grow, move, and adapt easily. Containers and Docker, the best-known container platform, are key tools that make this possible by simplifying how we build and launch applications in different settings. In this blog, we'll look at the basics of containers and Docker, its architecture, how to install Docker, and some basic commands. This blog is aimed at complete beginners and can serve as a starting point; advanced topics are not covered.
What Are Containers?
Containers are lightweight, self-contained packages that include everything needed to run software. Traditional virtual machines (VMs) need a full operating system for each instance, plus all the necessary services and applications; containers simplify this. Managing environments with thousands of servers, and installing the required software and services on each one, can be exhausting. Containers solve this by bundling all required services and applications into a deployable unit. They use the host system’s OS kernel, allowing them to run like any other software in a separate environment. Because containers run in isolation, you can run different versions of the same application in separate containers on the same server. To run an application, you set up the needed environment in a container, share it, and simply run it on the required servers. This is very efficient: containers share resources with the host OS, are easy to share, use less memory, and are therefore fast.
Key Features of Containers:
Isolation: Containers package the application and its dependencies, preventing conflicts and ensuring consistency across different environments.
Efficiency: Containers use the host OS kernel, eliminating the need for separate operating systems for each application.
Portability: Containerized applications can be deployed on various platforms without changes, as containers run consistently across diverse environments.
Why Use Containers?
Containers solve several common software deployment issues:
Dependency Conflicts: Traditional deployment methods can lead to version conflicts if an application’s dependencies differ across environments. Containers package these dependencies, ensuring compatibility wherever the application is deployed.
Environment Parity: A containerized application behaves the same way on a developer’s laptop as on a production server, reducing the “works on my machine” problem.
Introducing Docker: The Containerization Platform
Docker is the most popular platform for building, deploying, and managing containers. Launched in 2013, Docker has made containerization widely used and accessible for developers and companies of all sizes. Docker offers tools to create, run, and manage containers, and its ecosystem includes Docker Hub (a public registry for Docker images) and Docker Swarm (a tool for orchestrating containers).
Key Docker Concepts
Docker Images: A Docker image is a lightweight, stand-alone, and executable software package that includes everything needed to run a particular program or service. It’s a snapshot of an application and its dependencies. Once created, images can be stored in a repository and reused to create multiple containers.
Docker Containers: When you run a Docker image, it becomes a container – an instance of the image running as a lightweight, isolated environment. Multiple containers can be created from the same image, and each will operate independently.
Dockerfile: This is a text file containing instructions for building a Docker image. The Dockerfile specifies the OS, application code, dependencies, and configurations.
Docker Hub: A cloud-based repository where users can share and store Docker images. It has thousands of images for various software packages, which can be reused by developers.
Docker Compose: A tool that allows you to define and manage multi-container applications. It’s ideal for complex applications requiring multiple interconnected services, like databases, web servers, and application services.
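To make the Dockerfile concept concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python web app. The base image, file names, port, and start command are illustrative assumptions, not from any specific project:

```dockerfile
# Start from a small official Python base image (illustrative choice)
FROM python:3.12-slim

# Set the working directory inside the image
WORKDIR /app

# Copy and install dependencies first, so this layer is cached
# when only the application code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 8000
CMD ["python", "app.py"]
```

Each instruction adds a layer to the image; ordering the dependency install before the code copy means rebuilds are fast when only your code changes.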
Benefits of Using Docker
Consistency Across Environments: Docker ensures that an application behaves the same way across development, testing, and production environments.
Reduced Overhead: By using containers rather than full VMs, Docker significantly reduces system resource usage.
Accelerated Deployment: Docker containers start almost instantly, making it easy to scale applications quickly.
Enhanced Collaboration: Docker images can be easily shared, allowing teams to collaborate seamlessly by using the same environment configurations.
Docker vs. Virtual Machines: What’s the Difference?
| Feature | Containers (Docker) | Virtual Machines |
| --- | --- | --- |
| Isolation | Process-level | OS-level |
| OS Dependency | Shares host OS kernel | Full OS for each VM |
| Resource Efficiency | Higher (uses less memory and CPU) | Lower (requires more resources) |
| Startup Speed | Fast (seconds) | Slow (minutes) |
| Image Size | Small | Large (full OS included) |
How Does Docker Work?
Docker leverages the OS kernel’s features to isolate applications from each other. Here’s a high-level overview of Docker’s workflow:
Build a Docker Image: Using a Dockerfile, developers define the dependencies, configurations, and code for an application. Docker then compiles this into an image.
Push the Image to a Repository: The image can be stored locally or pushed to Docker Hub (or a private repository), making it accessible for deployment on other machines.
Run Containers: From an image, you can create one or more containers, each running independently. Containers can be launched, stopped, and scaled across environments, such as testing, staging, and production.
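The three workflow steps above map directly onto three commands. This is a sketch: the image name `myuser/myapp` is a placeholder, and the commands assume a running Docker daemon and a Docker Hub login:

```shell
# 1. Build an image from the Dockerfile in the current directory,
#    tagging it with a repository name and version
docker build -t myuser/myapp:1.0 .

# 2. Push the tagged image to Docker Hub (run `docker login` first)
docker push myuser/myapp:1.0

# 3. Run a container from the image on any machine that can pull it;
#    -d runs it in the background, -p maps host port 8000 to the container
docker run -d -p 8000:8000 myuser/myapp:1.0
```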
Docker Architecture
Docker has revolutionized the way we deploy and manage applications by leveraging container technology. Understanding Docker's architecture helps grasp how Docker efficiently creates, manages, and deploys containers. Below is a breakdown of Docker's core components and how they interact.
1. Core Components of Docker Architecture
Docker's architecture revolves around several key components:
1.1 Docker Client
The Docker Client (`docker`) is the primary user interface for Docker. It is a command-line tool that allows users to interact with the Docker daemon.
Commands like `docker run`, `docker build`, and `docker pull` are issued via the Docker Client. The client sends these commands to the Docker daemon through a REST API, which processes and executes them.
1.2 Docker Daemon (`dockerd`)
The Docker Daemon (`dockerd`) is the background service running on the host machine that handles all Docker operations. It listens for requests from the Docker Client and manages Docker objects like containers, images, networks, and volumes.
The Docker Daemon communicates with other Docker daemons to manage Docker containers across a cluster.
1.3 Docker Images
Docker images are the read-only templates used to create Docker containers.
An image contains everything needed to run an application: code, dependencies, libraries, environment variables, and configuration files.
Images are built from Dockerfiles, which are simple scripts defining the image’s contents and configuration.
Each image consists of a series of layers, where each layer represents a change (like adding a file or installing software).
1.4 Docker Containers
Containers are lightweight, executable instances of Docker images.
A container includes the application and all its dependencies, isolated from the rest of the system, but shares the OS kernel.
Containers are portable and consistent, running the same way across different environments.
They are ephemeral by design—containers can be started, stopped, deleted, or rebuilt without affecting the host system.
1.5 Docker Registry
A Docker Registry is a repository for Docker images.
The default Docker Registry is Docker Hub, a public registry provided by Docker Inc.
Registries can be public or private, allowing organizations to store and manage their images securely.
Users can pull images from a registry to run containers or push images to share them with others.
2. How Docker Works: Key Concepts
The core components interact seamlessly to allow Docker to create, manage, and run containers. Here's a detailed look at the Docker workflow:
2.1 Dockerfile and Image Creation
A Dockerfile is a script that defines a series of instructions to create a Docker image.
Docker reads the Dockerfile to build the image layer by layer, starting from a base image.
Each command in the Dockerfile adds a new layer to the image, forming a layered file system.
Images are cached locally for faster reuse—Docker only updates layers that have changed, making builds efficient.
2.2 Container Lifecycle Management
Docker uses images to create containers. Containers have a defined lifecycle:
Create: A container is created from an image.
Start: The container is started, running the application.
Stop: The container is stopped and its main process shut down (pausing is a separate operation).
Remove: The container is deleted when no longer needed.
Containers are isolated environments that share the host’s kernel but have their own filesystem, network, and process space.
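The lifecycle stages above each correspond to a command. As a sketch (using `nginx` purely as an example image, and assuming a running Docker daemon):

```shell
docker create --name web nginx   # Create: the container exists but is not running
docker start web                 # Start: the container runs the application
docker stop web                  # Stop: the main process is asked to shut down
docker rm web                    # Remove: the stopped container is deleted
```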
2.3 Networking in Docker
Docker provides several networking options to connect containers:
Bridge Network: The default network type. Containers on the same bridge network can communicate with each other.
Host Network: Containers share the host’s network stack directly.
Overlay Network: Used in Docker Swarm or Kubernetes to connect containers across multiple hosts.
None Network: No network interface is provided to the container.
Docker assigns each container a unique IP address, and users can also define custom networks for complex setups.
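As a sketch of how a custom network works in practice (the container names, the network name, and the `myapp` image are made-up examples; a Docker daemon is assumed):

```shell
# Create a user-defined bridge network
docker network create my-net

# Attach two containers to it; on a user-defined bridge they can
# reach each other by container name (e.g. "db") via Docker's built-in DNS
docker run -d --name db --network my-net mongo
docker run -d --name app --network my-net myapp

# Inspect the network to see connected containers and their IP addresses
docker network inspect my-net
```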
2.4 Docker Volumes and Storage
Docker containers are ephemeral; by default, data is lost when a container is deleted.
Volumes provide a way to persist data outside of the container’s lifecycle.
Volumes can be managed by Docker or mounted from the host system.
They are independent of containers, allowing data sharing between multiple containers.
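To illustrate both styles of persistent storage (the names here are examples; a Docker daemon is assumed):

```shell
# Create a named volume managed by Docker
docker volume create app-data

# Mount it into a container; data written to /data/db survives
# container removal and can be mounted into other containers later
docker run -d --name db -v app-data:/data/db mongo

# Alternatively, bind-mount a host directory instead of a named volume
docker run -d --name db2 -v /srv/mongo:/data/db mongo
```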
Getting Started with Docker
First and foremost, we must install Docker. Docker can be installed on Windows, macOS, and various distros of Linux. On Mac and Windows, the standard way to use Docker is to install Docker Desktop, which bundles the Docker CLI (command-line interface for Docker) with a GUI. (An older alternative, Docker Toolbox, ran Docker inside an Oracle VirtualBox VM, but it has been deprecated in favor of Docker Desktop.)
How to Install Docker on Windows
To install Docker on Windows there are two things to set up:
Docker Desktop: Install Docker Desktop on Windows 10 or above. Download the installer .exe file from Docker's website and run it.
Before installing Docker Desktop, check out the prerequisites.
Enable WSL: Docker runs faster using WSL (Windows Subsystem for Linux), which provides a Linux environment on Windows. Docker uses the machine's OS kernel to run, so Docker images built for the Windows kernel cannot run on Linux or macOS, and vice versa. To run Linux containers on your Windows machine, use the WSL 2 backend.
Step 1: Open PowerShell as Administrator and run:
wsl --install
Step 2: Set WSL 2 as the default version:
wsl --set-default-version 2
If you don’t have WSL 2, you can also enable it manually by following the instructions here.
After installation, launch Docker Desktop from the Start menu or the desktop shortcut.
If you’ve enabled WSL 2, Docker will prompt you to use it as the default backend. You can choose to switch now or keep using the default Hyper-V backend.
How to install Docker on macOS
1. Download Docker Desktop for Mac
Visit the Docker Desktop download page and download the Docker Desktop for Mac installer.
Click "Download Docker Desktop for Mac" to get the `Docker.dmg` file. Detailed steps are on the download page.
2. Initial Setup
When you first launch Docker Desktop, it may ask for administrative privileges to install its components. Enter your system password if prompted.
Docker will run a quick setup, and the Docker icon will appear in the top menu bar.
Wait for Docker to complete the installation process. You'll see a whale icon in the menu bar, indicating that Docker is running.
How to Install Docker on Linux
Docker installation on Linux varies slightly depending on the distribution you're using, but the overall process is quite similar, so here is a general set of steps that applies to most Linux distributions. For detailed, distro-specific steps, visit Docker’s official documentation.
Update Your System:
Make sure your Linux system is updated with the latest package versions.
Look for system updates or upgrades in your package manager and apply them.
Install Prerequisites:
You’ll need to install a few basic packages that allow your package manager to connect to online repositories securely (like `curl` or `wget`).
Locate and install the packages required for managing HTTPS-based repositories using your package manager (e.g., `apt`, `yum`, `dnf`, `pacman`).
Add Docker’s Official GPG Key:
Import Docker's security key to ensure that the Docker packages are verified before installation.
Use your package manager’s tools to import a public GPG key from Docker’s official website.
Add Docker Repository:
Add Docker’s official repository to your system’s package manager configuration to ensure you get the latest Docker versions.
Find the correct Docker repository URL for your distribution from Docker’s website and add it to your package manager's configuration files.
Update the Package List:
Refresh your package manager’s package list so that it recognizes the Docker packages available in the newly added repository.
Install Docker:
Use your package manager to install Docker Engine and its components (Docker CLI, Docker Compose, etc.).
Start the Docker Service:
Enable and start the Docker service so that it runs automatically each time your system boots.
Verify the Installation:
Run a simple Docker command (like checking the version) to make sure Docker is installed correctly.
Optionally, test by pulling and running a basic container (like `hello-world`) to confirm Docker is working.
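As one concrete instance of the general steps above, here is roughly what they look like on Debian/Ubuntu with `apt`, following Docker's documented apt-repository setup. This is a sketch; check Docker's official documentation for your exact release before running it:

```shell
# Update packages and install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the Docker repository for this Ubuntu release
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update the package list and install Docker Engine and its components
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# Enable and start the service, then verify the installation
sudo systemctl enable --now docker
sudo docker --version
sudo docker run hello-world
```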
Hopefully you now have Docker installed. The next step is working with it.
Let’s begin with the most basic program in the programming world: “hello world”. Unlike in other environments, here we are pulling a hello-world Docker image from Docker Hub.
Open your terminal and give the following command
docker run hello-world
If your user doesn’t have permission to talk to the Docker daemon, either add your user to the docker group or prefix the command with sudo.
Press enter and you will have a series of lines as output
docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:57ca85dc58caabe6419a7b5f249b61f5b6c62e4f78e5ad878db5f00e2907e364
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Here's a step-by-step breakdown of what happened:
Unable to find image 'hello-world:latest' locally: Docker checks whether the `hello-world` image is on your system. If it's not, Docker goes online to get it.
latest: Pulling from library/hello-world: Docker starts downloading the image from Docker Hub.
2db29710123e: Pull complete: Docker has downloaded the image layer successfully. For `hello-world`, there's usually only one small layer.
Digest: sha256:...: This is a checksum to confirm the image is correct and unchanged.
Status: Downloaded newer image for hello-world:latest: The image download is done, and Docker is ready to use it. If you already had the image, this step would be skipped.
Hello from Docker!: The container runs and shows a message to confirm that Docker is working properly. It also gives a summary of the steps Docker took.
Now for some interesting facts:
Docker pulls images from Docker Hub if the image is not available locally.
Docker Hub is Docker's repository.
Each Docker image consists of different layers. Before pulling each layer, Docker checks whether a copy of that layer already exists on the local system, and pulls it only if it is unavailable. Try pulling different versions of the same image to see this.
Visit Docker Hub to explore various images.
Some basic Docker commands
docker pull <image_name>
docker pull mongo
This pulls the specified image from Docker Hub. Here it pulls the image for MongoDB.
docker images
docker images
This lists out all the locally available images
Remove image
docker rmi <image_name>
This removes the image from the local environment
Run a container
docker run <image_name>
This runs the image and spins up a container. If the image isn’t available locally, Docker pulls it from Docker Hub first, so this command also covers the function of docker pull.
Running container in detached mode
docker run -d <image_name>
This runs the container in the background and you can keep using the terminal for other purposes.
Each container is associated with a container_id, we’ll use it in the coming commands.
List containers
docker ps
This lists all running containers. You’ll get the container_id here.
List all containers
docker ps -a
This lists all containers including stopped ones.
Start a container
docker start <container_id>
Stop a container
docker stop <container_id>
Open a shell inside a container (often loosely called “SSHing in”)
docker exec -it <container_id> /bin/bash
This drops you into an interactive bash shell inside the container, where you can run commands. If the image doesn’t include bash, substitute another shell such as /bin/sh.
You can return control to your system using the exit command.
Checking container logs
docker logs <container_id>
This gives you the complete logs of the container, so you can check what is happening inside it, its status, and so on.
These are some basic commands to get you started with Docker. There are more advanced topics, like networking commands, Docker Compose, and volumes, but we’ll get into those in my coming blogs.
Docker in Production
While Docker is great for development, it also shines in production. Paired with orchestration tools like Kubernetes or Docker Swarm, Docker containers enable automated scaling, load balancing, and fault tolerance. Docker’s flexibility means you can scale your application with ease and confidence, knowing that each instance will be consistent and reliable.
Conclusion
Containers and Docker have transformed how software is developed, tested, and deployed. By providing lightweight, consistent, and isolated environments, Docker empowers developers to build scalable, flexible applications that are platform-independent. Whether you’re a developer, DevOps engineer, or system administrator, understanding Docker is essential to modern software development. Give it a try, and experience the difference Docker can make in your development workflow. We’ll learn more about Docker in my coming blogs. Stay tuned!!!
Let's Connect!
LinkedIn: www.linkedin.com/in/midhun-manoj-60384a190
Email: midhunmanoj6@gmail.com
Instagram: midhunmaniac100