Docker 101: Understanding Containers, Dockerfiles, Volumes, and More


In the world of modern software development and deployment, efficiency, speed, and scalability are crucial. Over the years, the way we deploy applications has evolved significantly, and one of the most important steps in that evolution is the shift toward containerization. Before containerization, we used virtualization, that is, virtual machines, to deploy applications. In this blog, we’ll walk through what virtualization is, how containerization improves on it, what Docker is, how it works under the hood, and some essential concepts and commands you’ll need to get started.
Virtualization
In simple words, virtualization is splitting physical resources into multiple logical resources. Virtualization uses a hypervisor, such as VMware or VirtualBox, to run multiple virtual machines (VMs) on a single physical machine. Each VM includes its own full-fledged operating system, necessary binaries, and libraries, essentially behaving like a separate physical computer.
Downsides of Virtualization
Virtual machines are heavyweight, consume a lot of system resources, and have relatively slow boot-up times.
Managing updates, networking, and storage across multiple VMs adds another layer of complexity.
Containerization emerged to solve these problems, and it makes the deployment process much smoother. Let’s understand how.
Containerization
A container is a lightweight, standalone, executable package that includes everything needed to run an application: code, libraries, dependencies, and configuration.
It shares the host OS kernel with other containers. Unlike VMs, containers don’t need a full operating system, making them faster, more efficient, and more portable. That doesn’t mean a container has no OS at all; it still contains the system files it needs to run efficiently.
The main benefits of containerization include:
Faster startup times
Lower resource usage
Better scalability
Improved portability across environments
So that’s containerization. Now let’s understand what Docker is and the crucial role it plays in containerization.
What is Docker?
Docker is an open-source platform that simplifies the process of building, shipping, and running containers. It provides a consistent environment for development, testing, and deployment, ensuring that your application behaves the same regardless of where it runs, whether on a server or your local system.
Docker uses containerization to abstract away this complexity: you focus on your application instead of wrestling with environment setup, which Docker handles for you.
How does Docker achieve all of this and manage containers so easily? It all happens with the help of the Docker daemon. Let’s look at the Docker architecture in detail.
Docker Architecture
Docker follows a client-server architecture and is made up of several key components:
Docker Client: The interface you interact with. When you run a command like docker run, it’s the client sending that request to the Docker daemon.
Docker Daemon (dockerd): The background service that manages containers, images, networks, and volumes. It listens to Docker API requests from the client.
Docker Images: Read-only templates used to create containers. An image can be based on another image, adding custom layers to it.
Docker Containers: These are the running instances of Docker images — lightweight, isolated environments that share the host kernel.
Docker Hub/Registry: A cloud-based repository where Docker images are stored and shared. You can pull existing images or push your custom ones.
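You can see this client-server split for yourself. As a quick sketch, the first command below prints a Client section (the CLI) and a Server section (the daemon); in the second, the host name is a made-up placeholder, and 2375 is Docker’s conventional unencrypted API port:
docker version # Shows both the Client (CLI) and Server (daemon) versions
docker -H tcp://remote-host:2375 ps # Point the client at a remote daemon instead of the local one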
Basic Docker Commands
Here are some commonly used Docker commands to get you started:
docker --version # Check Docker version
docker pull nginx # Download an image from Docker Hub
docker run -d -p 80:80 nginx # Run a container in detached mode
docker ps # List running containers
docker stop <container_id> # Stop a running container
docker rm <container_id> # Remove a stopped container
docker images # List all images
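Putting a few of these together, a minimal end-to-end session might look like this (the container name web is just an example):
docker pull nginx # Download the nginx image
docker run -d -p 80:80 --name web nginx # Start it in the background, publishing port 80
curl http://localhost:80 # Should return the nginx welcome page
docker stop web # Stop the container
docker rm web # Remove it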
Dockerfile
A Dockerfile is a text file that contains instructions for building a Docker image. It automates the creation of an image based on your application’s code and dependencies.
# Use base image
FROM node:18
# Set working directory
WORKDIR /app
# Copy files and install dependencies
COPY package*.json ./
RUN npm install
# Copy remaining files
COPY . .
# Expose the app port
EXPOSE 3000
# Start the application
CMD ["node", "server.js"]
Common Dockerfile Instructions:
FROM: Specifies the base image
WORKDIR: Sets the working directory inside the container
COPY: Copies files from the host to the container
RUN: Executes commands inside the container (usually used for installing dependencies)
EXPOSE: Informs Docker which port the container will use
CMD: Specifies the command to run when the container starts
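With a Dockerfile like the one above saved at your project root, building and running the image is a two-step process (the image name myapp is arbitrary):
docker build -t myapp . # Build an image from the Dockerfile in the current directory
docker run -d -p 3000:3000 myapp # Run it, publishing the exposed port to the host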
Docker Volumes
Containers are ephemeral by default, meaning any data written inside a container is lost once the container is removed. Docker volumes solve this by providing persistent storage that lives outside the container lifecycle. You can mount a volume into your container, ensuring data isn’t lost between restarts or updates.
docker run -v mydata:/app/data myapp
This command creates a volume named mydata and mounts it to the /app/data directory inside the container.
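Docker also has dedicated commands for managing volumes directly, for example:
docker volume ls # List volumes; mydata should appear here
docker volume inspect mydata # Show the volume's mount point and driver details
docker volume rm mydata # Delete the volume (fails if a container is still using it)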
Exposing Ports
By default, containers are isolated and not accessible from outside. To make your application accessible, you need to expose ports.
In Docker, the EXPOSE instruction in the Dockerfile is informational. To actually publish a port, use the -p flag:
docker run -p 8080:3000 myapp
This maps port 3000 inside the container to port 8080 on the host, making the application accessible at http://localhost:8080.
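Once the container is up, you can verify the mapping from the host. A quick check, reusing the myapp image from earlier (the container name is just for illustration):
docker run -d -p 8080:3000 --name myapp-test myapp # Publish container port 3000 on host port 8080
curl http://localhost:8080 # The app should answer here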
Advantages of Docker
Lightweight: Containers share the host OS kernel, so they use fewer resources than virtual machines.
Fast startup: Containers start in seconds, which is great for rapid development and scaling.
Version control: You can version control Docker images and roll back easily if something breaks.
Isolation: Each container runs independently, making it easier to debug, update, or scale services.
Disadvantages of Docker
Security concerns: Containers share the host OS kernel, so they aren’t as isolated as VMs.
Limited OS support: Docker containers can only run Linux-based applications natively (Windows support is improving but still has limitations).
Storage management: Managing persistent data in containers (especially across multiple containers) can be complex.
Networking complexity: Advanced networking between containers and external systems can be tricky to configure.
If you want to explore a project built with Docker, you can check out mine here: docker-project-blog github-docker-project
Conclusion
Docker has transformed the way we build, ship, and run applications. By moving from heavy virtual machines to lightweight containers, developers can enjoy faster deployments, greater scalability, and more consistent environments. Understanding Docker’s architecture, commands, and core concepts like Dockerfiles, volumes, and ports will help you build and deploy applications with confidence in today’s DevOps-driven world.