An Introductory Guide To Docker & Containers
Hello everyone, in this blog I am going to give you an overview of Docker, containers, and their applications. But before jumping into the topic, it is important to know a concept called virtualization.
Virtualization
Suppose a software company has bought 5 servers, each with 100 GB of RAM and 100 CPU cores, for deployment and other use cases. But they are using only 50% of those resources. The rest goes to waste because the servers are underutilized, which is inefficient. To overcome this problem, we use the concept of virtualization.
What does virtualization do?
Consider that we have bought a server. Now we install a hypervisor on it. [A hypervisor is a tool that allows us to run multiple virtual machines (VMs) on our physical server, or bare metal.] With this we can logically partition our machine and have multiple VMs.
What is a virtual machine?
A virtual machine allows you to run multiple operating systems on a single physical machine, known as the host machine. Each virtual machine operates as an independent entity with its own virtualized hardware components, such as CPU, memory, storage, and network interfaces.
With the help of VMs you can use the server effectively, with one team working on VM1, a second team working on VM2, and so on. Overall, virtual machines provide a flexible and efficient way to manage and utilize computing resources in a variety of scenarios.
Containers
So now, what are containers and why were they created? A container is a lightweight, standalone package that contains everything your software needs to run: the code, runtime, system tools, libraries, and settings. It's a bit like a virtual machine, but much more efficient.
So you must be wondering: if they are so similar, then what exactly is the difference between them, and why were containers created?
Virtual Machines vs Containers
| VIRTUAL MACHINES | CONTAINERS |
| --- | --- |
| Each VM has its own guest operating system, so VM1 cannot access the resources of VM2 | Containers share the host OS kernel, which makes them lightweight |
| VMs are more secure than containers because there is complete isolation between VMs and information cannot be exchanged | Containers are less isolated; they can share information and resources with each other |
| They are slower, and although portable, some environment adjustments are needed because of the varying guest OSes | They are faster, more portable, and perform better than virtual machines |
| Used for legacy applications, varied OS requirements, and scenarios requiring stronger isolation | Usually used for microservices, DevOps, and continuous integration and deployment |
Why to use Containers?
Let's imagine this with a real-life example. Suppose you are leaving your current house and shifting somewhere else. You need to pack your clothes, utensils, gadgets, books, etc. There are two things you can do:
1] Just put everything in boxes and bags and leave. Later, unpack everything in your new house. Imagine how difficult it will be to arrange your stuff, because everything is jumbled and mixed up in the boxes and bags.
2] Make dedicated boxes with labels on them, put similar objects in one box, and unpack them later at your new house. How easy this will be now! You know which items are in which box, and you can arrange everything quickly.
Now let's relate this to software development.
1] Each time you want to run your application on a different computer or server, you need to set up the environment, install dependencies, and configure everything, just like arranging your furniture and unpacking your belongings in a new house.
2] You package your entire application, along with all the necessary software and settings, into a container. It's like packing your entire room into a labeled box. When you want to run your application on a different computer or server, you just "unpack" the container, and everything is ready to go. It works the same way everywhere because it's all neatly organized in the container.
In simple terms, containers are like neatly organized, labeled boxes for your software. They make it easy to move your applications around, ensuring they work the same way no matter where you run them. It's like having a portable, self-contained room for your software that you can set up anywhere without the hassle of redoing everything from scratch.
Docker
Now, what is Docker? In simple words, it is a platform that implements containerization. Docker is like a magic box for software. It helps developers package up their applications, along with everything those applications need to run, into a single container. This container can then easily be moved around and run on different machines without any worry about whether it will work.
Architecture of Docker
Docker Client
The Docker client is a command-line interface (CLI) that provides a way for users to interact with Docker. Here we execute commands, which are received by the Docker daemon. When these commands are executed, they create Docker containers and images, and can also push our images to a Docker registry.
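For example, a single command typed at the client kicks off the whole flow (hello-world is a tiny test image published by Docker):
$ docker run hello-world
# The client sends this request to the daemon, which pulls the hello-world
# image from Docker Hub (if it is not present) and runs it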
Docker Daemon
It is like the heart of Docker: if it stops functioning, the containers stop working too. The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
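A quick way to check that the daemon is up and reachable from the client:
$ docker info
# Asks the daemon for system-wide information (containers, images, storage
# driver, etc.); if dockerd is not running, this fails with a connection error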
Docker Registry
A Docker registry is a place where we store Docker images. It stores and distributes Docker images, allowing users to share and reuse them. Docker Hub is a public registry, but organizations can also set up private registries for more control over image distribution. We can pull any image from the registry and can also push our own images to it, so that other users can use them.
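As a sketch of the pull/push flow (registry.example.com and myteam are placeholders for your own registry and namespace):
$ docker pull nginx:latest
# Pull the official nginx image from Docker Hub, the default registry
$ docker tag nginx:latest registry.example.com/myteam/nginx:latest
# Re-tag it for a private registry
$ docker push registry.example.com/myteam/nginx:latest
# Push it there (assumes a prior docker login registry.example.com)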
Dockerfile
A Dockerfile is often called the "recipe" for our container. It is a file that contains a set of instructions to be followed to create a Docker image. Each instruction in a Dockerfile creates a layer in the image, and these layers are cached to optimize the build process.
Docker Image
An image is what you build from your Dockerfile. Images can be easily stored and shared. So if you want someone to try out your application, you can just send them the Docker image and they will be able to run the application in the exact same way, without setting up anything on their local machine.
To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
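You can inspect these layers yourself; myapp:1.0 below is just a placeholder for an image you have built:
$ docker history myapp:1.0
# Lists the image's layers, roughly one per Dockerfile instruction, along
# with the size each layer adds; on a rebuild, unchanged layers are served
# from the cache instead of being rebuilt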
Docker Engine Architecture
a) Server: the Docker daemon, called dockerd. It creates and manages Docker objects such as images, containers, and networks.
b) REST API: used to instruct the Docker daemon what to do.
c) Command-Line Interface (CLI): the client used to enter Docker commands.
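Just as a sketch (on a Linux host with Docker's default socket path), you can talk to this REST API directly with curl instead of going through the CLI:
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# Sends an HTTP request straight to dockerd over its Unix socket and
# returns the same information as docker ps, as raw JSON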
Basic Docker Commands
Now let's take a look at some basic Docker commands that one must know:
$ docker -v
# Tells you the version of docker
$ docker images
# We can see all the images present
$ docker ps
# Lists all running containers
$ docker ps -a
# Lists all containers, both running and stopped
$ docker inspect <container id>/<container name>
# Gives us information about that container
$ docker inspect <image id>/<image name>
# Gives us information about that image
$ docker network ls
# Gives us all the Docker networks present on a Docker host
$ docker pull <image name>
# Pulls the image on your system from Docker Hub
$ docker run <image name>
# Run the image (If not present on system then it will pull and run)
$ docker run -it <image name>
# Run the image in interactive mode
$ docker run -d <image name>
# Run the image in detached mode (in background)
$ docker start <container name>/<container id>
# Start the container
$ docker stop <container name>/<container id>
# Stop the container
$ docker rmi -f <image name>/<image id>
# Removes the image (the -f flag forces removal; without it, an image used by a container cannot be removed)
$ docker rm <container name>/<container id>
# Remove the container
$ docker logs <container id>
# Get the logs of the running container
$ docker history <image name>/<image id>
# Shows the layer history of that image
$ docker exec -it <container name> bash
# Execute and access bash inside that particular container
$ docker login
# Log in to a container registry server. If no server is specified, the default (Docker Hub) is used
$ docker logout
# Log out from a container registry server
$ docker run -d --name <container name> <image name>
# Runs a new container from the image with a name of our choice (an existing container can be renamed with docker rename)
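Putting a few of these commands together, here is a minimal end-to-end session (nginx is just a convenient example image, and web a name of our choosing):
$ docker pull nginx
# Pull the official nginx image from Docker Hub
$ docker run -d --name web -p 8080:80 nginx
# Run it in the background, mapping host port 8080 to container port 80
$ docker ps
# Confirm the container is running
$ docker logs web
# Check its output
$ docker exec -it web bash
# Open a shell inside it (type exit to leave)
$ docker stop web
$ docker rm web
# Stop and remove the container
$ docker rmi nginx
# Remove the image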
Dockerfile
A Dockerfile is a script used to create a Docker image. It is a simple text file that consists of instructions we use to build Docker images. Let us understand this with an example.
FROM ubuntu:latest
MAINTAINER aditya123@email.com
WORKDIR /app
COPY . /app
RUN apt-get update && apt-get install -y python3 python3-pip
EXPOSE 3000
ENV NAME World
CMD ["python3", "app.py"]
Let's understand the instructions:
FROM : Specifies the base image used to build the Dockerfile. Every Dockerfile must start with this instruction
MAINTAINER : Specifies the author's name and contact details (optional, and deprecated in favor of the LABEL instruction)
WORKDIR : Sets the working directory from which all the following instructions and commands will run
COPY : Copies files from the build context into the image during the build process
ADD : Similar to COPY, but with some additional features like supporting URLs, extracting archives, etc.
RUN : Executes commands in a new layer on top of the current image and commits the result
EXPOSE : Documents the port on which the application inside the container listens (the port still has to be published with -p at run time)
ENV : Sets environment variables that will be available to the processes running inside the container
CMD : Provides the default command and/or parameters to execute when the container starts
Once the Dockerfile is written, we build it using the following command:
$ docker build -t <desired name>:<desired tag> .
Here the -t flag tags the image with our desired name, and the . tells Docker to use the current directory as the build context (this is also where it looks for the Dockerfile). After the build completes, we can see our image using the docker images command.
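For example, assuming the Dockerfile above sits in the current directory (myapp, the 1.0 tag, and the container name are placeholder names):
$ docker build -t myapp:1.0 .
# Build the image from the Dockerfile in the current directory
$ docker run -d -p 3000:3000 --name myapp-container myapp:1.0
# Run it; note that EXPOSE only documents the port, the -p flag actually publishes it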
Now we can also push this image to Docker Hub so that other users can access it. For this we first need to create an account on Docker Hub, and then use the following command:
$ docker push <your username>/<image name>
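One detail worth noting: to push to Docker Hub, the image name must be prefixed with your username, so a typical push sequence looks like this (aditya123 is a placeholder username):
$ docker tag myapp:1.0 aditya123/myapp:1.0
# Re-tag the local image under your Docker Hub namespace
$ docker login
# Log in with your Docker Hub credentials
$ docker push aditya123/myapp:1.0
# Upload the image so others can pull it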
Now your image is pushed to Docker Hub and tadaaa!!! You have just created and shared your first image using Dockerfile instructions!
Uses of Docker
1] Continuous Integration/Continuous Deployment (CI/CD): Docker containers provide a consistent environment for building, testing, and deploying applications, which is essential for CI/CD pipelines
2] Container Orchestration: Docker Swarm and Kubernetes are popular tools for orchestrating and managing containerized applications, enabling automated deployment, scaling, and management of container clusters
3] Testing and QA: Docker simplifies the process of setting up and managing test environments, allowing QA teams to quickly spin up isolated environments for testing applications.
4] Development Environments: Developers can use Docker to create isolated development environments that closely mimic production, reducing the "it works on my machine" problem (see the sketch after this list).
And many more.........
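As a sketch of point 4, here is a throwaway development environment with your project mounted inside (python:3.12 is just one example base image):
$ docker run -it --rm -v "$(pwd)":/app -w /app python:3.12 bash
# Starts a disposable container with the current directory bind-mounted at
# /app; edits on the host are instantly visible inside, and --rm removes
# the container when you exit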
Conclusion
Thus, in this blog we have gone through the basics of Docker and how to use it. I hope it helps you get started with Docker. There are many advanced topics that are out of scope for this blog; if you want to learn about Docker in more detail, you can refer to the following books:
Docker Deep Dive: Zero to Docker in a Single Book (by Nigel Poulton)
Docker in Practice (by Ian Miell & Aidan Hobson Sayers)
Docker: Up & Running (by Karl Matthias & Sean Kane)
Credits
A huge Thank You to Kunal Kushwaha and Abhishek Veeramalla for continuously motivating me and sharing their valuable knowledge.
Please comment your views below and provide your suggestions if you have any
Thank you and Tech Care !!