Docker - A Gentle Introduction

Mishal Alexander

Problem Statement

One of the longstanding challenges in software development is maintaining a consistent development environment across different systems and team members. Even minor differences—such as using a slightly newer version of a library—can lead to bugs or unexpected behavior. Setting up development tools and libraries that work uniformly across different operating systems used to be a daunting task. Tech companies need to ensure that tools used by the team are compatible across Windows, macOS, and Linux.

This issue of inconsistent environments is precisely what Docker addresses and solves. Docker enables developers to create consistent, scalable, shareable, lightweight, and disposable environments using containers. Today, the terms "Docker" and "container" are often used interchangeably, but is it right to do so? In this article, we’ll take a gentle, beginner-friendly look at Docker, understand how it works, and learn to use it effectively.

Installing Docker Desktop

To use Docker on your local machine, you need the Docker CLI—a command-line tool you can access using the docker command (just like how Node.js can be accessed using node). While the CLI is essential, Docker Desktop provides a graphical user interface that makes managing containers easier.

To install Docker:

  1. Download Docker Desktop from the official Docker website.

  2. Follow the installation steps—usually a few simple clicks.

  3. Verify the installation by running the following command in your terminal: docker -v

Once that prints a version number, the Docker CLI is set up successfully on your system, and you can open Docker Desktop. You'll see four main sections:

  1. Containers: Displays running and stopped containers.

  2. Images: Shows container images available locally.

  3. Volumes: Manages data used by containers.

  4. Builds: Shows the history and status of image builds.

Running Your First Docker Container

With our limited knowledge so far, let’s go ahead and run a Docker container using the following command:

docker run -it ubuntu

What does this command mean?

  • docker: Invokes the Docker CLI.

  • run: Tells Docker to create and start a container.

  • -it: Runs the container in interactive mode with a terminal session.

  • ubuntu: Specifies the image to use (Ubuntu, in this case).

If the Ubuntu image isn't found locally, Docker will pull it from Docker Hub - an online repository for container images. Once the image is pulled, Docker creates the container and gives you access to its terminal (you'll see a shell prompt like root@<container-id>).
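
For instance, a first session might look roughly like this (the container id in the prompt will differ on your machine):

docker run -it ubuntu   # pulls the image on first run, then starts the container
# you now get a root shell inside the container, e.g. root@a1b2c3d4e5f6:/#
cat /etc/os-release     # confirms you are inside Ubuntu
exit                    # leaves (and stops) the container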

Understanding Images & Containers

So, how is an image different from a container?

A Docker image is a read-only template that includes application code, dependencies, and environment settings.

A Docker container is a running instance of an image.

Think of it like this: an image is a blueprint, and a container is the building constructed from that blueprint. Containers are isolated from each other—each one is a self-contained environment.

For example, if you create a file in one container, that file won’t be visible in another container unless explicitly shared.
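
To see this isolation yourself, here is a small sketch (the container names c1 and c2 are arbitrary; the indented commands run inside the containers):

docker run -it --name c1 ubuntu   # first container
touch /tmp/hello.txt              # inside c1: create a file, then exit
docker run -it --name c2 ubuntu   # second container from the same image
ls /tmp                           # inside c2: hello.txt is not there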

Docker Daemon

The Docker Daemon (dockerd) is the core background service that manages Docker containers, images, volumes, and more. It listens for commands via the Docker API and processes them.

If you try to run Docker commands without the daemon running (e.g., by not launching Docker Desktop), you’ll get an error saying the daemon is not reachable. Just start Docker Desktop, and the daemon will be initialized automatically.
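
A quick way to check whether the daemon is reachable is docker info, which reports details about your Docker installation:

docker info   # if the daemon is not running, this fails with an error like
              # "Cannot connect to the Docker daemon ... Is the docker daemon running?"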

Docker Hub

Docker Hub is like GitHub, but for container images. It provides a vast library of public and trusted images, including databases, programming languages, frameworks, and more.

You can:

  • Pull official images (e.g., node, mysql, nginx).

  • Create your own custom images and push them to Docker Hub.

  • Share private or public repositories with your team or the world.

Note: Always prefer images labeled as "Verified Publisher" or from the “Trusted Content” section to avoid potential security issues.
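
For example, pulling a couple of official images might look like this (the node:18 tag is just an illustration; pick whatever version you need):

docker pull nginx     # download the official nginx image
docker pull node:18   # a specific version can be requested via a tag
docker images         # both should now appear in your local image list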

Essential Docker Commands

Let’s learn some essential Docker commands:

  • docker container ls: Lists currently running containers.

  • docker container ls -a: Lists all containers (running and stopped).

  • docker start <container name>: Starts a stopped container.

  • docker stop <container name>: Stops a running container.

  • docker exec <container name> <sh command to run>: Executes a shell command inside a running container.

  • docker exec -it <container name> bash: Launches an interactive bash session inside the container. Press Ctrl + D to exit.

  • docker images or docker image ls: Lists all local images.

There are more useful Docker commands, but let’s get introduced to them slowly.
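
As a rough sketch, a typical session with these commands might look like this (the container name my-ubuntu is arbitrary):

docker run -it --name my-ubuntu ubuntu   # create a named container, then exit it
docker container ls -a                   # my-ubuntu now shows up as "Exited"
docker start my-ubuntu                   # start it again (in the background)
docker exec my-ubuntu ls /               # run a one-off command inside it
docker exec -it my-ubuntu bash           # or open an interactive shell (Ctrl + D to exit)
docker stop my-ubuntu                    # stop it when done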

Port Mapping

Suppose you’re running a Node.js server that listens on port 8000 inside a container. Even though it works inside the container, it won’t be accessible from your browser unless you map the container’s port to a port on your host machine. This is referred to as port mapping.

To expose the container’s internal port to your machine:

docker run -it -p 8000:8000 <image_name>

This maps port 8000 inside the container to port 8000 on your local machine.

Examples:

  • -p 9000:8000 → Requests to localhost:9000 go to port 8000 inside the container.

  • To map multiple ports, chain -p flags one after the other: docker run -it -p 8000:8000 -p 9000:9000 <image_name>

Docker Layer Caching

When you rebuild an image from the same Dockerfile, you will notice that it runs much faster than the first time. Why?

Because Docker caches each instruction in the Dockerfile as a layer. If nothing has changed for a layer since the last build, Docker reuses the cached layer instead of rebuilding it, saving both time and disk space. We will discuss this in depth in another article.
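
You can observe this yourself with any Dockerfile: build the same image twice without changing anything and compare the timings (a sketch; my-image is a placeholder name):

docker build -t my-image .   # first build: every step runs
docker build -t my-image .   # rebuild: unchanged steps are served from the cache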

Containerizing A NodeJs Application

Let’s walk through containerizing a simple Express server.

  1. Create the NodeJS application which we want to containerize
    Let’s create a simple Express REST API server with NodeJS using the code below:

    [in an empty folder → run ‘npm init -y’ → add "type": "module" to the generated package.json (so the import syntax below works) → create a file ‘main.js’ → copy the following code there]

     import express from "express"; // run 'npm install express' first
     const app = express(); // create an app with express library
     const PORT = process.env.PORT || 8000; // set a port
    
     // create a route for 'http://localhost:8000/' which returns a JSON message
     app.get('/', (req,res) => {
         return res.json({
             message:"Hey! I am serving from container!"
         });
     });
     // initialize the app
     app.listen(PORT,() => console.log("Server started at port " + PORT))
    

    Test that the server is working by running it with ‘node main.js’.

    This is the app that we are going to containerize.
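
    If you have curl available, a quick sanity check might look like this (assuming the default port 8000):

     node main.js &               # start the server in the background
     curl http://localhost:8000/  # should print {"message":"Hey! I am serving from container!"}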

  2. Create the ‘Dockerfile’
    Next, we need to create a file called ‘Dockerfile’ with no extension. The ‘Dockerfile’ would look like this:

     # use ubuntu as the base image
     FROM ubuntu
    
     # install nodejs
     RUN apt-get update
     RUN apt-get install -y curl
     RUN curl -sL https://deb.nodesource.com/setup_18.x | bash -
     RUN apt-get upgrade -y
     RUN apt-get install -y nodejs
    
     # copy the server files from the current folder into the container
     COPY package.json package.json
     COPY package-lock.json package-lock.json
     COPY main.js main.js
    
     # install the node modules needed
     RUN npm install
    
     # when the image runs, run the main.js file
     ENTRYPOINT [ "node","main.js" ]
    
     # to build an image from this file, execute the following:
     # docker build -t <name for this image> <location of the Dockerfile ('.' for current directory)>
     # '-t' flag means tag as in giving a tag to the image (i.e. naming the image)
     # after that, refresh the 'images' section in docker desktop
    

    Here, there are some keywords to keep in mind:

    • FROM - Specifies the base image.

    • RUN - Executes commands while the image is being built.

    • COPY - Adds files from your machine into the image.

    • ENTRYPOINT - Defines the command to run when the container starts.

The image needs a base image to build the application on top of it. Why? Because our Node application needs an OS environment to run in. We can use Ubuntu for this purpose.

  3. Build the image from the ‘Dockerfile’
    Once the ‘Dockerfile’ is ready, run the following command in the terminal from the same folder in order to create the custom container image:

     docker build -t mishal-nodejs-server .
    

    It will take some time to create the image the first time, since the base image and the packages have to be downloaded.

    We can check that the image exists using the command - docker images

    Perfect!

  4. Create a container from this image
    So our image is ready. We can create a container out of it using the command -
    ‘docker run -it -p 8000:8000 mishal-nodejs-server’

    Now our container is running, and if we go to ‘http://localhost:8000/’, we can see the response sent from our container.

  5. Interacting with this container
    We can interact with this container using the command -
    docker exec -it <container id> bash
    For example, after opening a shell with the container’s id, running the ‘ls’ command lists the files inside the container, as sketched below.
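
    A rough sketch of such a session (use the id or name shown by docker container ls):

     docker container ls                   # find the running container's id
     docker exec -it <container id> bash   # open a shell inside it
     ls                                    # main.js, package.json, node_modules, ...
     exit                                  # leave the shell; the container keeps running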

  6. Passing environment variables into the container
    We can pass environment variables using the ‘-e’ flag followed by the variable in the format ‘key=value’.
    For example - we have an environment variable called PORT in our application. In order to change it from 8000 to 3000, we can run a command like this:

     docker run -it -p 3000:3000 -e PORT=3000 mishal-nodejs-server

    and we can find it to be working by going to ‘localhost:3000’.

With this, we have successfully containerized the NodeJS application by building a container image out of it. The setup still needs a lot of optimization, but we will cover that in a different article to keep this one simple and compact.

Pushing Images to Docker Hub

We can push and publish our container images (as public or private) to container registries like Docker Hub, much like how we push code to GitHub. First, we need to create an account on Docker Hub (don’t worry, it’s free). Then log in to that account from your local machine and create a repository to store the image.

Then, we need to rebuild (or re-tag) our image following the naming convention ‘<namespace>/<repository name>’ on our local machine.

Once it is built, we can push it to the repository from our local machine with the command -
‘docker push <image name>’
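
Putting it all together, the full sequence might look like this (replace <namespace> with your Docker Hub username; docker tag lets us rename the image we built earlier without rebuilding it):

docker login                                                       # authenticate with Docker Hub
docker tag mishal-nodejs-server <namespace>/mishal-nodejs-server   # re-tag the existing image
docker push <namespace>/mishal-nodejs-server                       # upload it to the repository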

When done, we can refresh our Docker Hub repository page and see the image there.

Now, anyone can pull this image and create containers out of it to run our application.

Docker Compose

What if we need to spin up and work with multiple containers at once? In that case, we can use a ‘docker-compose.yml’ file to declare our container requirements, and all of them will be loaded together. Let’s see how:

First, we need to create a file named “docker-compose.yml” inside our code base. This is where we declare the containers that we require. All required containers are listed under the ‘services’ section, one after the other. For each container, we mention the image from which it should be created, set up port mapping, and pass environment variables, if any. Refer to the file below to see an example:

# refers to the containers that we need
services: 
  # any name can be given to the service
  postgres:
    # mention the image from which this container needs to be built
    image: postgres
    # port mapping is applied here
    ports:
      - "5432:5432"
    # environment variables are passed to container here
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: review
      POSTGRES_PASSWORD: password
  # the above steps can be repeated for other containers that we want
  redis: 
    image: redis
    ports:
      - "6379:6379"

Once the file is ready, save it, head to the terminal, and run the following command (make sure the Docker daemon is running in the background) -

‘docker compose up’

We can see in the terminal the containers being created as declared. We can also head to Docker Desktop to see them up and running. If we want to stop the containers, either press Ctrl + C or run the command - “docker compose down”
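
A couple of related subcommands that are handy here:

docker compose ps     # list the containers managed by this compose file
docker compose logs   # view their combined logs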

We can add the ‘-d’ (detached) flag to commands like ‘docker run’ and ‘docker compose up’ to run the containers in the background. For example -

‘docker compose up -d’

Conclusion

This article introduced Docker in a beginner-friendly way, explaining what problems it solves, what containers and images are, and how to use essential Docker commands. We containerized a Node.js application, created a custom image, and pushed it to Docker Hub. We also explored Docker Compose for managing multiple containers.

There’s a lot more to Docker—such as volume mounting, multi-stage builds, and CI/CD integration—but we’ll explore those in future articles.


