Day 19 of 90 Days of DevOps Challenge: Introduction to Docker

Imagine you're developing a web application. To run it, you need to install Angular, Java, MySQL, and Tomcat, not just on your local machine, but also across different environments, such as development, testing, and production. Every time you move your app to a new machine, you risk running into version mismatches, missing dependencies, or configuration errors.

This setup process is not only time-consuming but also error-prone. That's exactly why Docker was created: to simplify and standardize application deployment across all environments.

Understanding Application Architecture

Any modern application typically includes:

  • Frontend: for the user interface

  • Backend: for the business logic

  • Database: for storing and managing data

For example, an application might use Angular 16 for the frontend, Java 17 for the backend, MySQL Server 8.0 for the database, and Tomcat 9.0 as the web server.

To run this application, all of these dependencies must be installed and properly configured on each machine, which is a major hassle in real-world environments.

Real-Time Application Environments

In the software development lifecycle, an application is tested in multiple environments:

  • DEV (Development) – Developers test and integrate their code

  • SIT (System Integration Testing) – Testers validate end-to-end system behavior

  • UAT (User Acceptance Testing) – Clients verify functionality before launch

  • PILOT – Pre-production checks to mimic live conditions

  • PROD (Production) – Final live version used by end-users

Each environment must be consistent. Setting up and maintaining dependencies manually across them all can lead to mistakes, inconsistencies, and downtime.

What is Docker?

Docker is a free, open-source platform designed for containerization. It lets you package your application code along with all required software, libraries, and dependencies into a lightweight unit called a container. A Docker container can run consistently on any machine, whether it's your laptop, a server, or the cloud.

With Docker:

  • No more worrying about software installations

  • No more compatibility issues

  • No more "It works on my machine" headaches

Docker Architecture

Let’s break down how Docker works under the hood:

1. Dockerfile

At the core of Docker is the Dockerfile, a text document that contains instructions for assembling a Docker image (a minimal sketch follows the list below). It specifies:

  • The base OS or runtime (like Ubuntu, Node.js, Java)

  • Where to copy the app code

  • Commands to install dependencies

  • Instructions to run the app
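
For the stack described earlier, a minimal Dockerfile sketch might look like this. The tomcat:9.0-jdk17 base image, the paths, and the myapp name are illustrative assumptions, not a prescribed setup:

     # Illustrative Dockerfile; base image, paths, and app name are assumptions
     FROM tomcat:9.0-jdk17                              # Base: Tomcat 9.0 with Java 17
     COPY target/myapp.war /usr/local/tomcat/webapps/   # Copy the packaged app into Tomcat
     EXPOSE 8080                                        # Document the port the app listens on
     CMD ["catalina.sh", "run"]                         # Start Tomcat in the foreground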

2. Docker Image

From the Dockerfile, we build a Docker Image using the docker build command (a sketch follows the list below). This is a snapshot of your application code and its environment, bundled together.
It's a read-only, portable unit that includes:

  • App code

  • Runtime (e.g., Java, Python)

  • Libraries, dependencies

  • Environment variables or configurations
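
As a quick sketch, building an image from that Dockerfile might look like this (myapp and the 1.0 tag are hypothetical names):

     docker build -t myapp:1.0 .    # Build an image from the Dockerfile in the current directory
     docker images                  # Verify the new image appears in the local list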

3. Docker Registry

Once the image is ready, it can be pushed to a Docker Registry like Docker Hub. This serves as:

  • A central storage location for your images

  • A way for team members and systems to pull images when needed

  • A great tool for CI/CD pipelines

This makes deploying consistent versions across environments simple and scalable.
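
A sketch of pushing an image to Docker Hub; the myuser account and myapp name are placeholders for your own:

     docker login                             # Authenticate with Docker Hub
     docker tag myapp:1.0 myuser/myapp:1.0    # Re-tag the image under your registry namespace
     docker push myuser/myapp:1.0             # Upload it to the registry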

4. Docker Container

When an image is executed, it becomes a Docker Container – the live, running instance of the application.
A container:

  • Is isolated from the host OS

  • Has its own filesystem and environment

  • Uses the image as a base but is writable

  • Can be started/stopped independently

Containers are lightweight and efficient – you can spin up hundreds without the overhead of VMs.
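
To make this concrete, here is a sketch of running the hypothetical image from earlier as a container:

     docker run -d -p 8080:8080 --name myapp-container myapp:1.0   # Start in the background, mapping host port 8080 to the container
     docker ps                                                     # Confirm the container is running
     docker logs myapp-container                                   # Inspect its output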

How the Pieces Fit Together

Here’s the typical flow of using Docker in real-world scenarios (a command-level sketch follows the list):

  1. Write a Dockerfile describing app dependencies

  2. Build an image using the Dockerfile

  3. Push the image to Docker Hub (or any private registry)

  4. Pull the image on any server or environment

  5. Run it as a container using a simple docker run command
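
Put together, the whole flow fits in a handful of commands; a sketch reusing the hypothetical names from above:

     docker build -t myapp:1.0 .                     # Steps 1-2: build the image from the Dockerfile
     docker tag myapp:1.0 myuser/myapp:1.0           # Step 3: tag the image for your registry namespace...
     docker push myuser/myapp:1.0                    # ...and push it to Docker Hub
     docker pull myuser/myapp:1.0                    # Step 4: pull it on any other machine
     docker run -d -p 8080:8080 myuser/myapp:1.0     # Step 5: run it as a container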

Installing Docker on Linux (EC2)

  1. Launch an EC2 instance on AWS (Amazon Linux).

  2. Run the following commands:

     sudo yum update -y            # Update all system packages to the latest version without asking for confirmation
     sudo yum install docker -y    # install Docker 
     sudo service docker start     # Start Docker  
     docker -v                     # check Docker version
    
  3. Add your user to the Docker group:

     sudo usermod -aG docker ec2-user   # Add ec2-user to the docker group so it can run Docker without sudo
     exit                               # Log out; the group change takes effect on the next login
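
  4. Reconnect to the instance, then verify that Docker works without sudo (hello-world is Docker's official test image):

     docker run hello-world    # Pulls a tiny test image and runs it; prints a confirmation message
     docker ps -a              # The exited hello-world container should appear in the list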
    

Useful Docker Commands

  • docker images – List all local images

  • docker ps – Show running containers

  • docker ps -a – Show all containers (running and stopped)

  • docker pull <image> – Download image from Docker Hub

  • docker run <image> – Create and start a container

  • docker run -d <image> – Run in detached mode

  • docker stop <container> – Stop a container

  • docker start <container> – Start a stopped container

  • docker rm <container> – Remove a container

  • docker rmi <image> – Remove an image

  • docker system prune -a – Remove stopped containers, unused images and networks, and build cache

Note: You cannot remove an image while a container created from it still exists. Delete the container first; docker rmi --force can remove an image still referenced by a stopped container, but it does not delete the container itself.
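
For instance, with the hypothetical names used earlier:

     docker rm myapp-container    # Remove the stopped container first
     docker rmi myapp:1.0         # Then the image can be removed cleanly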

Final Thoughts

Getting started with Docker was a game-changer. It simplified the entire setup process and made deployments feel effortless. Running my app inside a container for the first time felt empowering, like I finally had control over consistency across different environments. I'm excited to keep exploring and containerizing more complex projects in the days ahead.
