Mastering Docker Compose: Building a Two-Tier Project.

Uday Sharma
5 min read

Introduction to Docker Compose:

Docker Compose is a tool used for defining and running multi-container Docker applications. It allows you to define a multi-container environment in a YAML file, specifying services, networks, and volumes, and then use a single command to start and run the entire environment.

Here's a basic introduction to Docker Compose:

Key Concepts:

  1. Service: A service is a containerized application or component defined in the docker-compose.yml file. Each service can have its own configuration, including the Docker image, environment variables, and more.

  2. Container: A container is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools.

  3. docker-compose.yml: This is the configuration file for Docker Compose. It defines all the services, networks, and volumes needed for the application. It uses a simple YAML syntax to specify configurations.

Basic Structure of docker-compose.yml:

version: '3'  # Version of the Docker Compose file format

services:
  service1:  # Name of the service
    image: nginx:latest  # Docker image for the service
    ports:
      - "8080:80"  # Port mapping
    environment:
      - ENV_VAR=value  # Environment variables

  service2:
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydatabase
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword

networks:
  mynetwork:  # Definition of custom networks if needed

volumes:
  myvolume:  # Definition of volumes if needed
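
Before moving on, it's worth knowing that you can sanity-check this file without starting anything. The standard Compose subcommand below parses docker-compose.yml and prints the fully resolved configuration (or an error if the YAML is invalid):

docker-compose config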

Basic Commands:

  • docker-compose up: Start the services defined in the docker-compose.yml file.

  • docker-compose down: Stop and remove the containers and networks defined in the docker-compose.yml file (named volumes are only removed if you add the -v flag).

  • docker-compose ps: List the status of the services.

  • docker-compose logs: View the logs of the running services.

  • docker-compose exec: Run commands in a running container.
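
To illustrate, a typical session with these commands, using the service names from the example above, might look like this (run from the directory containing docker-compose.yml):

docker-compose up -d                # Start all services in the background
docker-compose ps                   # Check that the containers are running
docker-compose logs -f service1    # Follow the logs of one service
docker-compose exec service2 psql -U myuser -d mydatabase   # Open the Postgres CLI inside the container
docker-compose down                 # Stop and remove everything when done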

YAML Configuration:

Docker Compose uses a YAML file, usually named docker-compose.yml, to define your application's services, networks, and volumes. This human-readable configuration provides a clear overview of your application's structure.

Services and Containers:

Services in Docker Compose represent containerized components of your application. These services can be easily managed, scaled, and interconnected. Containers provide isolated environments for your applications to run.
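
For example, a stateless service can be scaled to multiple container instances with a single flag. The command below (a standard docker-compose option) would run three copies of service1; note that the fixed host-port mapping "8080:80" from the earlier example would have to be removed or changed, since only one container can bind a given host port:

docker-compose up -d --scale service1=3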

Networks and Volumes:

In Docker Compose, networks and volumes are essential concepts that help you manage the communication between containers and persist data. Let's delve into each of these concepts:

Networks:

Definition in docker-compose.yml:

version: '3'

services:
  service1:
    image: nginx:latest
    networks:
      - mynetwork

  service2:
    image: postgres:latest
    networks:
      - mynetwork

networks:
  mynetwork:
    driver: bridge

In the above example:

  • mynetwork is a user-defined bridge network.

  • Both service1 and service2 are connected to this network.

  • This enables communication between containers on the same network.

Key Points:

  1. Bridge Network (Default): By default, Docker Compose creates a bridge network for your application. Containers connected to this network can communicate with each other.

  2. User-Defined Networks: You can create custom bridge networks with specific configurations. This helps in organizing and isolating communication between containers.

  3. Service-to-Service Communication: Containers within the same network can refer to each other using the service names defined in the docker-compose.yml file.

  4. External Communication: If a container needs to communicate with a service outside the Docker Compose file, it can do so using the external hostname or IP address.
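
To make point 3 concrete: because both services share mynetwork, a process inside service1 can reach the database container simply by using the service name service2 as a hostname. A quick way to see this (assuming the image provides the common getent utility, as Debian-based images like nginx:latest do):

# From inside service1, the name "service2" resolves to the database container's IP
docker-compose exec service1 getent hosts service2

# An application in service1 could therefore use a connection string like:
# postgres://myuser:mypassword@service2:5432/mydatabase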

Volumes:

Definition in docker-compose.yml:

version: '3'

services:
  service1:
    image: nginx:latest
    volumes:
      - myvolume:/app/data

  service2:
    image: postgres:latest
    volumes:
      - myvolume:/var/lib/postgresql/data

volumes:
  myvolume:

In this example:

  • myvolume is a named volume.

  • Both service1 and service2 share this volume, allowing them to persist data and share it between containers.

Key Points:

  1. Named Volumes: Docker Compose supports named volumes, which are managed by Docker and persist data even if containers are removed.

  2. Volume Mounting: Containers can mount volumes at specified paths, enabling them to read and write data to shared locations.

  3. Data Persistence: Volumes provide a way to persist data generated or modified by containers. This is crucial for databases, file storage, and other scenarios where data needs to outlive container lifetimes.

  4. Volume Drivers: Docker supports various volume drivers, allowing you to use different backends for volume storage (e.g., local, NFS, AWS EBS).
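
As a sketch of point 4, a named volume can declare a driver and driver options. The example below defines one ordinary local volume and one backed by an NFS share; the server address and export path are placeholders, not values from this project:

volumes:
  myvolume:
    driver: local                  # Default driver; data lives on the Docker host
  nfsvolume:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw     # Placeholder NFS server address
      device: ":/exports/data"     # Placeholder export path

You can also check where Docker stores a named volume on disk with docker volume inspect myvolume.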

Understanding and effectively using networks and volumes in Docker Compose is vital for designing scalable, maintainable, and reliable containerized applications. These features contribute significantly to the flexibility and robustness of container orchestration.

Creating a Beautiful Two-Tier Project with Docker Compose.

Prerequisites

Before we dive into the exciting world of Docker Compose, make sure you have the following set up:

  • An EC2 instance on AWS to run Docker containers.

  • Basic familiarity with the command line and SSH.

  • Docker and Docker Compose installed on your EC2 instance (a quick install sketch follows this list).

  • A desire to create and deploy awesome containerized applications!
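
If Docker is not yet installed on the instance, the following is a minimal install sketch for an Ubuntu-based EC2 instance (package names differ by distribution; on Amazon Linux you would use yum or dnf instead):

sudo apt-get update
sudo apt-get install -y docker.io docker-compose   # Docker Engine and Compose
sudo systemctl enable --now docker                 # Start Docker and enable it on boot
sudo usermod -aG docker $USER                      # Optional: run docker without sudo (log out and back in)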

To learn how to create an EC2 instance, please refer to the previous blog. Now, let's embark on this Docker Compose journey and create something amazing!

Tools Required:

Docker: For creating and managing containers.

Docker Compose: For defining and running multi-container Docker applications.

Docker Scout: For scanning Docker images for vulnerabilities.

Any code editor (like Visual Studio Code, Atom, etc.).

Access to the source code of a basic two-tier application (e.g., a simple web app with a database backend).

Overview/Description:

This project involves containerizing a two-tier application (such as a web application with a database) using Docker and orchestrating the containers using Docker Compose. The project will also include using Docker Scout to scan the created Docker images for security vulnerabilities. This will give learners practical experience in containerization, orchestration, and security aspects of Dockerized applications.

Code Repository:

Repository Platform: GitHub

Repository Link: To be provided by the instructor or set up by the learner

Access Instructions: Clone the repository to your local machine using Git. Instructions on cloning a repository can be found on GitHub's help pages.

Functional Requirements:

Containerize each component of the two-tier application using Docker.

Use Docker Compose to define and run the multi-container application.

Ensure network communication between containers (e.g., the web app container communicating with the database container).

Scan the Docker images with Docker Scout and address any reported vulnerabilities.
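
As a starting point, a minimal docker-compose.yml for such a two-tier project might look like the sketch below. The image names, credentials, and ports are placeholders to adapt to your own application; here a web tier built from a local Dockerfile and a MySQL database are assumed:

version: '3'

services:
  web:
    build: .                      # Build the web tier from a Dockerfile in this directory
    ports:
      - "8080:5000"               # Placeholder port mapping for the web app
    environment:
      - DB_HOST=db                # The database is reachable by its service name
      - DB_USER=myuser
      - DB_PASSWORD=mypassword
      - DB_NAME=mydatabase
    depends_on:
      - db
    networks:
      - appnet

  db:
    image: mysql:8.0
    environment:
      - MYSQL_DATABASE=mydatabase
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=mypassword
      - MYSQL_ROOT_PASSWORD=rootpassword
    volumes:
      - dbdata:/var/lib/mysql     # Persist database files across container restarts
    networks:
      - appnet

networks:
  appnet:

volumes:
  dbdata:

Once the stack is up, the images can be scanned for vulnerabilities with Docker Scout, for example: docker scout cves mysql:8.0.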

Non-Functional Requirements:

Performance: The containers should be optimized for performance, considering aspects like image size and startup time.

Security: Implement best practices for Docker security, including managing secrets and using least privilege principles.

Documentation: Provide a README file with clear instructions on how to build, run, and scan the application.
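
For the security requirement, one common pattern is to keep credentials out of the compose file itself with an env_file. A minimal sketch (the .env filename is conventional; the variable names are illustrative):

# .env file (keep it out of version control, e.g., via .gitignore):
DB_USER=myuser
DB_PASSWORD=mypassword

# docker-compose.yml fragment that loads it:
services:
  db:
    image: mysql:8.0
    env_file:
      - .env        # Injects the variables above into the container environment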

Thank you! We have now completed three projects in Docker. In the next blog, we will start a new topic in Docker.


Written by

Uday Sharma

This blog is exclusively dedicated to DevOps, aimed at enhancing the community's knowledge. I am eager to contribute by sharing insights and lessons learned from my specific expertise in DevOps, AWS, and Azure. With a clear understanding of DevOps challenges in the IT industry, I am currently overseeing an AWS monitoring project at Coforge, leveraging 2+ years of hands-on experience. My primary interest lies in continually learning about new DevOps challenges and solutions.