Three-Tier Docker Architecture

Keshav Jhalani
7 min read

In this article, we’ll explore the core concepts of Docker by containerizing a three-tier web application, consisting of:

  • React (frontend)

  • Node.js (backend)

  • MySQL (database)

We'll be working on a local Arch Linux system, but the process is nearly the same across other Linux distributions or even in the cloud.

Let’s get started.

Open your terminal — whichever one you prefer.


1) Docker Installation

To install Docker on Arch-based systems, use:

sudo pacman -S docker

For Debian-based systems, install docker.io instead.
On most other distributions, the package is simply named docker.

Throughout this blog, we’ll use pacman commands. If you're on a different distro, substitute with your system's package manager (e.g., apt, dnf, apk, etc.).
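
For reference, the equivalent install step on a Debian-based system would look like this:

sudo apt update
sudo apt install docker.io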

Once installed, verify Docker with:

docker -v

Start the Docker service:

sudo systemctl start docker.service

Enable it to start automatically on boot:

sudo systemctl enable docker.service

Check its current status:

sudo systemctl status docker.service
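
Optionally, add your user to the docker group so you can run docker commands without sudo (note that this effectively grants root-level privileges, so treat it as a convenience trade-off):

sudo usermod -aG docker $USER

Log out and back in for the group change to take effect.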


2) Code Setup

You can clone the entire project from the following GitHub repository:

https://github.com/Keshav005Jhalani/three-tier-docker
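
If you have git installed, cloning it looks like this:

git clone https://github.com/Keshav005Jhalani/three-tier-docker.git
cd three-tier-docker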


3) Dockerfile

What are containers and Dockerfiles?

  • A Dockerfile is like a blueprint for building a Docker container.

  • A container is exactly what it sounds like — a lightweight, standalone package that contains everything required to run your application, including code, runtime, libraries, and dependencies.

  • Containers are lightweight because they share the host system’s kernel rather than shipping a full operating system. Isolation comes from Linux namespaces, and resource limits come from cgroups.
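
To get a feel for how lightweight containers are, try a throwaway one (this uses node:lts, the same base image our backend builds on below):

docker run --rm -it node:lts node --version

Docker pulls the image if it isn’t cached locally, starts a container, prints the Node.js version from inside it, and removes the container on exit (that’s the --rm flag).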

Backend Dockerfile

FROM node:lts
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["node", "index.js"]

Explanation:

  • FROM node:lts
    Uses the official Node.js image with Long-Term Support. For production, it's recommended to use a specific version, but for demos, lts is fine. These images are available on Docker Hub, a public container registry.

  • WORKDIR /app
    Sets the working directory inside the container to /app. If it doesn’t exist, it’s created. All subsequent commands operate within this directory.

  • COPY package*.json ./
    Copies package.json and package-lock.json (if present) from your local project into the image. This is done separately to take advantage of layer caching: Docker can skip reinstalling dependencies in the next step if these files haven’t changed.

  • RUN npm install
    Installs the Node.js dependencies inside the container.

  • COPY . .
    Copies the rest of your backend code into the container.

  • EXPOSE 5000
    Indicates that the application will listen on port 5000. This is for documentation only — actual port publishing happens when you run the container with -p.

  • CMD ["node", "index.js"]
    Specifies the command to run when the container starts. This launches your backend server.

    Note: Unlike ENTRYPOINT, CMD can be overridden at runtime.
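
Before wiring everything together with Compose, you can build and run this image by hand; the tag three-tier-backend here is just an example name:

docker build -t three-tier-backend ./backend
docker run --rm -p 5000:5000 three-tier-backend

Here -p 5000:5000 actually publishes the port that EXPOSE merely documents. (Run on its own like this, the backend won’t be able to reach MySQL yet; Docker Compose takes care of that below.)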

Frontend Dockerfile

FROM node:lts AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80

Explanation:

  • FROM node:lts AS builder
    Starts a multi-stage build. This stage installs dependencies and builds the production-ready React app. Naming the stage builder allows us to copy build files from it in later stages.

  • WORKDIR /app
    Sets the working directory.

  • COPY package*.json ./
    Copies dependency-related files for caching benefits.

  • RUN npm install
    Installs React app dependencies.

  • COPY . .
    Copies the rest of the React project files.

  • RUN npm run build
    Builds the React app for production. The output is stored in the build/ directory, containing optimized static files.

  • FROM nginx:alpine
    Switches to a new, lightweight NGINX image. This keeps the final container small and efficient for serving static files.

  • RUN rm -rf /usr/share/nginx/html/*
    Clears NGINX’s default serving directory.

  • COPY --from=builder /app/build /usr/share/nginx/html
    Copies the build output from the first stage (builder) to the NGINX serving directory.

  • COPY nginx.conf /etc/nginx/conf.d/default.conf
    Replaces the default NGINX configuration with your custom configuration.

  • EXPOSE 80
    Documents that NGINX will serve traffic on port 80.

Multi-stage builds help reduce the overall size of the final image. The idea is that some steps need a full build environment, but the artifacts they produce don’t need that environment at runtime. In our case, building the React app requires the complete Node.js toolchain, so we use the node image for that stage; once the static files are built, the heavy Node environment is no longer needed, so we copy just the build output into a fresh nginx:alpine image. The final image carries nothing beyond NGINX and the static files it serves, with no extra space wasted.
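
You can see the payoff by comparing image sizes after a build (three-tier-frontend is just an example tag):

docker build -t three-tier-frontend ./frontend
docker images

The frontend image, based on nginx:alpine, should come out dramatically smaller than the node:lts image used in the build stage.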

4) Docker Compose

This is the docker-compose.yml file, which defines and manages our multi-container application.

version: '3.8'

services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw 
      MYSQL_DATABASE: testdb
    networks:
      - app-net
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  backend:
    build:
      context: ./backend
    restart: always
    depends_on:
      db:
        condition: service_healthy
    networks:
      app-net:
    environment:
      DB_HOST: db
      DB_NAME: testdb
      DB_USER: root
      DB_PASS: my-secret-pw
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/api/users"]
      interval: 10s
      timeout: 5s
      retries: 5

  frontend:
    build:
      context: ./frontend
    restart: always
    depends_on:
      backend:
        condition: service_healthy
    ports:
      - "80:80"
    networks:
      - app-net
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  app-net:

Docker Compose Overview

Docker Compose is a tool used to define and run multi-container Docker applications. Instead of manually starting each container one by one, Compose lets you automate everything from build to startup using a single file.

This setup defines three services (containers):

  • db – MySQL database

  • backend – Node.js API server

  • frontend – React frontend served via NGINX

All services are connected through a custom bridge network: app-net.
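
Once the stack is running, you can confirm this network exists and see which containers are attached to it. Compose prefixes the network name with the project (directory) name, so the exact name on your machine may differ:

docker network ls
docker network inspect three-tier-docker_app-net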

Service Breakdown:

i. db (MySQL Database)

  • Uses the official mysql image.

  • restart: always ensures the container is restarted automatically whenever it stops, regardless of exit status, and after the Docker daemon restarts.

  • Environment variables:

    • MYSQL_ROOT_PASSWORD sets the root password.

    • MYSQL_DATABASE creates a database named testdb.

  • Mounts the init.sql script to initialize the database with schema and data.

  • Connected to the app-net network.

  • A health check ensures MySQL is up and responsive before allowing dependent containers to start.
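
Once the stack is up, you can watch the container go from starting to healthy, or run the same ping the health check uses:

docker compose ps
docker compose exec db mysqladmin ping -h localhost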

ii. backend (Node.js API Server)

  • Built from the Dockerfile located in the ./backend directory.

  • Waits for the database to become healthy using depends_on.

  • Shares the same app-net network.

  • Environment variables provide database credentials and host info.

  • A health check pings http://localhost:5000/api/users to confirm the server is running.

  • This service does not expose ports externally—it communicates internally within the Docker network.
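
You can verify this internal-only communication by calling the API from another container on the same network (this assumes curl is available in the frontend image, which the health checks in this file already rely on):

docker compose exec frontend curl http://backend:5000/api/users

The same request against http://localhost:5000/api/users from your host machine would fail, because the backend’s port is never published.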

iii. frontend (React + NGINX)

  • Built from the Dockerfile in the ./frontend directory.

  • Waits for the backend service to be healthy.

  • Maps container port 80 to host port 80, making the application available at http://localhost (on our local system).

  • Also connected to app-net.

  • Health check validates if the frontend is serving content correctly.

How to Run the Setup:

docker-compose up

Run this command from the project directory where the docker-compose.yml file is present. Also, whenever you change anything in the code, don’t forget to add the --build flag to the command above so that Compose rebuilds the images and incorporates your changes.
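
A typical workflow therefore looks like this:

docker compose up --build        # rebuild images and start in the foreground
docker compose up -d             # start in the background without rebuilding
docker compose logs -f backend   # follow the logs of a single service
docker compose down              # stop and remove the containers and network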

After the services are up, access the application in your browser at:

http://localhost

5) Bonus Edit:

Routing and Nginx

If you look at the nginx.conf file, you'll notice that we have defined two routes:

  1. Frontend Route (Static Files)
    The first route is straightforward and comes by default. It specifies that any request to / should be served from /usr/share/nginx/html. This is the directory where Nginx looks for static frontend build files. These files are served on port 80 inside the container, which is mapped to port 80 on the host machine.

  2. API Route (Backend Service)
    The second route handles requests to /api/. When such a request comes to the frontend container, Nginx forwards it to the backend service running at http://backend:5000.
    For example, in our project, the frontend (in frontend/src/App.js) makes a GET request to /api/users to fetch user names. Nginx intercepts this and routes it to the backend as a GET request to http://backend:5000/api/users.
    On the backend side (backend/index.js), we have a route handler for this endpoint. It queries the MySQL database for user data and returns the response back to the frontend.

  3. In the nginx.conf file, when we route traffic to the backend URL, such as http://backend:5000/api/users, the term backend acts as the domain name for the backend service. This works because Docker sets up an internal DNS system where each service name can be resolved as a hostname by other services on the same network.

    Additionally, we can define a network alias for the backend container in the Docker Compose file and use that alias as the domain name instead. In Docker, a service can be accessed using its service name, IP address, or network alias, as long as all containers are part of the same Docker network.
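
Putting the two routes together, the nginx.conf described above would look roughly like this (a minimal sketch, not necessarily the exact file in the repository):

server {
    listen 80;

    # Route 1: serve the React build output as static files
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # Route 2: forward API calls to the backend service,
    # resolved via Docker's internal DNS
    location /api/ {
        proxy_pass http://backend:5000;
    }
}

And a hypothetical alias (here, api) could be declared on the backend service in docker-compose.yml like this, after which proxy_pass http://api:5000 would work just as well:

  backend:
    networks:
      app-net:
        aliases:
          - api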
