Dockerizing a 3-Tier MERN Application

Welcome to my hands-on journey with Docker! This blog is all about learning how to containerize a 3-tier MERN (MongoDB, Express, React, Node.js) application. Whether you’re new to Docker or looking to sharpen your skills, this step-by-step guide will walk you through the process.

GitHub Repository: Dockerize MERN Application


Application Overview

Our MERN application has the following structure:

├── client
│   └── (Frontend files and Dockerfile)
├── server
│   └── (Backend files and Dockerfile)
├── docker-compose.yml
└── README.md

We’ll containerize the three layers of the application:

  1. MongoDB: The database layer.

  2. Backend: A Node.js server for API handling.

  3. Frontend: A React application for the user interface.


Step 1: Setting Up Docker Network

A custom bridge network lets containers resolve each other by name and communicate directly.

Command:

docker network create todo-mern

This creates a bridge network named todo-mern; any container attached to it can reach the others by container name (for example, mongodb).


Step 2: Setting Up MongoDB

We’ll now run MongoDB as a container.

Command:

docker run -d -p 27017:27017 --name mongodb --net todo-mern mongo

Explanation:

  • Detached Mode (-d): Runs MongoDB in the background.

  • Port Mapping (-p): Maps container port 27017 to port 27017 on the host machine.

  • Network (--net): Connects MongoDB to the todo-mern network.

Bonus Tip: Test MongoDB by entering the container and launching the Mongo shell:

docker exec -it mongodb bash  
mongosh

Step 3: Building the Backend Service

Dockerfile for the Backend

Here’s the Dockerfile for the backend:

FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8081
CMD ["npm", "start"]
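Since `COPY . .` copies the entire build context, it's worth pairing this Dockerfile with a `.dockerignore` file so host-only artifacts don't end up in the image (a minimal sketch; adjust the entries to your repo):

```text
# .dockerignore — keep host-only files out of the build context
node_modules
npm-debug.log
.env
.git
```

This also keeps builds fast and correct: `npm install` inside the container produces a fresh `node_modules`, so copying the host's copy is wasted work at best and a platform mismatch at worst.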

Steps to Build and Run:

  1. Build the backend image:

     docker build -t todo-server-image ./server
    
  2. Run the backend container:

     docker run -d -p 8081:8081 -e MONGODB_URI=mongodb://mongodb:27017/todoApp --name todo-server --net todo-mern todo-server-image
    

Explanation:

  • MONGODB_URI: Environment variable telling the backend where to reach MongoDB; the hostname mongodb resolves to the database container over the todo-mern network.

  • Port Mapping: Exposes the API on port 8081.
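To make the `-e MONGODB_URI=...` flag concrete, here's a minimal sketch of how the server side might pick it up (the helper name and the fallback value are illustrative, not taken from the repo):

```javascript
// Prefer the MONGODB_URI injected via `docker run -e` (or Compose);
// fall back to a local default when running outside Docker.
function getMongoUri(env) {
  return env.MONGODB_URI || "mongodb://localhost:27017/todoApp";
}

// Inside the todo-mern network the hostname "mongodb" resolves to the
// MongoDB container, so the backend would connect with something like:
//   mongoose.connect(getMongoUri(process.env));
console.log(getMongoUri(process.env));
```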


Step 4: Building the Frontend Service

Dockerfile for the Frontend

FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5173
CMD ["npm", "run", "dev", "--", "--host"]
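Note that this Dockerfile runs the Vite dev server, which is fine for this learning exercise but not something you'd ship. For production you'd typically use a multi-stage build that compiles the app and serves the static files (a sketch, assuming Vite's default `npm run build` output in `dist/`):

```dockerfile
# Stage 1: build the static assets
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve them with nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

The final image contains only nginx and the built assets, not Node.js or node_modules, so it is much smaller.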

Steps to Build and Run:

  1. Build the frontend image:

     docker build -t todo-client-image ./client
    
  2. Run the frontend container:

     docker run -d -p 5173:5173 -e VITE_APP_BACKEND_API=http://localhost:8081 --name todo-client --net todo-mern todo-client-image
    

Explanation:

  • VITE_APP_BACKEND_API: Points the frontend at the backend API. It can use localhost because these requests are made by the browser on the host, not from inside the container.

  • Host Mode: The --host flag makes the Vite dev server listen on 0.0.0.0 instead of only on localhost, so it is reachable from outside the container.
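The same effect can be baked into the Vite config instead of passing `--host` on the command line every time (a sketch of `vite.config.js`; assumes an otherwise default Vite setup):

```javascript
// vite.config.js — equivalent of `npm run dev -- --host`
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    host: true, // listen on 0.0.0.0 so the mapped container port is reachable
    port: 5173, // Vite's default dev port, matching EXPOSE 5173
  },
});
```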


Step 5: Simplifying with Docker Compose

Managing multiple containers is easier with Docker Compose.

docker-compose.yml File:

version: "3"
services:
  mongodb:
    container_name: mongo
    image: mongo:latest
    ports:
      - "27017:27017"

  backend:
    container_name: server
    build: ./server
    environment:
      - MONGODB_URI=mongodb://mongo:27017/todoApp
    ports:
      - "8081:8081"
    depends_on:
      - mongodb

  frontend:
    container_name: client
    build: ./client
    environment:
      - VITE_APP_BACKEND_API=http://localhost:8081
    ports:
      - "5173:5173"
    depends_on:
      - backend

How to Use It:

  1. Start the entire stack:

     docker-compose up --build
    
  2. Stop the containers:

     docker-compose down
    

Benefits of Docker Compose:

  • Single command to start/stop the entire application.

  • Automatically handles dependencies (e.g., MongoDB starts before the backend).
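One caveat: depends_on only waits for the MongoDB container to start, not for the database to be ready to accept connections. If the backend races ahead and fails to connect, a healthcheck plus the service_healthy condition fixes it (a sketch; requires the long-form depends_on syntax):

```yaml
services:
  mongodb:
    image: mongo:latest
    healthcheck:
      # Succeeds once mongod answers a ping
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5

  backend:
    build: ./server
    depends_on:
      mongodb:
        condition: service_healthy
```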


Commands Cheat Sheet

Here are some additional useful commands:

  1. List all running containers:

     docker ps
    
  2. View container logs:

     docker logs <container-name>
    
  3. Stop and remove the Compose stack (containers and its network):

     docker-compose down
    
  4. Prune unused containers:

     docker container prune
    

What’s Next?

After completing this project, my next steps are to:

  • Explore more features of Docker: Dive deeper into advanced features like multi-stage builds, custom networks, health checks, and more.

  • Start learning the cloud: I plan to focus on platforms like GCP and AWS, exploring topics such as Secret Manager, VPC, Interconnect, Serverless services, Data Warehousing, and more.

Check out my full blog post for all the details 👉 Hashnode Article

I hope this inspires you to try Dockerizing your projects too! If you have any feedback or questions, feel free to drop a comment. 😊

Written by

Abhishek Prajapati