Day 14 Blog: Scaling My Flask App with Docker Compose and Nginx Load Balancing – A Wild Ride!

Usman Jap

Picture me sipping coffee at 3:30 PM WIB, ready to make my app handle traffic like a rockstar. Spoiler: there were some “oops” moments, but we got there!

The Mission: Scaling Like a Superhero

By Day 13, my Flask + Redis app was a well-oiled machine, rocking custom networks, health checks, secrets, and startup order with depends_on. But today? It’s time to level up! Day 14’s goal was to scale my web service to multiple instances, use Docker Compose’s replicas for load distribution, and add Nginx as a load balancer to handle traffic like a pro. I also wanted to ditch direct port mappings for web and web-prod, letting Nginx take the spotlight on port 8080.

Hands-On: Scaling with Nginx Load Balancing

Let’s get to the juicy part: scaling my Flask app and routing traffic through Nginx. I hit a snag with port 8080 refusing connections (classic Docker drama!), so I revised my approach to use Nginx as a load balancer, removing ports from web and web-prod services. Here’s how it went down, step-by-step, with a sprinkle of trial and error.

Update docker-compose.yml with Nginx

I navigated to my Flask app directory:

cd ~/tmp/flask-redis-app

Then, I opened docker-compose.yml with nano and added an nginx service to load balance my web and web-prod services, removing their ports mappings. Here’s the revised config (simplified for clarity):

services:
  web:
    build: .
    environment:
      - REDIS_HOST=redis-service
      - APP_TITLE=${APP_TITLE}
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    secrets:
      - redis_password
    depends_on:
      redis:
        condition: service_healthy
        restart: true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    profiles:
      - dev
    networks:
      app-net:
        aliases:
          - flask-service
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
  redis:
    image: redis:latest
    command: redis-server --requirepass ${REDIS_PASSWORD}
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    secrets:
      - redis_password
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 5s
    profiles:
      - dev
      - prod
    networks:
      app-net:
        aliases:
          - redis-service
  web-prod:
    build:
      context: .
      dockerfile: Dockerfile.prod
    environment:
      - REDIS_HOST=redis-service
      - APP_TITLE=${APP_TITLE_PROD}
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    secrets:
      - redis_password
    depends_on:
      redis:
        condition: service_healthy
        restart: true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 15s
      timeout: 5s
      retries: 5
      start_period: 5s
    logging:
      driver: json-file
      options:
        max-size: "20m"
        max-file: "5"
    profiles:
      - prod
    networks:
      app-net:
        aliases:
          - flask-prod-service
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "8080:80"
    depends_on:
      - web
      - web-prod
    networks:
      app-net:
    profiles:
      - dev
      - prod
networks:
  app-net:
    driver: bridge
    name: flask-app-net
volumes:
  redis-data:
secrets:
  redis_password:
    file: ./redis_password.txt

Key Changes:

  • Removed ports from web and web-prod to prevent direct host access.

  • Added nginx service, mapping port 8080:80 to expose Nginx on the host.

  • Mounted a custom nginx.conf for load balancing.

  • Set depends_on to ensure Nginx starts after web/web-prod.

Next, I created nginx.conf:

nano nginx.conf

events {}
http {
  upstream flask_app {
    server flask-service:5000;
    server flask-prod-service:5000;
  }
  server {
    listen 80;
    location / {
      proxy_pass http://flask_app;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}

This config load balances between flask-service (dev) and flask-prod-service (prod) on port 5000. One caveat worth knowing: nginx resolves upstream hostnames when it starts, so if only one profile is running and the other alias doesn't exist on the network, nginx can refuse to start with a “host not found in upstream” error; keep both upstream hosts reachable, or trim the upstream block to match the active profile.
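
With no extra directives, nginx's upstream module uses round-robin: each incoming request goes to the next server in the list. Here's a minimal Python model of that behavior (the backend names mirror my upstream block; this is an illustration of the scheduling idea, not nginx internals):

```python
from itertools import cycle

# Backends named after the aliases in my upstream block.
backends = ["flask-service:5000", "flask-prod-service:5000"]

def distribute(backends, n_requests):
    """Model round-robin: request i goes to backends[i % len(backends)]."""
    rr = cycle(backends)
    return [next(rr) for _ in range(n_requests)]

print(distribute(backends, 4))  # each backend gets every other request
```

With two upstreams and four requests, each backend serves exactly two, which is the even spread I saw in the logs later.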

Verify Flask App and Dependencies (10 minutes)

I checked app/app.py to ensure it handles shared Redis state (the visit counter uses Redis's atomic incr), confirmed Dockerfile.prod runs Gunicorn, and reviewed the dev Dockerfile:

cat Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY app/ .
RUN apt-get update && apt-get install -y curl redis-tools && pip install flask redis && apt-get clean && rm -rf /var/lib/apt/lists/*
CMD ["./wait-for-redis.sh"]
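
Why atomic incr matters once there are multiple replicas: a naive "get, add 1, set" from two replicas can interleave and lose an update, while Redis performs INCR's read-modify-write atomically on the server. Here's a toy stand-in for that guarantee using a lock (pure Python, not redis-py):

```python
import threading

class AtomicCounter:
    """Toy analogue of Redis INCR: the read-modify-write happens under a lock."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def incr(self):
        with self._lock:
            self._value += 1
            return self._value

visits = AtomicCounter()
# Simulate three "replicas" hammering the shared counter.
threads = [threading.Thread(target=lambda: [visits.incr() for _ in range(1000)])
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(visits._value)  # no lost updates: exactly 3000
```

Without the lock (or without INCR on the Redis side), some of those 3000 increments would silently vanish under contention.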

All good! I also confirmed .env settings:

cat .env
REDIS_HOST=redis-service
APP_TITLE=App from .env
APP_TITLE_PROD=Production Flask App
REDIS_PASSWORD=supersecretpassword
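
Compose substitutes ${APP_TITLE} and friends from this .env file using plain shell-style interpolation. Python's string.Template does the same kind of expansion, which is handy for sanity-checking what a value will become (an illustration of the mechanism, not Compose's actual code):

```python
from string import Template

# Values as they appear in my .env file.
env = {"APP_TITLE": "App from .env", "REDIS_HOST": "redis-service"}

# The same ${VAR} syntax the compose file uses.
line = Template("- APP_TITLE=${APP_TITLE}").substitute(env)
print(line)  # - APP_TITLE=App from .env
```

(`docker compose config` shows the fully substituted file, which is the authoritative check.)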

Run and Test Scaling (25 minutes)

I fired up the dev profile:

docker compose --profile dev up -d --build --wait

Checked services:

docker compose ps

Saw three web instances, one redis, and one nginx, all Up (healthy). I tested load balancing via Nginx:

for i in {1..6}; do curl http://localhost:8080; sleep 1; done

Boom! “App from .env: Visited X times.” with incrementing counts, spread across web instances. I checked logs:

docker compose logs web

Requests were nicely distributed. For prod, I switched profiles:

docker compose --profile dev down
docker compose --profile prod up -d --build --wait

Tested:

for i in {1..4}; do curl http://localhost:8080; sleep 1; done

“Production Flask App: Visited X times.” worked like a charm. I dynamically scaled:

docker compose --profile prod up -d --scale web-prod=5

Five web-prod instances appeared (docker compose ps) — note that under the prod profile you scale web-prod, since web only exists in the dev profile. Cleaned up:

docker compose --profile prod down

Mastering Commands and Grok Queries (45 minutes)

Next, I practiced scaling commands like a Docker ninja:

docker compose --profile prod up -d --scale web-prod=2
docker stats --no-stream
docker compose --profile prod up -d --remove-orphans

I stress-tested with 10 concurrent requests:

for i in {1..10}; do curl http://localhost:8080 & done
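
The trailing `&` backgrounds each curl so all ten run at once. For scripted load tests I sometimes reach for a Python thread pool instead; here's a sketch against a stand-in request function (swap in a real urllib.request.urlopen call to http://localhost:8080 when the stack is up):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    """Stand-in for an HTTP GET; replace with urllib.request.urlopen(...) for real."""
    return f"response-{i}"

# Fire 10 "requests" concurrently, like `curl ... &` in the shell loop.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fake_request, range(10)))
print(len(results))  # 10
```
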

Logs showed even distribution. Nginx was killing it! I leaned on Grok for insights, asking:

  • “Explain best practices for Docker Compose scaling in 150 words, with a 2025 example.”

  • “How does load balancing work in Docker Compose in 100 words?”

  • “Step-by-step, how does Docker Compose handle service scaling?” (using think mode).
