Day 15: Supercharging My Flask App with NGINX Load Balancing and Docker Compose Cleanup


I’m diving into an exciting session: enhancing my Flask + Redis app with NGINX as a load balancer, removing direct ports for security, and mastering Docker Compose environment cleanup and resource management.
The Mission: Why Day 15 Rocks
After two weeks of conquering Docker basics (volumes, networks, secrets, health checks, and Day 14's scaling and load balancing), Day 15 is about making my Flask + Redis app leaner, meaner, and more professional. My goals? Add NGINX to handle load balancing for my scaled Flask instances, remove direct port mappings for better security, and polish my cleanup skills with commands like docker compose rm. Let's break it down into three chunks: hands-on Docker Compose tweaks, cleanup command practice with Grok queries, and blogging updates. Ready? Let's roll!
1. Hands-On: NGINX Load Balancing and Resource Management (60 Minutes)
First up, I revamped my Flask + Redis app to use NGINX as a load balancer and added resource limits to keep things efficient. My app, living in ~/tmp/flask-redis-app, has a web service (3 replicas for dev), a web-prod service (2 replicas for prod), and a redis service. I also introduced separate nginx-dev (port 8080) and nginx-prod (port 8081) services to route traffic like a pro.
Step 1: Quick Review with Spaced Repetition
To set the stage, I asked Grok: “Explain Docker Compose environment cleanup and resource management in 50 words.” Grok replied: “Docker Compose cleanup (e.g., rm, prune) removes stopped containers, unused volumes, and networks, freeing system resources. Resource management sets CPU/memory limits in docker-compose.yml, optimizing performance. Example: Limit web to 0.5 CPUs, 256M memory, ensuring efficient, scalable Flask apps in 2025.” Nailed it!
Step 2: Updating docker-compose.yml
I navigated to my app directory:
cd ~/tmp/flask-redis-app
nano docker-compose.yml
I revised the file to remove ports from web and web-prod, adding nginx-dev and nginx-prod services. Here's the updated file with the key changes:
services:
  web:
    build: .
    environment:
      - REDIS_HOST=redis-service
      - APP_TITLE=${APP_TITLE}
    secrets:
      - redis_password
    depends_on:
      redis:
        condition: service_healthy
        restart: true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    profiles:
      - dev
    networks:
      app-net:
        aliases:
          - flask-service
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 128M
  redis:
    image: redis:latest
    command: redis-server --requirepass ${REDIS_PASSWORD}
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    secrets:
      - redis_password
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 5s
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "2"
    profiles:
      - dev
      - prod
    networks:
      app-net:
        aliases:
          - redis-service
    deploy:
      resources:
        limits:
          cpus: '0.3'
          memory: 128M
        reservations:
          cpus: '0.1'
          memory: 64M
  web-prod:
    build:
      context: .
      dockerfile: Dockerfile.prod
    environment:
      - REDIS_HOST=redis-service
      - APP_TITLE=${APP_TITLE_PROD}
    secrets:
      - redis_password
    depends_on:
      redis:
        condition: service_healthy
        restart: true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 15s
      timeout: 5s
      retries: 5
      start_period: 5s
    logging:
      driver: json-file
      options:
        max-size: "20m"
        max-file: "5"
    profiles:
      - prod
    networks:
      app-net:
        aliases:
          - flask-prod-service
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.7'
          memory: 512M
        reservations:
          cpus: '0.4'
          memory: 256M
  nginx-dev:
    image: nginx:latest
    volumes:
      - ./nginx-dev.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "8080:80"
    depends_on:
      - web
    networks:
      app-net:
        aliases:
          - nginx-dev-service
    deploy:
      resources:
        limits:
          cpus: '0.2'
          memory: 128M
        reservations:
          cpus: '0.1'
          memory: 64M
    profiles:
      - dev
  nginx-prod:
    image: nginx:latest
    volumes:
      - ./nginx-prod.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "8081:80"
    depends_on:
      - web-prod
    networks:
      app-net:
        aliases:
          - nginx-prod-service
    deploy:
      resources:
        limits:
          cpus: '0.2'
          memory: 128M
        reservations:
          cpus: '0.1'
          memory: 64M
    profiles:
      - prod
networks:
  app-net:
    driver: bridge
    name: flask-app-net
volumes:
  redis-data:
secrets:
  redis_password:
    file: ./redis_password.txt
What’s new? No ports for web or web-prod—traffic now flows through NGINX. I added resource limits (0.5 CPUs, 256M memory for web; 0.7 CPUs, 512M for web-prod; 0.2 CPUs, 128M for NGINX). The redis service and app-net network stayed intact.
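Before building anything, it's worth confirming that the profiles, the .env substitution, and the new resource limits all resolve the way you expect. A quick sanity pass (nothing here is specific to my app beyond the file above):
docker compose --profile dev config                # render the fully interpolated config; fails fast on YAML mistakes
docker compose --profile dev config --services     # which services the dev profile activates
docker compose --profile prod config --services    # same for prod
docker compose --profile dev config | grep -A 4 "resources:"   # spot-check that the CPU/memory limits made it in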
Step 3: NGINX Configs
I created two NGINX configs for load balancing:
- nginx-dev.conf (for web):
events {}  # nginx requires an events block (even an empty one) when this file replaces /etc/nginx/nginx.conf
http {
    upstream flask_dev {
        server flask-service:5000 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://flask_dev;
            proxy_set_header Host $host;
        }
        location /health {
            proxy_pass http://flask_dev;
        }
    }
}
- nginx-prod.conf (for web-prod): Similar, but targeting flask-prod-service:5000.
These configs route traffic to the scaled replicas, with max_fails=3 ensuring NGINX skips unhealthy instances.
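The reason a single server flask-service:5000 line covers all the replicas is Docker's embedded DNS: the flask-service alias resolves to one IP per replica on app-net, and NGINX picks up every address it gets back when it loads the config. A quick way to see it, a sketch using a throwaway busybox container attached to the same network:
docker run --rm --network flask-app-net busybox nslookup flask-service
# expect one A record per running web replica
Since NGINX only resolves the name at startup with this config, the replicas need to exist before nginx-dev comes up, which is exactly what depends_on: web takes care of.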
Step 4: Updating Dockerfiles
For the web service (dev), I used:
FROM python:3.9-slim
WORKDIR /app
COPY app/ .
RUN apt-get update && apt-get install -y curl redis-tools && pip install flask redis && apt-get clean
CMD ["./wait-for-redis.sh"]
For web-prod, I added gunicorn:
FROM python:3.9-slim
WORKDIR /app
COPY app/ .
RUN apt-get update && apt-get install -y curl redis-tools && pip install flask redis gunicorn && apt-get clean
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "app:app"]
The wait-for-redis.sh script ensures web waits for Redis:
#!/bin/sh
until redis-cli -h redis-service -a $REDIS_PASSWORD ping; do
  echo "Waiting for Redis..."
  sleep 2
done
exec python app.py
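One thing worth noting: in the compose file above, web only mounts the redis_password secret; unlike redis, it doesn't export REDIS_PASSWORD as an environment variable. If that variable isn't set inside the container, a variant of the script can fall back to the mounted secret file. A minimal sketch, assuming Compose's default /run/secrets/<name> mount path:
#!/bin/sh
# use REDIS_PASSWORD if set, otherwise read the Compose-mounted secret file
REDIS_PASSWORD="${REDIS_PASSWORD:-$(cat /run/secrets/redis_password)}"
until redis-cli -h redis-service -a "$REDIS_PASSWORD" ping; do
  echo "Waiting for Redis..."
  sleep 2
done
exec python app.py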
Step 5: Testing the Setup
I verified .env:
cat .env
REDIS_HOST=redis-service
APP_TITLE=App from .env
APP_TITLE_PROD=Production Flask App
REDIS_PASSWORD=supersecretpassword
Then, I tested the dev profile:
docker compose --profile dev up -d --build --wait
docker stats --no-stream
Result: web (3 replicas) used ~0.5 CPUs, <256M memory; nginx-dev <128M. I hit the endpoint:
curl http://localhost:8080
# “App from .env: Visited X times.”
curl http://localhost:8080/health
# “OK”
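To see that requests really do land on different replicas, Compose's log prefixes are handy: each log line is prefixed with the container name, and Flask's built-in server logs every request. A sketch, assuming those defaults:
for i in $(seq 1 9); do curl -s http://localhost:8080/ > /dev/null; done
docker compose --profile dev logs web | grep "GET / " | awk -F'|' '{print $1}' | sort | uniq -c
# roughly even counts across web-1, web-2, and web-3 means NGINX is rotating through the upstream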
For prod:
docker compose --profile prod up -d --build --wait
curl http://localhost:8081
# “Production Flask App: Visited X times.”
Load balancing worked: requests were spread across the replicas! I then ran my cleanup script:
cat cleanup.sh
#!/bin/bash
echo "Stopping and removing Flask app containers..."
docker compose --profile $1 down
echo "Removing unused volumes..."
docker volume prune -f
echo "Removing unused networks..."
docker network prune -f
echo "Removing unused images..."
docker image prune -f
echo "Cleanup complete!"
./cleanup.sh dev
# Containers, unused volumes, networks, images removed; redis-data persisted.
2. Mastering Cleanup Commands and Grok Queries (45 Minutes)
Next, I dove into cleanup commands and used Grok to deepen my understanding. I practiced:
docker compose --profile dev up -d
docker compose rm -s -f
# -s stops the services first; rm only removes already-stopped containers
docker system prune -f
docker volume ls
# redis-data persists
I asked Grok: “Explain Docker Compose resource cleanup commands in 100 words.” Grok’s response was gold: “Commands like docker compose rm -f remove stopped services, freeing resources. docker system prune -f clears unused containers, images, and networks, while docker volume prune -f removes unused volumes (except named ones like redis-data). These ensure a clean environment, preventing disk bloat and memory leaks. Best practice: Run cleanup scripts post-deployment to maintain efficiency, especially in 2025’s resource-conscious DevOps workflows.”
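To actually see what gets reclaimed, docker system df makes a nice before-and-after comparison (numbers will obviously vary by machine):
docker system df                  # snapshot: images, containers, volumes, build cache
docker compose --profile dev down
docker system prune -f            # unused containers, networks, dangling images
docker volume prune -f            # unused anonymous volumes; the named redis-data volume survives
docker system df                  # compare the RECLAIMABLE column against the first snapshot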
I also used DeepSearch on X: “Find X posts on Docker Compose cleanup in 2025.” I summarized a post in day_15_notes.txt: “@docker shares: ‘Automate cleanup with docker system prune in CI/CD pipelines for 2025 efficiency. Saves disk space!’” I tested a stress scenario by lowering web memory to 64M, ran 10 curl requests, and checked logs for OOM errors. Spoiler: It held up, but barely!
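For anyone repeating that stress test, the clearest signal isn't the app logs but the container state itself, since Docker records whether the kernel OOM-killed a process. A sketch of the check I'd run, with 64M being the experimental limit from above:
# temporarily set the web memory limit to 64M in docker-compose.yml, then:
docker compose --profile dev up -d --build
for i in $(seq 1 10); do curl -s http://localhost:8080/ > /dev/null; done
docker ps -a --filter "name=web" --format '{{.Names}}' \
  | xargs -I{} docker inspect --format '{{.Name}}: OOMKilled={{.State.OOMKilled}}' {}
# OOMKilled=true means the kernel killed the process for exceeding its memory limit
Worth keeping handy for the next time a limit gets tightened.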