Day 23: Supercharging My Flask App with Advanced Docker Compose Logging


Why Advanced Logging?

Logs are the heartbeat of any app—they tell you what’s working, what’s broken, and sometimes, what’s just being dramatic. But unmanaged logs can bloat your disk or make debugging a nightmare. Today’s goal? Enhance my Flask + Redis app with log rotation to save space, optimize Fluentd for centralized logging, and use JSON formatting for easy analysis.

Enhancing the Flask App with Logging Magic

Step 1: Log Rotation and JSON Logging

I navigated to my Flask + Redis app directory:

cd ~/tmp/flask-redis-app

Then, I opened docker-compose.yml with nano to add log rotation for the web-prod service:

web-prod:
  # ... existing config ...
  logging:
    driver: json-file
    options:
      max-size: "10m"  # Rotate at 10MB
      max-file: "3"    # Keep 3 files
      compress: "true" # Compress logs

These json-file options rotate logs at 10MB, keep only three files, and compress the rotated ones to save space. For the web and redis services, I optimized the Fluentd driver settings:

web:
  # ... existing config ...
  logging:
    driver: fluentd
    options:
      fluentd-address: fluentd:24224
      tag: flask.app
      fluentd-buffer-limit: "512k"  # Smaller buffer
      fluentd-retry-wait: "500ms"   # Faster retries
      fluentd-async: "true"         # Async logging
redis:
  # ... existing config ...
  logging:
    driver: fluentd
    options:
      fluentd-address: fluentd:24224
      tag: redis.app
      fluentd-buffer-limit: "512k"
      fluentd-retry-wait: "500ms"
      fluentd-async: "true"

These settings reduce buffer size, speed up retries, and enable asynchronous logging for better performance. Next, I updated fluent.conf for JSON output:

nano fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.app>
  @type file
  path /fluentd/log/app.log
  format json
  time_format %Y-%m-%dT%H:%M:%S%z
  compress gzip
  <buffer>
    timekey 1h
    timekey_wait 10m
    flush_interval 10s
  </buffer>
</match>

This config outputs logs in JSON, rotates them hourly, and compresses them. It’s like giving my logs a tidy, machine-readable wardrobe!
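Fluentd is fussy about syntax, so a dry run can catch typos before they bite. A minimal sketch, assuming the fluentd service is built from the official fluent/fluentd image (which reads /fluentd/etc/fluent.conf by default, with the edited fluent.conf mounted there) and is already running:

docker compose exec fluentd fluentd --dry-run -c /fluentd/etc/fluent.conf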

Step 2: Structured Logging in Flask

To make my Flask app Fluentd-friendly, I updated app/app.py:

nano app/app.py
import os
import json
import logging
from flask import Flask
from redis import Redis, RedisError
from prometheus_client import Counter, Histogram, generate_latest
app = Flask(__name__)
redis_host = os.getenv('REDIS_HOST', 'redis-service')
app_title = os.getenv('APP_TITLE', 'Default App')
app.config['DEBUG'] = os.getenv('FLASK_ENV', 'production') == 'development'
# Structured JSON logging (single handler so each line is emitted exactly once)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('{"time":"%(asctime)s","level":"%(levelname)s","message":"%(message)s"}'))
logger.addHandler(handler)
logger.propagate = False  # skip the root logger, which would otherwise print a second, plain-text copy
logger.info(f"Starting Flask app with instance ID: {os.getpid()}")
# ... rest of app.py ...

This adds JSON-structured logging, making logs easier to parse with Fluentd. I committed my changes:

git add docker-compose.yml fluent.conf app/app.py
git commit -m "Add advanced logging with rotation and JSON"
git push origin main
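As a quick aside, the formatter string can be sanity-checked without touching Docker at all. A throwaway one-liner (assumes python3 on the host; the timestamp in the sample output is illustrative):

python3 -c 'import logging; h = logging.StreamHandler(); h.setFormatter(logging.Formatter("{\"time\":\"%(asctime)s\",\"level\":\"%(levelname)s\",\"message\":\"%(message)s\"}")); log = logging.getLogger("check"); log.addHandler(h); log.setLevel(logging.INFO); log.info("hello")'
# {"time":"2025-01-01 12:00:00,123","level":"INFO","message":"hello"}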

Step 3: Testing the Setup

Time to see it in action! I started the dev profile:

docker compose --profile dev up -d --build --wait

Generated some logs:

for i in {1..10}; do curl http://localhost:8080; done

Checked Fluentd logs:

docker compose logs fluentd

Boom! JSON-formatted logs appeared, neat and tidy. I inspected Fluentd’s log files:

docker compose exec fluentd ls /fluentd/log

Saw app.log.*.gz files—compressed and rotated as planned. For the prod profile:

docker compose --profile prod up -d --build
docker compose logs web-prod

The web-prod logs came out JSON-formatted and are set to rotate at 10MB. I cleaned up with ./cleanup.sh and jotted a quick summary in day_23_notes.txt: “Advanced logging saves disk space with rotation, enables analytics with JSON, and centralizes logs via Fluentd. It’s scalable, efficient, and perfect for debugging multi-container apps.”
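Those app.log.*.gz files can also be read in place, without copying them out of the container. A sketch for next time (assumes the fluentd container is still running and its Alpine base ships busybox zcat):

docker compose exec fluentd sh -c 'zcat /fluentd/log/app.log.*.gz | head -n 5'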

Step 4: Logging Commands Practice

Next, I practiced Docker Compose logging commands. I restarted the dev profile:

docker compose --profile dev up -d

Tailed Fluentd logs:

docker compose logs fluentd --follow

Hit http://localhost:8080 to generate logs and watched them stream in real time. I checked the logging config:

docker compose config | grep -A 5 logging

This confirmed my fluentd and json-file setups. For web-prod, I inspected log files:

docker compose --profile prod up -d
docker compose exec web-prod ls /var/log
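One thing worth remembering here: the json-file driver writes to the Docker host (under /var/lib/docker/containers), not inside the container, so docker compose logs with --tail and --since is usually the quicker way to slice web-prod’s recent output. A sketch using Compose v2 flags:

docker compose logs web-prod --tail=20 --since=5m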

Grok helped clarify: I asked, “Explain Docker Compose logging commands in 100 words.” Grok replied that docker compose logs displays service logs, --follow streams them, and config | grep logging shows configurations. Simple yet powerful!

Step 5: Grok DeepSearch and Think Mode

I crafted some Grok queries to deepen my understanding:

  • “What are 2025 best practices for Docker Compose advanced logging? (150 words)” Grok suggested centralized logging with Fluentd, JSON formatting, and rotation for disk efficiency—aligned with my setup!

  • “How does Fluentd optimize Docker logging? (100 words)” Grok explained Fluentd’s buffering, async logging, and scalability, perfect for my app.

In think mode, I asked, “How does Docker Compose handle advanced logging step-by-step?” Grok outlined configuring drivers, applying settings, and monitoring—super clear!

Step 6: Mini-Project: Logging Overload

For fun, I simulated a logging overload by modifying app/app.py:

# Temporary change: emit a burst of 100 log lines to stress Fluentd
for i in range(100):
    logger.info(f"Log overload test {i}")

Ran it:

docker compose --profile dev up -d --build
curl http://localhost:8080
docker compose logs fluentd

Fluentd handled the flood like a champ! I reverted the code and committed the change.
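One extra check that pairs nicely with a flood test like this is grepping Fluentd’s own output for buffer or retry warnings, since that is where back-pressure shows up first. A sketch:

docker compose logs fluentd | grep -iE 'retry|buffer|overflow'

If retry or buffer warnings do appear, that is the cue to revisit the fluentd-buffer-limit and fluentd-retry-wait values from Step 1.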
