PostgreSQL Backup in Docker: How I Automated `pg_dump` the Safe Way


If you're running PostgreSQL inside a Docker container, you’ve probably wondered:
“How do I back this up safely without stopping my service?”

I’ve been there.
And after trying a few different methods, I settled on something that works without disrupting the container at all — and still gives me full control of the backup output.


🧠 The Real-World Problem

When PostgreSQL is containerized:

  • It's running 24/7 inside Docker.

  • The data lives in a Docker volume.

  • You can’t easily extract files from inside the container.

At first, I thought I could just `docker stop` and tar the `pg_data` folder.
But on a live production server? That’s a no-go.

So I turned to `pg_dump`, PostgreSQL's standard logical backup tool.
It works great… but then comes the real question:

Where do you store the backup file so it’s accessible from outside the container?


✅ My Strategy in Two Steps

  1. Run pg_dump inside the container

  2. Mount a shared directory from host to container

This lets the container write the backup file to a location the host machine can immediately access.


🔧 Setup Guide

Step 1: Update your `docker-compose.yml`

services:
  postgres:
    image: postgres:15
    container_name: my_postgres
    environment:
      POSTGRES_PASSWORD: YourPassword  # the official image won't start without it
    volumes:
      - pg_data:/var/lib/postgresql/data
      - /home/backup:/backup  # 🔥 Add this line

volumes:
  pg_data:  # declare the named volume so Compose accepts it

Now, /backup inside the container maps to /home/backup on the host.
So your backup lands directly on your server's filesystem.
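
Before scripting anything, it's worth a quick sanity check that the mount is live. Assuming Docker Compose v2 (the docker compose subcommand):

docker compose up -d postgres            # recreate the container with the new mount
docker exec my_postgres ls -ld /backup   # confirm /backup exists inside it

If ls shows the directory, you're good to go.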


Step 2: Bash Script for Backup Automation

#!/bin/bash
set -euo pipefail

CONTAINER_NAME="my_postgres"
DB_NAME="your_db"
DB_USER="postgres"
BACKUP_DIR="/backup"  # path inside the container (maps to /home/backup on the host)
DATE=$(date +"%Y%m%d_%H%M")
FILENAME="pg_backup_$DATE.sql"

# pg_dump writes the file inside the container; thanks to the bind mount,
# it appears on the host at /home/backup immediately.
docker exec -e PGPASSWORD='YourPassword' "$CONTAINER_NAME" \
  pg_dump -U "$DB_USER" -d "$DB_NAME" -f "$BACKUP_DIR/$FILENAME"
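
A dump you can't restore is just a large text file, so I also spot-check backups against a throwaway database. A minimal sketch, assuming local connections as the postgres user are allowed and using a made-up filename:

# create a scratch database and load the dump into it
docker exec -e PGPASSWORD='YourPassword' my_postgres createdb -U postgres restore_test
docker exec -e PGPASSWORD='YourPassword' my_postgres \
  psql -U postgres -d restore_test -f /backup/pg_backup_20250101_0300.sql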

Tip: Automate this script using crontab for daily or weekly backups.
Schedule during off-hours to minimize CPU impact.
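
For example, this crontab entry runs the script every night at 3 AM; the script path is just where I happen to keep mine:

# edit with: crontab -e
0 3 * * * /home/backup/pg_backup.sh >> /home/backup/pg_backup.log 2>&1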


🔒 Best Practices & Gotchas

  • Avoid putting PGPASSWORD directly in scripts. Use a .pgpass file or a secrets manager like HashiCorp Vault (see the sketch after this list).

  • Remember: pg_dump is a logical backup. It's great for disaster recovery, but it's not a physical, block-level copy and won't serve as a replication source.

  • Large tables? Shift backups to low-traffic hours; pg_dump can spike CPU usage.
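
On the .pgpass point, here's a minimal sketch of the setup inside the container. The format is hostname:port:database:username:password, libpq ignores the file unless it's chmod 600, and I'm assuming docker exec runs as root (so the file belongs in /root):

# one-time setup; the credentials are placeholders
docker exec my_postgres bash -c \
  'echo "localhost:5432:your_db:postgres:YourPassword" > /root/.pgpass && chmod 600 /root/.pgpass'

# after this, the backup script can drop the -e PGPASSWORD=... flag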


🎯 Why This Approach Works

  • No need to pause or restart the container.

  • The backup is stored directly on the host.

  • You can easily compress the dump, upload it to the cloud, or sync it to external storage (see the sketch below).
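
For that last point, a rough sketch assuming gzip and rsync on the host, with backup-host as a placeholder for wherever your offsite copies live:

# compress dumps older than a day, then mirror the directory off-box
find /home/backup -name 'pg_backup_*.sql' -mtime +0 -exec gzip {} \;
rsync -av /home/backup/ backup-host:/srv/pg_backups/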

It’s not perfect. But in production?
A backup that runs safely and consistently is better than a perfect one that’s fragile.

This method has kept my data safe, my service running, and my ops team stress-free.
