Day 9: Docker Networking, Volumes, and Compose

Table of contents
- Learning points:
- ✅ Challenge 1: Create a custom bridge network and connect two containers (alpine and nginx).
- ✅ Challenge 2: Run a MySQL container with a named volume (mysql_data) and confirm data persistence after container restart.
- ✅ Challenge 3: Create a docker-compose.yml file to launch a Flask API and a PostgreSQL database together.
- ✅ Challenge 4: Implement environment variables in your Docker Compose file to configure database credentials securely.
- ✅ Challenge 5: Deploy a WordPress site using Docker Compose (WordPress + MySQL).
- ✅ Challenge 6: Create a multi-container setup where an Nginx container acts as a reverse proxy for a backend service.
- ✅ Challenge 7: Set up a network alias for a database container and connect an application container to it.
- ✅ Challenge 8: Use docker inspect to check the assigned IP address of a running container and communicate with it manually.
- ✅ Challenge 9: Deploy a Redis-based caching system using Docker Compose with a Python or Node.js app.
- ✅ Challenge 10: Implement container health checks in Docker Compose (healthcheck: section).

Learning points:
🔹 Docker Networking – Different network types (bridge, host, overlay), container communication, and linking containers.
🔹 Docker Volumes – Using volumes and bind mounts to persist data inside containers.
🔹 Docker Compose – Managing multi-container applications with docker-compose.yml.
🔹 Port Mapping & Exposing Services – Connecting applications running in different containers.
🔹 Persistent Storage – Ensuring database and application data survive container restarts.
🔹 Service Scaling – Running multiple replicas of a service using Compose.
LEARN
Mastering Docker: A Comprehensive Guide from Basics to Advanced by Sandip Das
Initial Tasks:
✅ Task 1: Inspect the default Docker network and list all networks:
docker network ls
✅ Task 2: Install Docker Compose - Guide
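For the inspect half of Task 1, docker network inspect prints the default bridge's subnet, gateway, and currently attached containers:
# Show configuration and attached containers for the default bridge network
docker network inspect bridge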
In the modern cloud-native world, understanding Docker networking is crucial for building robust containerized applications. This guide will walk you through 10 practical challenges designed to strengthen your Docker networking skills.
Each challenge is presented with:
🎯 Objective: What we're trying to achieve
🛠️ Solution: Step-by-step commands with explanations
💡 Key Takeaway: Important concepts to remember
🔍 Verification: How to confirm it worked correctly
Let's dive in!
✅ Challenge 1: Create a custom bridge network and connect two containers (alpine and nginx).
🎯 Objective: Create a custom bridge network and connect two containers (alpine and nginx).
🛠️ Solution:
# Step 1: Create a custom bridge network
docker network create my-custom-network
# Step 2: Run an Nginx container on the custom network
docker run -d --name nginx-container --network my-custom-network nginx
# Step 3: Run an Alpine container on the same network
docker run -it --rm --name alpine-container --network my-custom-network alpine sh
# Step 4: Test connectivity from Alpine to Nginx (inside Alpine container)
apk add --no-cache curl
curl nginx-container
🔍 Verification: You should see the Nginx welcome page HTML returned from the curl command, confirming container-to-container communication via DNS resolution.
💡 Key Takeaway: Custom bridge networks in Docker provide:
Automatic DNS resolution between containers
Isolated network segments for better security
Built-in service discovery using container names
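You can confirm both containers are attached to the network (and see their addresses) by inspecting it from the host:
# Lists the subnet, gateway, and attached containers with their IP addresses
docker network inspect my-custom-network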
✅ Challenge 2: Run a MySQL container with a named volume (mysql_data) and confirm data persistence after container restart.
🎯 Objective: Run a MySQL container with a named volume (mysql_data) and confirm data persistence after container restart.
🛠️ Solution:
# Step 1: Create a named volume
docker volume create mysql_data
# Step 2: Run MySQL with the named volume
docker run -d \
  --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=mysecretpassword \
  -e MYSQL_DATABASE=testdb \
  -v mysql_data:/var/lib/mysql \
  mysql:8.0
# Step 3: Connect to MySQL and create test data
docker exec -it mysql-db mysql -uroot -pmysecretpassword
Inside MySQL shell:
USE testdb;
CREATE TABLE test_table (id INT, name VARCHAR(50));
INSERT INTO test_table VALUES (1, 'persistence test');
SELECT * FROM test_table;
EXIT;
# Step 4: Stop and remove the container (but keep the volume)
docker stop mysql-db
docker rm mysql-db
# Step 5: Create a new container using the same volume
docker run -d \
  --name mysql-db-new \
  -e MYSQL_ROOT_PASSWORD=mysecretpassword \
  -v mysql_data:/var/lib/mysql \
  mysql:8.0
# Step 6: Verify data persistence
docker exec -it mysql-db-new mysql -uroot -pmysecretpassword -e "USE testdb; SELECT * FROM test_table;"
🔍 Verification: You should see the previously inserted row (1, 'persistence test'), confirming data persistence.
💡 Key Takeaway: Named volumes:
Persist data independently from container lifecycle
Can be reused across different containers
Are managed by Docker for better portability
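To see where Docker keeps the volume's data on the host, inspect it:
# Shows the volume's driver and host mountpoint (typically under /var/lib/docker/volumes)
docker volume ls
docker volume inspect mysql_data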
✅ Challenge 3: Create a docker-compose.yml file to launch a Flask API and a PostgreSQL database together.
🎯 Objective: Create a docker-compose.yml file to launch a Flask API and a PostgreSQL database together.
🛠️ Solution:
Create a simple Flask application first (in a file called app.py):
from flask import Flask, jsonify
import os
import psycopg2

app = Flask(__name__)

@app.route('/')
def hello():
    return jsonify({"message": "Hello from Flask!"})

@app.route('/db-test')
def db_test():
    conn = psycopg2.connect(
        host=os.environ.get('DB_HOST'),
        database=os.environ.get('DB_NAME'),
        user=os.environ.get('DB_USER'),
        password=os.environ.get('DB_PASSWORD')
    )
    cur = conn.cursor()
    cur.execute('SELECT 1')
    result = cur.fetchone()[0]
    cur.close()
    conn.close()
    return jsonify({"db_connection": "successful", "result": result})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Create requirements.txt:
flask==2.0.1
psycopg2-binary==2.9.1
Create Dockerfile:
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
Create docker-compose.yml:
version: '3'

services:
  api:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DB_HOST=db
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_PASSWORD=postgres
    depends_on:
      - db
    networks:
      - app-network

  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres_data:
Launch the application:
docker-compose up -d
🔍 Verification: Access http://localhost:5000/ to see the Flask welcome message and http://localhost:5000/db-test to confirm database connectivity.
💡 Key Takeaway: Docker Compose:
Defines multi-container applications in a single file
Automatically creates a default network for all services
Manages container dependencies with depends_on
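A quick way to exercise both endpoints from the host, assuming the stack is up and port 5000 is mapped as above:
# Root endpoint, then the database-connectivity endpoint
curl http://localhost:5000/
curl http://localhost:5000/db-test
Keep in mind that depends_on only orders container startup; Postgres may still be initializing for a few seconds, so an immediate /db-test can fail until the database is actually ready (Challenge 10 addresses this with health checks).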
✅ Challenge 4: Implement environment variables in your Docker Compose file to configure database credentials securely.
🎯 Objective: Implement environment variables in your Docker Compose file to configure database credentials securely.
🛠️ Solution:
Create a .env file:
DB_NAME=myapp
DB_USER=appuser
DB_PASSWORD=supersecretpassword
POSTGRES_PASSWORD=supersecretpassword
Update docker-compose.yml:
version: '3'

services:
  api:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DB_HOST=db
      - DB_NAME=${DB_NAME}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
    depends_on:
      - db
    networks:
      - app-network

  db:
    image: postgres:13
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres_data:
Run the application:
docker-compose up -d
🔍 Verification: Check environment variables inside containers:
docker-compose exec api env | grep DB_
docker-compose exec db env | grep POSTGRES_
💡 Key Takeaway: Environment variables:
Keep sensitive information out of version control
Allow different configurations without changing code
Provide a standardized way to configure containerized applications
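To verify the substitution without starting anything, Compose can render the fully resolved file with values from .env interpolated; also keep .env out of version control:
# Keep secrets out of git (assuming the project is a git repository)
echo ".env" >> .gitignore
# Print the compose file with all ${...} variables resolved
docker-compose config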
✅ Challenge 5: Deploy a WordPress site using Docker Compose (WordPress + MySQL).
🎯 Objective: Deploy a WordPress site using Docker Compose (WordPress + MySQL).
🛠️ Solution:
Create a docker-compose.yml file:
version: '3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - wordpress_net

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8080:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress_data:/var/www/html
    networks:
      - wordpress_net

networks:
  wordpress_net:

volumes:
  db_data:
  wordpress_data:
Start the application:
docker-compose up -d
🔍 Verification: Access WordPress at http://localhost:8080 and complete the installation wizard.
💡 Key Takeaway: Docker Compose makes deploying complex applications simple by:
Coordinating dependent services
Managing shared networks
Handling data persistence with volumes
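Before opening a browser, a quick check from the host confirms the site is serving (a fresh install typically redirects to the setup wizard):
# Expect an HTTP response, usually a 302 redirect to /wp-admin/install.php
curl -I http://localhost:8080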
✅ Challenge 6: Create a multi-container setup where an Nginx container acts as a reverse proxy for a backend service.
🎯 Objective: Create a multi-container setup where an Nginx container acts as a reverse proxy for a backend service.
🛠️ Solution:
Create a simple backend service (app.py):
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return "<h1>Hello from Backend Service</h1>"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
Create a Dockerfile for the backend:
FROM python:3.9-slim
WORKDIR /app
RUN pip install flask
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
Create nginx.conf:
server {
listen 80;
location / {
proxy_pass http://backend:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Create docker-compose.yml:
version: '3'

services:
  backend:
    build: .
    networks:
      - app_network

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - backend
    networks:
      - app_network

networks:
  app_network:
Start the application:
docker-compose up -d
🔍 Verification: Access http://localhost to see the backend service response.
💡 Key Takeaway: Reverse proxy configuration:
Allows internal service discovery via container names
Provides a unified entry point for multiple services
Can add SSL termination, load balancing, and caching
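The load-balancing point can be sketched with Compose's --scale flag: because the backend publishes no host port, multiple replicas can coexist, and Nginx round-robins across the IPs that the backend name resolves to. Nginx resolves the name at startup, so restart it after scaling:
# Run three backend replicas behind the same proxy
docker-compose up -d --scale backend=3
# Restart Nginx so it re-resolves "backend" to all replica IPs
docker-compose restart nginx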
✅ Challenge 7: Set up a network alias for a database container and connect an application container to it.
🎯 Objective: Set up a network alias for a database container and connect an application container to it.
🛠️ Solution:
Create docker-compose.yml:
version: '3'

services:
  app:
    image: alpine
    command: sh -c "apk add --no-cache curl && while true; do curl -s primary-db:3306 || curl -s replica-db:3306; sleep 5; done"
    networks:
      app_net:

  database:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      app_net:
        aliases:
          - primary-db
          - replica-db

networks:
  app_net:
Start the containers:
docker-compose up -d
🔍 Verification: Check logs to see connection attempts:
docker-compose logs app
💡 Key Takeaway: Network aliases provide:
Multiple DNS names for the same container
Flexibility for service migration and replacement
Support for legacy applications with hardcoded hostnames
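For containers started outside Compose, the same effect is available from the CLI; a sketch assuming an existing network app_net and a running container mysql-db (both hypothetical names):
# Attach an existing container to a network under an extra DNS alias
docker network connect --alias primary-db app_net mysql-db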
✅ Challenge 8: Use docker inspect to check the assigned IP address of a running container and communicate with it manually.
🎯 Objective: Use docker inspect to check the assigned IP address of a running container and communicate with it manually.
🛠️ Solution:
# Step 1: Run a simple web server
docker run -d --name webserver nginx
# Step 2: Get the container's IP address
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' webserver)
echo "Container IP: $CONTAINER_IP"
# Step 3: Run another container and communicate with the web server by IP
docker run --rm alpine sh -c "apk add --no-cache curl && curl -s $CONTAINER_IP"
🔍 Verification: You should see the Nginx welcome page HTML from the curl command.
💡 Key Takeaway: docker inspect:
Reveals detailed container information including network details
Allows manual communication between containers when DNS isn't available
Works across different networks when combined with network inspection
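When a container sits on a specific user-defined network, the address can also be pulled by network name; a sketch using the my-custom-network and nginx-container from Challenge 1 (note the index function, since hyphenated keys can't be accessed as template fields):
# Print the container's IP on one named network
docker inspect -f '{{(index .NetworkSettings.Networks "my-custom-network").IPAddress}}' nginx-container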
✅ Challenge 9: Deploy a Redis-based caching system using Docker Compose with a Python or Node.js app.
🎯 Objective: Deploy a Redis-based caching system using Docker Compose with a Python app.
🛠️ Solution:
Create app.py:
from flask import Flask, jsonify
import redis
import time

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as e:
            if retries == 0:
                raise e
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return jsonify({
        'message': 'Hello from Flask!',
        'cache_hits': count,
        'cache_status': 'connected'
    })

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Create requirements.txt:
flask==2.0.1
redis==3.5.3
Create Dockerfile:
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
Create docker-compose.yml:
version: '3'

services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
    networks:
      - app-network

  redis:
    image: redis:alpine
    volumes:
      - redis_data:/data
    networks:
      - app-network

networks:
  app-network:

volumes:
  redis_data:
Start the application:
docker-compose up -d
🔍 Verification: Visit http://localhost:5000 multiple times to see the hit counter increment, confirming Redis is working.
💡 Key Takeaway: Redis caching:
Provides fast in-memory data access
Works seamlessly with containerized applications
Benefits from Docker volume persistence
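You can also read the counter straight from Redis to confirm where the state lives:
# Print the current value of the "hits" key inside the Redis container
docker-compose exec redis redis-cli GET hits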
✅ Challenge 10: Implement container health checks in Docker Compose (healthcheck: section).
🎯 Objective: Implement container health checks in Docker Compose (healthcheck: section).
🛠️ Solution:
Create docker-compose.yml with health checks. Note that the condition: service_healthy form of depends_on requires a Compose implementation based on the Compose Specification (Docker Compose v2); the legacy docker-compose 1.x rejected it in version '3' files:
version: '3'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - app-network

  api:
    build: .
    depends_on:
      db:
        condition: service_healthy
    ports:
      - "5000:5000"
    networks:
      - app-network

  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=app
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s
    networks:
      - app-network

networks:
  app-network:

volumes:
  postgres_data:
Start the application:
docker-compose up -d
Check health status:
docker-compose ps
docker inspect --format='{{json .State.Health.Status}}' $(docker-compose ps -q db)
🔍 Verification: The docker inspect command should show "healthy" when the database is ready, and the API container will only start after the database's health check passes.
💡 Key Takeaway: Health checks:
Ensure containers are actually ready, not just running
Allow depends_on to wait for actual service availability
Provide automatic recovery through Docker's restart policies
Can use different strategies (HTTP, CMD, custom scripts) for different services
Make multi-container applications more resilient
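To watch health transitions as they happen, Docker emits health_status events that you can stream:
# Stream health-state changes for all containers (Ctrl+C to stop)
docker events --filter event=health_status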