Mastering Docker Compose: Volumes and Networks for DevOps Engineers
Docker is a powerful tool that has transformed the way we build, deploy, and manage applications. For DevOps engineers, understanding Docker is essential, and today we’re going to explore two crucial concepts: Docker Volumes and Docker Networks. These allow containers to store data persistently and communicate effectively in a multi-container environment.
In this blog, we’ll walk through hands-on tasks that involve creating multi-container applications, sharing data between containers using volumes, and scaling services with Docker Compose. Let’s dive right in!
What are Docker Volumes and Networks?
Before we get to the tasks, let’s take a moment to understand what Docker Volumes and Networks are and why they matter in a DevOps workflow.
Docker Volume: A volume in Docker is a mechanism for containers to store and share data outside of their own filesystem. Volumes ensure that data persists even after containers are stopped or removed, making them ideal for storing databases or files that need to outlive the container’s lifecycle.
Docker Network: Docker’s networking feature allows containers to communicate with each other and the outside world. By creating virtual networks, you can group containers together so they can interact securely and efficiently.
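To get a quick feel for both before the tasks, here's a minimal CLI sketch; the names my_volume, my_network, and the nginx image are placeholders for illustration, not anything the later tasks depend on:
# Create a named volume and a user-defined bridge network
docker volume create my_volume
docker network create my_network
# Run a container that uses both: /data is backed by the volume,
# and other containers on my_network can reach this one by name
docker run -d --name demo --network my_network -v my_volume:/data nginx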
Task 1: Multi-Container Docker Compose Setup
One of Docker's strengths is managing multi-container applications using Docker Compose. Imagine you need to deploy both an application and a database in separate containers but want to bring them up together. Here's how you can do it with a single command using a docker-compose.yml file.
Step 1: Creating the docker-compose.yml File
version: '3'
services:
  app:
    image: my_app_image        # Replace with your application image
    ports:
      - "5000:5000"            # Map host port to container port
    volumes:
      - app_data:/usr/src/app
    depends_on:
      - db                     # Start the database before the app
  db:
    image: postgres:latest     # Using PostgreSQL as an example
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydatabase
    volumes:
      - db_data:/var/lib/postgresql/data  # Persistent data storage for DB
volumes:
  app_data:  # Volume for the application
  db_data:   # Volume for the database
This YAML file defines two services:
App: The application container. The depends_on entry makes Compose start the database first (note that this controls start order only, not whether the database is actually ready to accept connections).
DB: The database container (PostgreSQL in this example), using a named volume so its data persists across container restarts.
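Because Compose puts both services on a shared default network, the app can reach the database by using the service name db as its hostname. As a hedged illustration (DATABASE_URL is an assumed convention, not something defined in the file above), the app's connection string might look like:
# Inside the app container, the hostname 'db' resolves to the database service
DATABASE_URL=postgresql://user:password@db:5432/mydatabase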
Step 2: Managing the Multi-Container Application
- Starting the containers: Run the following command to start both containers in the background:
docker-compose up -d
This spins up both services in detached mode, so they keep running while your terminal stays free.
- Scaling the application: Need more instances of the app? Scale it with this command:
docker-compose up --scale app=3 -d
This command launches 3 replicas of the app service. One caveat: with a fixed host port mapping like "5000:5000", only one replica can bind host port 5000, so scaling will fail with a port conflict; see the sketch below for one way around it.
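A minimal sketch of an alternative ports entry that allows scaling, assuming you can let Docker pick the host ports (run docker-compose ps afterwards to see which ports were assigned):
ports:
  - "5000"  # container port only; Docker assigns a free host port to each replica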
- Viewing container status: Check the status of your containers with:
docker-compose ps
- Viewing logs: To troubleshoot or monitor a specific service, use:
docker-compose logs app
- Stopping and removing containers: When you’re done, stop and remove the containers and networks Compose created with:
docker-compose down
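By default, docker-compose down leaves named volumes in place so your data survives the teardown. To remove the volumes as well, add the -v flag:
docker-compose down -v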
Task 2: Sharing Data Between Containers Using Docker Volumes
In some scenarios, you might need multiple containers to share the same data. Docker Volumes make this possible by allowing containers to read from and write to the same storage area. Here’s how to set it up.
Step 1: Create Containers with Shared Volumes
We’ll create two containers that share a volume using the --mount flag.
# Create the first container and mount the volume
# (a detached busybox container exits immediately without a long-running
# command, so we add 'sleep 3600' to keep it alive for the next steps)
docker run -d --name container1 --mount source=shared_volume,target=/data busybox sleep 3600
# Create the second container and mount the same volume
docker run -d --name container2 --mount source=shared_volume,target=/data busybox sleep 3600
Here, we’ve created two containers, both mounting shared_volume at /data. Any data written by one container will be accessible to the other.
Step 2: Write Data in One Container
Let’s write some data to the shared volume from the first container:
docker exec container1 sh -c "echo 'Hello from container1' > /data/file.txt"
Step 3: Verify Data in the Second Container
Now, let’s verify that the second container can read the data written by the first:
docker exec container2 cat /data/file.txt
The output should display:
Hello from container1
This proves that the volume is successfully shared between the two containers!
Step 4: Listing and Removing Volumes
To view all the volumes created, use:
docker volume ls
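To see where a volume actually lives on the host, you can also inspect it; this prints details such as the volume's mountpoint and driver:
docker volume inspect shared_volume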
Once you’re done, clean up. A volume can’t be removed while containers still reference it, so remove the containers first, then the volume:
docker rm -f container1 container2
docker volume rm shared_volume
Why Volumes and Networks Matter for DevOps
In a DevOps workflow, managing the infrastructure and ensuring seamless communication between services is critical. Docker Volumes help ensure data persistence and allow different containers to share data, while Docker Networks enable secure communication between containers in a virtual environment. By using Docker Compose, we can define these relationships in one place and manage multi-container setups with ease.
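To make the networking half concrete, here's a minimal sketch of two containers talking over a user-defined network; app_net, web, and the nginx image are placeholder choices:
# Create a user-defined bridge network; containers on it resolve each other by name
docker network create app_net
# Start a web server attached to the network
docker run -d --name web --network app_net nginx
# From a second container on the same network, fetch the page by container name
docker run --rm --network app_net busybox wget -qO- http://web
# Clean up
docker rm -f web && docker network rm app_net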
Final Thoughts
Docker Volumes and Networks are essential tools for any DevOps engineer looking to efficiently manage containerized applications. Whether you’re running multiple microservices or just need to store persistent data, mastering these concepts will make your deployments more reliable and scalable.
We hope this blog helped you get hands-on with Docker Volumes and Networks! Keep experimenting and exploring—there’s so much more you can do with Docker.
Happy Dockerizing! 😃