Volumes vs Bind Mounts in Docker: A Practical Guide

Abhishek Balaji

What is a Docker Volume?

A Docker volume is a specially managed storage space created and managed by Docker to persist data outside of the container's writable layer. Unlike data inside a container that gets lost when the container is removed, data in volumes persists independently of the container lifecycle.

Why do we use Docker Volumes?

  1. Data Persistence
    Containers are ephemeral by nature — when you delete or recreate a container, any data inside it is lost. Volumes keep your data safe and persistent even if the container is removed or updated.

  2. Data Sharing
    Volumes allow multiple containers to share and access the same data, useful in scenarios like running web servers and application containers accessing the same files.

  3. Decoupling Data from Containers
    By storing data outside the container filesystem, it’s easier to update or replace containers without affecting the stored data.

  4. Performance & Management
    Volumes are optimized by Docker for performance and are easier to back up, migrate, or manage compared to bind mounts.
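
Points 1 and 2 above can be sketched with a short command sequence. This is only an illustration — the volume name shared-data and the alpine image are examples, and a running Docker daemon is required:

```shell
# Create a named volume managed by Docker.
docker volume create shared-data

# Container 1 writes a file into the volume, then exits and is removed (--rm).
docker run --rm -v shared-data:/data alpine sh -c 'echo "hello" > /data/note.txt'

# Container 2 mounts the same volume and reads the data,
# even though the first container no longer exists.
docker run --rm -v shared-data:/data alpine cat /data/note.txt

# Clean up the demo volume.
docker volume rm shared-data
```

The second container prints the contents written by the first, demonstrating both persistence beyond a container's lifetime and sharing between containers.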

Now, I will walk you through a detailed, step-by-step guide on how to create a Docker volume, attach it to a container, create data inside the volume, and verify data persistence on your system. All the necessary commands are included below. Let’s get started!

I have accessed my virtual machine named “ubuntu” using VMware Workstation.

I have already installed the Docker package on my system. To check the installed Docker version, use the command:

docker --version

If you haven't installed Docker on your Ubuntu system yet, you can easily do so by running the following commands. First, update your package list:

sudo apt update -y

Then, install the Docker package (on Ubuntu, the package is named docker.io):

sudo apt install docker.io -y

These commands will ensure Docker is installed and ready to use on your machine.

For other Linux distributions:

CentOS / RHEL:

sudo yum update -y
sudo yum install docker -y

Fedora:

sudo dnf update -y
sudo dnf install docker -y

Debian:

sudo apt update -y
sudo apt install docker.io -y
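
If the package in your distribution's default repositories is outdated or unavailable (on newer CentOS/RHEL releases, for example, the default repos ship Podman instead of Docker), Docker's official convenience script is an alternative. It detects your distribution and installs the current Docker Engine release; as with any script fetched from the internet, review it before running it:

```shell
# Download Docker's official install script, inspect it, then run it.
curl -fsSL https://get.docker.com -o get-docker.sh
less get-docker.sh
sudo sh get-docker.sh
```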

Step-1: Create a Docker volume.

docker volume create <volume-name>

To list existing Docker volumes, use the command:

docker volume ls

At this point, I have only the volume I just created, with no containers or other Docker resources set up.

My Docker volume has been created successfully. Next, let’s pull an image from Docker Hub.

If you don’t have one already, please create an account on Docker Hub.

Once logged in, use the search bar to find the nginx image. Click on the latest tag, and then copy the command shown on the right side — it should look like this: docker pull nginx:latest

docker pull nginx

I will be using the nginx image for this demonstration, but you can use any Docker image you prefer — the process remains the same.

On the right-hand side of your screen, you’ll see the docker pull nginx command. Please copy it and paste it into your terminal, then press Enter. Docker will start downloading the image from the remote repository to your local machine’s Docker environment.

I have already downloaded the image to my Docker environment. To verify that it was successfully downloaded, use the following command:

docker images

This Docker command will display the list of all Docker images downloaded to your local Docker environment.

Now, we will mount the Docker volume to a directory inside the container.

Note: Docker volumes must be specified at the time of container creation — you cannot attach a volume to a running container.

Use the following command

docker run -it --name <container-name> -v <docker-volume>:/directory <image-name>:<tag>

In this scenario, the command will be:

docker run -it --name test1 -v developer_volume:/user-data nginx:latest
  • docker run: Create and start a container

  • -it: Run interactively with a terminal

  • --name test1: Name the container "test1"

  • -v developer_volume:/user-data: Mount the Docker volume developer_volume to /user-data inside the container

  • nginx:latest: Use the latest version of the nginx image

To view the actively running Docker containers, use the following command:

docker ps

If your container has exited and you want to see it, use the following Docker command:

docker ps -a

This command displays a list of all containers — whether they are running, stopped, or have exited.

To restart your Docker container, use the following command:

docker start <container-name>

# In this case, it will be
docker start test1

Now, let’s access the container’s shell and navigate to the user-data directory. Inside, create some data files, then exit the container. Afterwards, we will verify that the files exist on the system.

To enter the container’s shell, use the following command:

docker exec -it <container-name> /bin/bash

# In my case, it will be
docker exec -it test1 /bin/bash

I have logged in as the root user. Now, to list all files and directories in the current location, use the command:

ls -l

Change directory into user-data and let’s create some data files there:

cd user-data

I created a dummy file named data.txt and added a line of sample text to it, for example:

echo "Hello from the volume" > data.txt

You can view the content using:

cat <file-name>

# In my case, it will be
cat data.txt

Now, exit the Docker container by typing:

exit

To verify the data stored in your Docker volume on the host system, navigate to the volume’s data directory:

cd /var/lib/docker/volumes/<volume-name>/_data

# In my case, 
cd /var/lib/docker/volumes/developer_volume/_data

Before running this command, switch to the root user, since the /var/lib/docker directory is only accessible with root privileges:

sudo su

Now, change into the volume’s data directory:

cd /var/lib/docker/volumes/developer_volume/_data

As you can see, the data is reflected here, confirming that the Docker volume successfully stores the files outside the container.
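
Rather than typing the /var/lib/docker path by hand, you can also ask Docker for the volume's mount point directly (shown here for the developer_volume created above):

```shell
# Print the host path where Docker stores this volume's data.
docker volume inspect --format '{{ .Mountpoint }}' developer_volume
```

On a standard Linux install this typically prints /var/lib/docker/volumes/developer_volume/_data, the same path we navigated to manually.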

It is considered good practice to use Docker volumes instead of bind mounts because volumes are managed and secured by Docker, providing better safety and portability.

Bind mounts

Now that we’ve covered Docker volumes, let’s move on to the next topic: bind mounts.

First, create a directory on your local machine. Use the command:

mkdir <folder-name>

# In my case,
mkdir local-test

Now, go ahead and create a new container by using the command

docker run -it --name <container-name> -v <local-path>:<directory> <image>:<tag>
  • <local-path>: path on your host machine

  • <directory>: path inside the container

  • <container-name>, <image>, <tag> as usual

docker run -it --name c1 -v /home/abhishek-balaji/local-test:/company-data nginx:latest
  • Runs an interactive container named c1

  • Mounts your local directory /home/abhishek-balaji/local-test to /company-data inside the container

  • Uses the nginx:latest image
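
The same bind mount can also be written with the longer --mount syntax, which some find more readable (this is equivalent to the -v command above, using the same paths):

```shell
docker run -it --name c1 \
  --mount type=bind,source=/home/abhishek-balaji/local-test,target=/company-data \
  nginx:latest
```

One behavioral difference worth knowing: with --mount, Docker reports an error if the source directory does not exist, whereas -v silently creates it on the host.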

To view the current path of your local directory, use the command:

pwd

To view the Docker containers that are actively running, use the command:

docker ps

If your Docker container has stopped or exited, use the following command to see all containers:

docker ps -a

To enter the Docker container’s shell, use the following command:

docker exec -it <container-name> /bin/bash

# In my case, it will be
docker exec -it c1 /bin/bash

Inside the container, change into the /company-data directory and create a few test files, just as we did earlier with the volume. When you are done, exit the container by typing:

exit

Back on the host, list the files and directories in your local directory with detailed information, using the command:

ls -l

As you can see, the files created inside the container are reflected in the local directory on your system, confirming that the bind mount is working correctly.

Conclusion: Docker Volumes vs Bind Mounts

Both Docker volumes and bind mounts allow you to persist data generated by containers, but they serve different use cases and come with distinct advantages:

  • Docker Volumes

    • Managed by Docker and stored in Docker’s storage area (/var/lib/docker/volumes).

    • Provide better isolation from the host system, improving security and portability.

    • Recommended for most production environments where data integrity and container portability are important.

    • Easy to back up, migrate, and share across multiple containers.

  • Bind Mounts

    • Directly mount a directory or file from the host machine into the container.

    • Useful for development scenarios where you want real-time access to source code or files on your host system.

    • Provide more control over file locations but can introduce security risks and dependency on the host’s filesystem structure.

    • Less portable since they rely on host paths.

When to Use Each

  • Use Docker volumes for persistent data that needs to be managed by Docker, especially in production, for databases, logs, or application data.

  • Use bind mounts mainly during development for live code updates or when you need to work with files that exist on your host.

Best Practices

  • Prefer Docker volumes for safer, more portable, and manageable data storage.

  • Avoid bind mounts for production workloads due to security and portability concerns.

  • Always define volumes and mounts explicitly during container creation for clarity and consistency.

  • Regularly back up important volumes to prevent data loss.
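
As a sketch of that last point, a volume can be backed up with a throwaway container that archives the volume's contents to the host. The volume and archive names below are illustrative, and the alpine image is assumed:

```shell
# Back up developer_volume into developer_volume.tgz in the current directory.
# The volume is mounted read-only (:ro) so the backup cannot modify it.
docker run --rm \
  -v developer_volume:/data:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/developer_volume.tgz -C /data .

# To restore into a (possibly new) volume, reverse the process.
docker run --rm \
  -v developer_volume:/data \
  -v "$PWD":/backup \
  alpine tar xzf /backup/developer_volume.tgz -C /data
```

This works because mounting a named volume into a helper container gives you ordinary filesystem access to its contents, without touching /var/lib/docker directly.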

Thank you for following along this guide on Docker volumes and bind mounts. Understanding how to manage persistent data in Docker is crucial for building reliable and efficient containerized applications. By choosing the right method—volumes for production-grade data management and bind mounts for flexible development workflows—you can ensure your applications run smoothly and securely. Keep practicing these techniques, and you’ll be well on your way to mastering Docker’s powerful data management features.


Written by

Abhishek Balaji

I'm a cloud and DevOps upskilling candidate focused on building practical skills through real-world AWS projects. I enjoy getting hands-on with core services like EC2, EBS, S3, and IAM, and documenting my learning journey through blog posts and GitHub repositories. Every project I complete is a step toward mastering cloud fundamentals and developing automation skills that align with DevOps practices. My goal is to grow into a confident, capable engineer who can design and manage scalable infrastructure. GitHub: https://github.com/abhishek-balaji-2025