Docker Best Practices and Anti-Patterns

Umair
10 min read

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow developers to package an application with all of its dependencies into a standardized unit for software development. However, to get the most out of Docker, it's important to follow best practices and avoid common pitfalls, known as anti-patterns.

Docker Best Practices

Use a .dockerignore File

Just like a .gitignore file, a .dockerignore file prevents unnecessary files from being sent to the build context and copied into the image. This can significantly reduce the build time and the size of the image.

Create a .dockerignore file in the same directory as your Dockerfile with the following content:

node_modules
npm-debug.log

This excludes the node_modules directory and the npm-debug.log file.

Minimize the Number of Layers

Each RUN, COPY, and ADD instruction in the Dockerfile adds a new layer to the image, increasing its size. By minimizing the number of layers, you can reduce the overall size of the image.

Instead of writing each command as a separate RUN instruction, as shown below:

RUN apt-get update
RUN apt-get install -y package-1
RUN apt-get install -y package-2

You can chain the commands in a single RUN instruction with the && operator, using \ for line continuation:

RUN apt-get update && apt-get install -y \
    package-1 \
    package-2

Use Multi-Stage Builds

Multi-stage builds allow you to use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.

# First stage
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html  
COPY app.go    .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

# Second stage
FROM alpine:latest  
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]

In this Dockerfile, the first stage uses a Golang base image and builds an application. The second stage uses an alpine base image and copies only the compiled application from the first stage.
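
To try it, build the image as usual; only the final stage's contents end up in the tagged image (the tag name here is illustrative):

docker build -t href-counter .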

Use Specific Tags for Images

It's important to use specific tags instead of the latest tag, to ensure that your builds are deterministic. This way, you can avoid situations where different members of your team are using different versions of the same image.

Instead of using:

FROM debian:latest

You should use:

FROM debian:10.1

Don't Run Processes as Root

By default, Docker runs container processes as root. This is a security risk and should be avoided. In your Dockerfile, you can specify a non-root user like this:

RUN groupadd -r app && useradd -r -g app app
USER app
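
If the application also needs to own its files, the --chown flag on COPY can assign them to that user in the same step (a small sketch; the /app path is illustrative):

COPY --chown=app:app . /app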

Use COPY Instead of ADD

While both COPY and ADD copy files from the host machine into the Docker image, they behave slightly differently. COPY just copies the files, whereas ADD has extra features such as local-only tar extraction and remote URL support. These features can lead to unexpected results, so it's best to use COPY unless you specifically need them.

Instead of using:

ADD source_directory /destination_directory

You should use:

COPY source_directory /destination_directory

Keep Your Images Updated

Regularly update the packages inside your Docker images; outdated packages may contain vulnerabilities that increase the attack surface of your containers.

In your Dockerfile, you can update packages like this:

RUN apt-get update && apt-get upgrade -y

Use Official Images

When possible, it's better to use official images because they are maintained by dedicated teams and the Docker community. They are usually optimized and secure.

Instead of using:

FROM some-unknown-image

You should use:

FROM python:3.8-slim

Always Define a Health Check

A HEALTHCHECK instruction in your Dockerfile lets Docker determine whether the application inside the container is actually healthy, which is useful for auto-healing and orchestration. The example below uses curl, so make sure it is installed in the image.

HEALTHCHECK --interval=5m --timeout=3s \
  CMD curl -f http://localhost/ || exit 1

Don’t Store Data in Containers

Containers are meant to be ephemeral and disposable, which means that data stored in a container will be lost when the container is removed. You should use Docker volumes or bind mounts to store data.

docker run -d --name mysql-db -v /data/mysql:/var/lib/mysql mysql

Use Environment Variables for Configuration

Docker allows you to pass configuration details as environment variables, which keeps your containers environment-agnostic.

docker run -d --name my-app -e "APP_ENV=production" my-app-image
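
You can also set a default in the Dockerfile with ENV (APP_ENV here is the same illustrative variable as above):

ENV APP_ENV=development

Any value passed with -e at run time, as in the command above, overrides this default.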

Label Your Images

Labels are a great way to organize Docker images. They can hold arbitrary metadata and can be used with Docker CLI filtering.

Example:

LABEL version="1.0"
LABEL description="This is an example image"
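
You can then filter images by label with the Docker CLI, for example:

docker images --filter "label=version=1.0"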

One Process Per Container

This principle is part of the microservices approach. Keeping each container to a single process will make it easier to scale and reuse containers.

You should avoid:

CMD service1 && service2

Instead, you should do:

CMD ["service1"]

Log to Stdout and Stderr

Containers should not store or manage log files. Instead, just log to stdout and stderr and let the Docker daemon handle the storage.

In your application code, you should do:

import sys
print("This is a log message", file=sys.stderr)

Use Build-time Arguments

Docker allows you to define build-time arguments with ARG; they are available only while the image is being built.

ARG version
FROM busybox:$version

And then build the Docker image with the build-arg option:

docker build --build-arg version=1.30.1 .
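
Note that an ARG declared before FROM lives outside the build stages; if you also need the value inside a stage, re-declare it after FROM (a small sketch):

ARG version
FROM busybox:$version
ARG version
RUN echo "building with busybox $version"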

Avoid Unnecessary Packages

Keep your Docker images as lean as possible by only installing the necessary packages. Avoid doing the following:

RUN apt-get install -y package1 package2 package3 package4

Instead, do this:

RUN apt-get install -y package1 package2

Be Mindful of the Cache

Docker uses a cache mechanism when building images. If a layer hasn’t changed, Docker will reuse it from the cache. This means you should order the Dockerfile instructions from the least likely to change to the most likely to change.

Instead of writing:

COPY . /app
RUN pip install -r requirements.txt

You should write:

COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /app

Use Docker's Content Trust

Docker Content Trust is a security feature that provides the ability to use digital signatures for data sent to and received from remote Docker registries. This feature ensures the integrity and publisher of images.

To enable Docker Content Trust:

export DOCKER_CONTENT_TRUST=1

Clean-up After Installation

Don't leave a mess in your Docker image; clean up after installing packages to reduce the size of the image.

RUN apt-get update && apt-get install -y package-1 package-2 && \
    rm -rf /var/lib/apt/lists/*

Use docker system prune Regularly

This command removes all stopped containers, all networks not used by at least one container, all dangling images, and the build cache; with the -a flag it also removes any image not used by an existing container. This helps you save space and keep your Docker environment clean.

docker system prune -a

Utilize Docker Secrets

Docker secrets provide a secure way to store sensitive data like passwords and API keys. Note that the docker secret commands require Swarm mode.

echo "This is a secret" | docker secret create my_secret_data -

Define Resource Limits

Containers can sometimes grab all the available resources, starving other containers or applications on the host. Defining resource limits prevents a single container from consuming everything.

docker run -it --cpus=".5" --memory="512m" ubuntu

Use Lightweight Base Images

Where possible, use lightweight base images such as Alpine; this can drastically reduce the size of your final Docker images. Remember to pin a specific tag, as discussed above.

FROM alpine:3.19

Regularly Update Docker Version

Regularly updating Docker ensures that you have the latest features and security fixes.

sudo apt-get update
sudo apt-get install --only-upgrade docker-ce

Shared Mounts for Container Communication

If multiple containers need to access the same data, use a shared named volume (or bind mount) so they all see the same files.

docker run -d -v shared_data:/data container1
docker run -d -v shared_data:/data container2

Docker Anti-Patterns

Running Containers With the :latest Tag

The :latest tag is applied to the latest build pushed to a repository, but it doesn't necessarily mean the most 'stable' or 'tested' build. It's better to use version-specific tags.

docker run ubuntu:latest

Using a Single Layer

Bundling all of your application code and dependency installation into a single layer leads to larger images and longer build times, because any code change invalidates the cache and forces the dependencies to be reinstalled. It's better to split dependency installation and application code into separate layers.

FROM ubuntu:latest
ADD . /app
RUN apt-get update && apt-get install -y python3 python3-pip && \
    pip3 install -r /app/requirements.txt

Running Everything as Root

Running containers as root is a security risk: if the process is compromised, the attacker has root privileges inside the container and a much better chance of escalating to the host. It's better to run containers as non-root users.

FROM ubuntu:latest
USER root
CMD ["do-something"]

Not Using .dockerignore Files

Not using .dockerignore files will increase build context size and may include sensitive data in the image. It's better to include a .dockerignore file in your project.
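
A minimal .dockerignore that keeps VCS history, dependencies, and local secrets out of the build context might look like this (entries are illustrative):

.git
node_modules
.env
*.log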

Using Mutable Tags

Using mutable tags (like :latest) in your Dockerfile can lead to inconsistent builds. It's better to use immutable tags.

FROM ubuntu:latest

Installing Unnecessary Packages

Installing unnecessary packages will lead to larger image sizes and may increase the attack surface of your containers. It's better to only install the necessary packages.

FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl wget vim emacs

Not Removing Cache

Not removing cache files after installing packages will lead to larger image sizes. It's better to clean up after installing packages.

FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl

Not Specifying Version in FROM instruction

Not specifying a version in the FROM instruction can lead to inconsistent builds. It's better to specify a version.

FROM ubuntu

Ignoring Failed Health Checks

Ignoring failed health checks can lead to running unhealthy containers. It's better to act on failed health checks rather than ignoring the status reported by:

docker inspect --format='{{json .State.Health.Status}}' my_container

Leaving Container Ports Open

Publishing ports indiscriminately (for example with -P, which publishes every exposed port) exposes your application to unnecessary risk. Map only the specific ports you need:

docker run -d -p 80:80 my_image

Using SSH to Connect to Containers

SSH is not necessary for connecting to Docker containers. Use docker exec instead.

ssh root@container_ip
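
Instead, open a shell in the running container with docker exec (the container name is illustrative):

docker exec -it my_container /bin/sh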

Relying on IP Addresses for Service Discovery

Containers are ephemeral and their IP addresses can change. It's better to use Docker's DNS service discovery features instead of relying on the IP address of a container for communication.
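
On a user-defined network, containers can reach each other by name through Docker's built-in DNS; a brief sketch with illustrative names:

docker network create app-net
docker run -d --name db --network app-net -e MYSQL_ROOT_PASSWORD=example mysql
docker run -d --name web --network app-net my-app-image

Here the web container can reach the database at the hostname db instead of a hard-coded IP address.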

Hardcoding Configuration in Dockerfile

Hardcoding configuration values (especially secrets) in the Dockerfile makes your application less flexible and harder to maintain. Pass them in at run time as environment variables or secrets instead of baking them in:

ENV API_KEY="your-api-key"

Building Images on Production Servers

Building images on production servers can affect the performance of your application. It's better to build images in a CI/CD pipeline and then deploy them to production.

Running docker build on a production server.

Running Containers Without Restart Policies

Without a restart policy, containers might not restart after a failure or after the host system restarts. Use a restart policy to ensure your containers are always running.

docker run my_image
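
Instead, set a restart policy, for example:

docker run -d --restart unless-stopped my_image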

Using the Host Network

Using the host network can cause port conflicts and it makes your container less isolated. It's better to use a user-defined network instead.

docker run --network host my_image
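
A user-defined bridge network keeps the container isolated while still letting you publish exactly the ports you need (the network name is illustrative):

docker network create my-net
docker run -d --network my-net -p 8080:80 my_image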

Running Unnecessary Background Processes

Running unnecessary background processes consumes resources and makes your containers less focused. Only run the necessary processes for your application.

Running a cron job in a container that doesn't need it.

Ignoring Docker Security Best Practices

Ignoring Docker security best practices can lead to vulnerabilities in your containers. It's better to follow Docker security best practices.

For example: running everything as root, skipping Docker Content Trust, or leaving resource limits undefined.

By following these Docker best practices and avoiding the Docker anti-patterns, you can ensure that you are effectively using Docker in your development and deployment processes.
