Mastering Docker Buildx and BuildKit: Transitioning from Legacy Builds to Multi-Platform Magic
Introduction:
If you've been building Docker containers for a while, you’ve likely come across this message:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release. Install the buildx component to build images with BuildKit.
This signals a big shift in how Docker images are built. Docker’s legacy builder has served us well, but as containerized applications become more complex, the need for advanced build features is greater than ever. Enter Docker Buildx, a modern and flexible tool that leverages BuildKit under the hood.
In this guide, we’ll walk you through why Docker Buildx is a must-have for any modern developer and how you can start using it today to supercharge your Docker builds. We’ll cover everything from setting it up to using parallel builds and multi-platform architecture. By the end of this tutorial, you’ll be fully equipped to leave the legacy builder behind and unlock a whole new set of capabilities.
1. Why Move to Docker Buildx?
Docker Buildx introduces a host of powerful features that make it far superior to the old builder. Here’s why developers are making the switch:
Multi-Platform Builds: Build images that work seamlessly across different CPU architectures (e.g., amd64, arm64) in a single workflow.
Improved Caching: Efficient caching mechanisms drastically speed up build times by reusing unchanged layers.
Parallel Builds: Build multiple stages of a Dockerfile in parallel, reducing the overall build time.
Advanced Dockerfile Syntax: Unlock more advanced Dockerfile capabilities such as conditional logic and custom frontends.
The legacy builder doesn’t offer these modern capabilities. Buildx, powered by BuildKit, is designed for a more dynamic container ecosystem. Now that we know why Buildx is essential, let’s see how to enable and use it.
2. Setting Up Docker Buildx
Before jumping into the advanced features, let’s set up Docker Buildx. BuildKit is the default builder on Docker Desktop and Docker Engine v23.0 and later, so on those versions there is nothing extra to enable. If you are running a Docker Engine version earlier than 23.0, you can enable BuildKit either by setting an environment variable or by making BuildKit the default in the daemon configuration.
Step 1: Verify Buildx Installation
Run the following command to check if Buildx is installed on your system:
docker buildx version
If Buildx is installed, this command will return the version number. If not, you’ll need to install it.
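On a system where Buildx is installed, the command prints a single line roughly like the following (the version number and commit hash are illustrative and will differ on your machine):

```
github.com/docker/buildx v0.14.0 171fcbeb69d67c90ba7f44f41a9e418f6a6ec1da
```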
Step 2: Enable Docker BuildKit
Buildx uses BuildKit for advanced build functionalities. You can enable BuildKit by setting the following environment variable:
export DOCKER_BUILDKIT=1
Alternatively, you can add this to your Docker daemon configuration file (/etc/docker/daemon.json) to enable it permanently:
{
"features": {
"buildkit": true
}
}
Restart Docker after editing the configuration:
sudo systemctl restart docker
Step 3: Create and Use a New Buildx Builder
Now, create a new builder instance to use Buildx:
docker buildx create --name demo-buildx-builder --driver docker-container \
--use --platform linux/arm64,linux/arm/v7
This command creates a new builder named demo-buildx-builder using the docker-container driver and, because of the --use flag, sets it as the default builder.
To check the builders currently in use, run:
docker buildx ls
This confirms that your builder is active, and you’re now ready to start using Docker Buildx with BuildKit.
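The output of docker buildx ls will look roughly like the listing below. The exact names, BuildKit versions, and platform lists will differ on your system; the asterisk marks the currently selected builder:

```
NAME/NODE                 DRIVER/ENDPOINT              STATUS    BUILDKIT   PLATFORMS
demo-buildx-builder*      docker-container
  demo-buildx-builder0    unix:///var/run/docker.sock  running   v0.13.2    linux/arm64, linux/arm/v7
default                   docker
  default                 default                      running   v0.13.2    linux/amd64, linux/386
```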
3. Understanding How BuildKit Works and Its Components
3.1 LLB (Low-Level Build)
LLB stands for Low-Level Build, which is essentially an intermediate representation of the build process. Think of LLB as a more detailed and structured version of the build instructions that Docker BuildKit uses internally. It breaks down Dockerfile instructions into a format that can be optimized and executed efficiently.
How LLB Works in BuildKit:
When you run docker buildx build with a Dockerfile, BuildKit first translates that Dockerfile into an LLB graph. The LLB graph represents the various stages, layers, and dependencies in a clear, defined structure that BuildKit uses to optimize the build.
LLB is designed to be reusable and cacheable, meaning that BuildKit can skip unnecessary steps by recognizing unchanged parts of the build process and using cached results.
LLB makes the entire process more declarative and modular, giving BuildKit the ability to distribute and parallelize the build steps efficiently.
Example:
For instance, if your Dockerfile has this instruction:
RUN apt-get update && apt-get install -y curl
BuildKit will break it down into individual steps (like running apt-get update, then apt-get install), represent them in the LLB graph, and manage how the cache interacts with each step to avoid redundant work in future builds.
LLB can be generated directly using a Go client package that lets you define the relationships between your build operations using Go language primitives. This gives you full power to run anything you can imagine, but it is probably not how most people will define their builds. Instead, most users rely on a frontend component, or LLB nested invocation, to run a prepared set of build steps.
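As a sketch of what that Go client looks like, the snippet below uses BuildKit's github.com/moby/buildkit/client/llb package to express the same two-step apt-get example as an LLB graph and print the marshalled definition to stdout, where it could be piped into buildctl build. The base image and steps are illustrative; this is a minimal sketch of the API, not a complete build pipeline, and it requires the moby/buildkit Go module.

```go
package main

import (
	"context"
	"os"

	"github.com/moby/buildkit/client/llb"
)

func main() {
	// Start from a base image and chain two exec operations,
	// mirroring `RUN apt-get update && apt-get install -y curl`.
	st := llb.Image("docker.io/library/ubuntu:22.04").
		Run(llb.Shlex("apt-get update")).
		Run(llb.Shlex("apt-get install -y curl")).
		Root()

	// Marshal the state into the LLB definition: the graph of
	// vertices that BuildKit's solver caches and executes.
	def, err := st.Marshal(context.Background(), llb.LinuxAmd64)
	if err != nil {
		panic(err)
	}

	// Write the graph to stdout, e.g. to pipe into `buildctl build`.
	llb.WriteTo(def, os.Stdout)
}
```

Each chained call adds a vertex to the graph rather than executing anything, which is what lets BuildKit deduplicate, cache, and parallelize the steps later.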
3.2 Frontend in BuildKit
In Docker BuildKit, the frontend is responsible for converting high-level build definitions (like a Dockerfile) into the LLB representation that BuildKit can process. Essentially, it acts as the translator between the human-readable Dockerfile and the machine-readable LLB.
Key Points About Frontend:
The default frontend is the Dockerfile, meaning BuildKit is designed to interpret standard Dockerfiles and convert them into LLB.
You can also use custom frontends to interpret non-Dockerfile build definitions. Because a frontend is just a component that emits LLB, BuildKit can be extended with alternative formats or domain-specific languages designed for specific build optimizations.
The flexibility of frontends means you can extend BuildKit to support different formats or even custom build logic, beyond the standard Dockerfile.
BuildKit supports loading frontends dynamically from container images. To use an external Dockerfile frontend, the first line of your Dockerfile needs to set the syntax directive pointing to the specific image you want to use:
# syntax=[remote image reference]
For example:
# syntax=docker/dockerfile:1
# syntax=docker.io/docker/dockerfile:1
# syntax=example.com/user/repo:tag@sha256:abcdef...
You can also use the predefined BUILDKIT_SYNTAX build argument to set the frontend image reference on the command line:
docker build --build-arg BUILDKIT_SYNTAX=docker/dockerfile:1 .
Custom Dockerfile implementations allow you to:
Automatically get bug fixes without updating the Docker daemon
Make sure all users are using the same implementation to build your Dockerfile
Use the latest features without updating the Docker daemon
Try out new features or third-party features before they are integrated in the Docker daemon
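As a concrete illustration of "using the latest features without updating the daemon", the Dockerfile below pins the docker/dockerfile:1 frontend and uses heredoc syntax, a feature that shipped in the frontend image (dockerfile 1.4 and later) rather than in the Docker daemon itself. Treat it as a minimal sketch:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.18
# Heredocs let a single RUN carry a readable multi-line script;
# support comes from the frontend image pinned above, not the daemon.
RUN <<EOF
apk add --no-cache curl
curl --version
EOF
```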
Tutorial: Using Docker BuildKit for Multi-Platform Builds and Parallel Build Stages
Multi-platform images
A multi-platform build refers to a single build invocation that targets multiple operating system and CPU architecture combinations. When building images, this lets you create a single image that can run on multiple platforms, such as linux/amd64, linux/arm64, and windows/amd64.
Let's build a multi-platform application that writes the build and target architecture details to a file and outputs that file's content.
Below is the Dockerfile for the multi-platform app:
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log
CMD ["/bin/sh","-c","cat /log"]
Building the multi-platform image
When building the image, specify the list of platforms you want to build the Docker image for. Replace the username gkemhcs with your Docker Hub username:
docker buildx build -t gkemhcs/echo:v1 --push --platform linux/amd64,linux/arm64 --builder demo-buildx-builder -f Dockerfile .
Run the Docker container:
docker container run -it gkemhcs/echo:v1
On an amd64 host, the output looks like this:
I am running on linux/amd64, building for linux/amd64
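To confirm that the pushed tag really contains both architectures, you can inspect its manifest list. The output below is trimmed and the digests are placeholders; your values will differ:

```shell
# Inspect the manifest list for the pushed multi-platform tag.
docker buildx imagetools inspect gkemhcs/echo:v1
# Name:      docker.io/gkemhcs/echo:v1
# MediaType: application/vnd.oci.image.index.v1+json
# Digest:    sha256:...
# Manifests:
#   ...
#   Platform:  linux/amd64
#   ...
#   Platform:  linux/arm64
```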
Parallel stages
The legacy builder builds stages strictly sequentially. With BuildKit, independent stages are built in parallel, which significantly reduces build times when a Dockerfile contains stages that don't depend on each other.
Here’s the Dockerfile that uses multi-stage builds:
# Stage 1: Install Terraform
FROM alpine:3.18 AS terraform-builder
# Install dependencies and Terraform
RUN apk add --no-cache curl unzip bash \
&& curl -s https://releases.hashicorp.com/terraform/1.5.0/terraform_1.5.0_linux_amd64.zip -o terraform.zip \
&& unzip terraform.zip \
&& mv terraform /usr/local/bin/terraform \
&& chmod +x /usr/local/bin/terraform
# Stage 2: Install kubectl
FROM alpine:3.18 AS kubectl-builder
# Install dependencies and kubectl
RUN apk add --no-cache curl bash \
&& curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
&& install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Stage 3: Install Docker CLI
FROM alpine:3.18 AS docker-builder
# Install Docker CLI
RUN apk add --no-cache docker-cli
# Final Stage: Combine all tools in a minimal image
FROM alpine:3.18
# Copy terraform from terraform-builder
COPY --from=terraform-builder /usr/local/bin/terraform /usr/local/bin/terraform
# Copy kubectl from kubectl-builder
COPY --from=kubectl-builder /usr/local/bin/kubectl /usr/local/bin/kubectl
# Copy docker from docker-builder
COPY --from=docker-builder /usr/bin/docker /usr/local/bin/docker
# Install additional dependencies if needed
RUN apk add --no-cache bash
# Set up a default entrypoint
ENTRYPOINT ["/bin/sh", "-c", "docker --version && kubectl version --client && terraform version"]
When you build this image, BuildKit runs all independent stages in parallel, downloading dependencies and building the individual tools concurrently. This significantly speeds up the build process because each stage (terraform-builder, kubectl-builder, and docker-builder) runs at the same time.
Build the Docker image using Buildx (with the docker-container driver, --load makes the result available to docker run on the local engine):
docker buildx build -t cli-app-version-checker --load .
Run the image to verify that it outputs the versions of kubectl, Docker, and Terraform:
docker container run -it --name cli-app cli-app-version-checker
After running the above command, you will see output like the following, printing the versions of the Docker, kubectl, and Terraform CLIs installed in the container:
Docker version 25.0.5, build d260a54c81efcc3f00fe67dee78c94b16c2f8692
Client Version: v1.31.0
Kustomize Version: v5.4.2
Terraform v1.5.0
on linux_amd64
Your version of Terraform is out of date! The latest version
is 1.9.5. You can update by downloading from https://www.terraform.io/downloads.html
Closing: Docker Buildx and BuildKit bring powerful capabilities to Docker builds, including multi-platform support and parallel stages. These tools enable faster, more flexible, and scalable builds, which are essential for modern DevOps workflows.
For more detailed information, visit Docker's official BuildKit documentation. If you found this post helpful, like👍 and share; it helps me create more content!
DevOps is all about sharing.
Written by GUDI KOTI ESWAR MANI