How to Optimize a Dockerfile in a Turborepo


If you've ever worked on a Yarn monorepo project with interdependent workspaces, you already know how tricky it can be to build a Dockerfile that keeps your image lightweight and your builds fast. In this article, we're going to talk about how to build an optimized Dockerfile for a Turborepo project. I'm writing this for my own future reference, but I'd love it if it helps someone else too. And if it does, don't forget to hit that like 👍 button!
What is Turborepo and how does it provide value?
Turborepo is a high-performance build system for JavaScript and TypeScript monorepos. It's designed to scale monorepos, and it also speeds up workflows even in single-package workspaces (that's how they like to put it). Personally, two of its features stand out to me: the caching system and shared packages. These help avoid repeating work and save a lot of time.
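To give a feel for how the caching system is wired up: Turborepo decides what it can cache based on the task definitions in `turbo.json`. The snippet below is a minimal sketch for a typical Next.js setup, not taken from this project — the task names and output globs are assumptions:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**", "dist/**"]
    },
    "dev": {
      "cache": false
    }
  }
}
```

The `outputs` globs tell Turborepo which build artifacts to store in its cache, so an unchanged package's build can be replayed instead of re-run.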
Introduction
So, I was building a random project just for learning purposes and wanted to deploy it using Docker. That meant I needed to create a Dockerfile. I had a high-level understanding of Docker, but I had never actually used it before. So, I started learning how Docker works and how to build a Dockerfile for a Turborepo setup.
At first, I just created a basic Dockerfile and deployed it on a server. But I quickly realized that the image size was way too big (4.49 GB). That's when I started diving deeper into Docker optimization. I googled tons of questions and also took help from AI tools to understand the technical stuff behind Docker images.
So yeah, let’s get into it and see what I learned!
Some Key Docker Concepts You Should Know Before Reading Further
- Layering in Docker
- Docker's cache system
- Multistage builds
Layering in Docker:
```dockerfile
FROM node:24.0.1-alpine3.20
WORKDIR /usr/src/app
COPY . .

# Install dependencies
RUN yarn install
RUN yarn db:generate
RUN yarn build

CMD ["yarn", "run", "start-user-app"]
```
When you create a Dockerfile like the one above, each instruction adds a new layer to your Docker image. Think of these layers as steps in a recipe: the base image, the working directory, the copied files, each `RUN` command, and the final `CMD`.
Each of these layers is cached by Docker. So, if nothing changes in a layer, Docker reuses the cached version, which speeds up rebuilds.
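You can see this in action from the command line. As a quick sketch (assuming an image tagged `user-app`), rebuilding without any file changes reports the unchanged steps as cached, and `docker history` lists the individual layers and their sizes:

```
# Build once, then rebuild — unchanged steps show up as CACHED
docker build -t user-app .
docker build -t user-app .

# Inspect the layers that make up the image and their sizes
docker history user-app
```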
Docker Caching:
As I mentioned above, Docker caches all the layers during the build process. So, if we tweak our Dockerfile a little, we can take advantage of this caching to make our builds faster.
```dockerfile
FROM node:24.0.1-alpine3.20
WORKDIR /usr/src/app

COPY package.json yarn.lock turbo.json ./
COPY packages ./packages
COPY apps ./apps

# Install dependencies
RUN yarn install
RUN yarn db:generate
RUN yarn build

CMD ["yarn", "run", "start-user-app"]
```
In this Dockerfile, we're splitting the `COPY` command into multiple steps. The goal is to structure our Dockerfile in a way that maximizes the use of cached layers.

- First, we copy the files that are least likely to change: `package.json`, `yarn.lock`, and `turbo.json`. If these files don't change, Docker will cache this layer and skip re-installing dependencies.
- Then we copy the `packages/` directory, which might change sometimes but not as frequently as the full `apps/` directory.
- Finally, we copy the `apps/` directory, which tends to change the most during development.

By organizing the `COPY` commands like this, Docker can cache the earlier steps even if we make changes in the app code.
🏗️ Docker Multistage Builds:
Docker multistage builds help us create lightweight images by including only the necessary artifacts. Each `FROM` statement starts a new build stage, and you can use a different base image for each. What's powerful here is that you can copy only the required artifacts from one stage to another, leaving behind all the unnecessary dependencies, build tools, and files.
Here’s an example:
```dockerfile
# -----------------------
# STAGE 1: Base
# -----------------------
FROM node:20-alpine AS base

# Install turbo globally
RUN yarn global add turbo

# -----------------------
# STAGE 2: Builder
# -----------------------
FROM base AS builder
WORKDIR /usr/src/app
COPY . .

RUN yarn install --frozen-lockfile
RUN yarn build

CMD ["yarn", "run", "start-user-app"]
```
In the above Dockerfile:
- Stage 1 (`base`): We use the `node:20-alpine` image and install Turbo globally. This is our foundation layer that sets up the basic environment.
- Stage 2 (`builder`): This is where we do the actual work: copying the code, installing dependencies, and building the final app. This stage uses the previous one (`base`) as its base.
How I Built My Optimized Image
So first, I noticed I was missing the most important part: I hadn't created a `.dockerignore` file, so all my `node_modules` and Next.js cache files were becoming part of my image. I created a `.dockerignore` file listing all the extra stuff I don't want in my image.
```
# Dependencies
node_modules
.pnp
.pnp.js

# Next.js
.next
out

# Production
build
dist

# Debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Local env files
.env
.env.local
.env.development.local

# Turbo
.turbo

# IDE
.idea
.vscode

# Git
.git
.gitignore

# Docker
Dockerfile*
docker-compose*
.dockerignore
```
Then I started optimizing my Dockerfile (`Dockerfile.userapp`) by following these steps:
1. `deps` stage: Here, I copy all the required dependency files and use `turbo prune` to generate a partial monorepo for the target package. This keeps only what's really needed to build that package.
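For reference, `turbo prune --docker` writes its output into an `out/` directory split into two parts, which is what makes the caching trick in the next stage possible. The tree below is a simplified sketch — the exact layout can vary slightly between Turborepo versions:

```
out/
├── json/       # only the package.json files (enough to install dependencies)
├── full/       # the pruned source code for the target package and its deps
└── yarn.lock   # a pruned lockfile
```

Because `out/json/` contains only manifests, the dependency-install layer stays cached as long as no package.json or lockfile changes, even when source code does.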
2. `installer` stage: This is where we copy the pruned files from the `deps` stage and install only the packages required for production using the `--production` flag. This avoids pulling in dev dependencies and helps reduce image size. At the end, I also clear all the cache files to keep things clean.
3. `runner` stage: Here, I create a system group and a non-root user (`appuser`) for security purposes. By default, Docker containers run as `root`, which is risky; running the app as a non-root user limits what an attacker could do even if they got access. After that, using the multistage build, I copy only the necessary production files from the `installer` stage into the `runner` stage, add some environment metadata, and finally set the `CMD`.
And boom — it’s done! ☺️
Here’s the full optimized Dockerfile I ended up with:
```dockerfile
FROM node:22-alpine AS base

# Install turbo globally
RUN yarn global add turbo@2.5.3

# -----------------------
# STAGE 1: Dependencies
# -----------------------
FROM base AS deps
WORKDIR /app

# Copy root package files
COPY package.json yarn.lock turbo.json ./
COPY packages ./packages
COPY apps/web ./apps/web

# Use turbo prune to get only the dependencies we need
# This creates a minimal dependency tree
RUN turbo prune --scope=@repo/web --docker

# -----------------------
# STAGE 2: Installer
# -----------------------
FROM base AS installer
WORKDIR /app

ARG DATABASE_URL
ENV DATABASE_URL=$DATABASE_URL

# Copy the pruned dependency files first (better caching)
COPY --from=deps /app/out/json/ ./

# Install dependencies with optimizations
RUN yarn install --frozen-lockfile --production

# Copy source code after dependencies (better layer caching)
COPY --from=deps /app/out/full/ ./

# Generate database client
RUN yarn db:generate

# Build the application
RUN yarn build

# Clean up build artifacts and dev dependencies
RUN yarn install --production --ignore-scripts && \
    yarn cache clean && \
    rm -rf \
        node_modules/.cache \
        .yarn/cache \
        apps/web/.next/cache \
        /tmp/.yarn-cache \
        **/.turbo \
        **/tsconfig.tsbuildinfo

# -----------------------
# STAGE 3: Runner
# -----------------------
FROM node:22-alpine AS runner
WORKDIR /app

ARG DATABASE_URL
ENV DATABASE_URL=$DATABASE_URL

# Create non-root user for security
RUN addgroup --system --gid 1001 appgroup && \
    adduser --system --uid 1001 appuser --ingroup appgroup

# Copy only production files
COPY --from=installer --chown=appuser:appgroup /app/package.json ./
COPY --from=installer --chown=appuser:appgroup /app/yarn.lock ./
COPY --from=installer --chown=appuser:appgroup /app/turbo.json ./
COPY --from=installer --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=installer --chown=appuser:appgroup /app/packages ./packages
COPY --from=installer --chown=appuser:appgroup /app/apps/web/next.config.js ./apps/web/
COPY --from=installer --chown=appuser:appgroup /app/apps/web/.next ./apps/web/.next
COPY --from=installer --chown=appuser:appgroup /app/apps/web/public ./apps/web/public
COPY --from=installer --chown=appuser:appgroup /app/apps/web/package.json ./apps/web/package.json

USER appuser

ENV NODE_ENV=production
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["yarn", "run", "start-user-app"]
```
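To put it all together, building and running the image looks something like the commands below. The file name `Dockerfile.userapp`, the image tag, and the connection string are placeholders — adjust them for your setup:

```
# Build, passing the database URL as a build argument
docker build -f Dockerfile.userapp \
  --build-arg DATABASE_URL="postgres://user:pass@host:5432/mydb" \
  -t user-app .

# Run as the non-root user defined in the runner stage
docker run -p 3000:3000 user-app

# Check the final image size
docker images user-app
```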
And that’s it!
That’s how I built and optimized a Docker image for a Turborepo project. It took a bit of trial and error, lots of Googling, and a few AI chats (👀), but I learned a lot along the way.
If you're working with monorepos and struggling with Docker, I hope this helps save you some time and headaches. If you found it useful, don’t forget to hit that like button or drop a comment — would love to hear what you're building!
Written by Harbinder Singh, learning and exploring new tech.