Managing CI/CD pipelines in a JavaScript monorepo with CircleCI's dynamic configuration

Wherever you stand on the monorepo vs. polyrepo debate, managing continuous integration presents its own set of challenges, especially as your application scales. Usually, when working with CircleCI in a polyrepo setup (where each project lives in a separate Git repository), you create an individual pipeline for each project using a `config.yml` file. This approach runs the same jobs or workflows every time an event triggers the pipeline, a process known as static configuration. However, this strategy quickly becomes inefficient in a monorepo containing multiple projects or services: jobs often run unnecessarily on unchanged services, leading to slower pipeline execution and wasteful builds.
With CircleCI’s dynamic configuration, you can ensure that only the jobs and workflows related to a modified service will run when you push your commits. Dynamic configuration is a mechanism that allows programmatic execution of your CI/CD pipeline based on pre-defined parameters and conditions, providing the benefit of efficiency, speed, and scalability.
In this tutorial, you will learn how to set up a CircleCI pipeline to automatically detect changes in your repo and trigger targeted workflows accordingly. You will define jobs and configure conditional workflows for a monorepo containing a React frontend and a Dockerized Node.js + MySQL backend.
Prerequisites
Before you begin, make sure you have the following:
A CircleCI account.
A Docker account.
A GitHub account.
Git installed on your computer.
An IDE or code editor such as VSCode.
Basic familiarity with CI/CD concepts and Docker.
Setting up dynamic configuration
With all of the prerequisites in place, it’s time to configure the monorepo. For this tutorial, you’ll be working with this simple JavaScript project.
Start by forking the repository on GitHub and cloning it to your local machine:
```shell
git clone <your-fork-url>

# navigate into the project directory
cd cara-store-catalog
```
From the project’s root directory, create a `.circleci` directory, along with the following files:
```shell
mkdir .circleci
cd .circleci
touch config.yml continue_config.yml
```
The `config.yml` file in the `.circleci` folder at the root of your project is where your pipeline's main configuration lives. When you connect a repo to CircleCI, it automatically detects this file and begins orchestrating your workflows. When implementing dynamic configuration, this file also handles the initial setup phase.
The `continue_config.yml` file contains the continuation configuration. It outlines the jobs to run and conditionally triggers workflows based on the pipeline parameters defined in `config.yml`.
Let us begin.
The setup phase
Open the `config.yml` file in your code editor and enter the code below:
```yaml
version: 2.1

setup: true

orbs:
  path-filtering: circleci/path-filtering@2.0.1

workflows:
  filter-path:
    jobs:
      - path-filtering/filter:
          name: detect-modified-directories
          # <directory> <pipeline parameter> <value>
          mapping: |
            server/.* run-server-jobs true
            client/.* run-client-jobs true
            .circleci/.* run-server-jobs true
            .circleci/.* run-client-jobs true
          base-revision: main
```
The code above defines the initial pipeline configuration. The `setup: true` field tells CircleCI to enable dynamic configuration features for this file. The `orbs` key imports the path-filtering orb to simplify the process of detecting modified paths and files. Behind the scenes, this orb also uses the continuation orb to trigger the next phase of the pipeline with updated parameter values.
The `filter` job, available by default in the `path-filtering` orb, maps each directory where you wish to detect changes to a pipeline parameter and sets an initial value for that parameter. What this means is, when the `filter` job detects file changes in, say, `server/`, it sets the `run-server-jobs` parameter to `true` and passes it on to the continuation config. You can map additional paths as you wish. Finally, `base-revision` indicates which branch to compare against for changes.
That covers all the logic required for the `config.yml` file.
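Under the hood, the orb's change detection boils down to diffing the pushed commit against `base-revision` and matching each changed path against the mapping regexes. The shell sketch below illustrates the idea only; it is not the orb's actual implementation, and the hard-coded file list stands in for real `git diff` output:

```shell
# Illustrative sketch of path filtering -- not the orb's real code.
# In CI, the changed paths would come from something like:
#   git diff --name-only $(git merge-base main HEAD)
changed_files="server/app.js client/src/App.jsx"

run_server_jobs=false
run_client_jobs=false
for f in $changed_files; do
  # Mirror the mapping: server/ and .circleci/ changes trigger server jobs,
  # client/ and .circleci/ changes trigger client jobs.
  case "$f" in
    server/*|.circleci/*) run_server_jobs=true ;;
  esac
  case "$f" in
    client/*|.circleci/*) run_client_jobs=true ;;
  esac
done

echo "run-server-jobs=$run_server_jobs"
echo "run-client-jobs=$run_client_jobs"
```

The orb performs this matching for you and forwards the resulting parameter values to the continuation pipeline.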
Continuation configuration
Next up is the `continue_config.yml` file. As mentioned earlier, this is where you will define the jobs and workflows you want to run.
Start by declaring the CircleCI version along with default values for the pipeline parameters from the setup phase:
```yaml
version: 2.1

parameters:
  run-server-jobs:
    type: boolean
    default: false
  run-client-jobs:
    type: boolean
    default: false
```
To avoid needless code repetition across your config file, CircleCI provides the convenience of reusable configuration. Think of this as a feature similar to “functions” from traditional programming languages that you can call at various points in your code. You can learn more about this in the reusable configuration reference.
Go ahead and define a reusable `executor` and a `command` in your `continue_config.yml`:
```yaml
executors:
  node-exec:
    docker:
      - image: cimg/node:22.17

commands:
  installdeps:
    description: "Install dependencies"
    parameters:
      directory:
        type: string
    steps:
      - checkout:
          path: ~/project
      - restore_cache:
          keys:
            - v1-<< parameters.directory >>-deps-{{ checksum "package.json" }}-{{ checksum "package-lock.json" }}
            - v1-<< parameters.directory >>-deps-{{ checksum "package.json" }}
            - v1-<< parameters.directory >>-deps-
      - run:
          name: Install dependencies
          command: npm ci
      - save_cache:
          key: v1-<< parameters.directory >>-deps-{{ checksum "package.json" }}-{{ checksum "package-lock.json" }}
          paths:
            - node_modules
```
The `node-exec` executor sets a `docker` execution environment using the CircleCI Node.js convenience image. The `installdeps` command lays out a typical process for installing Node.js dependencies in CircleCI. The command takes a parameter called `directory` of type `string`, which each job will pass at the point of invocation. Instead of hardcoding keys like `v1-server-deps` to save and restore dependencies, the caching steps build a dynamic key with `<< parameters.directory >>`, allowing multiple jobs to reuse the command.
Following this is a series of `steps`:
- `checkout` clones the repository. This step specifies a path because, as you will see later, each set of jobs that uses this step will do so from a different working directory. Doing this ensures CircleCI checks out the code to the right directory.
- `restore_cache` tells CircleCI to reuse a previously saved cache of installed dependencies, if any, to minimize redundancy. The `keys` attribute uses the `directory` parameter to read the saved cache. If it fails to find an exact match, it gracefully falls back to less specific keys.
- The `npm ci` command does a clean installation of the dependencies in your `package.json` file. It is typically used in CI environments and requires an existing `package-lock.json` file. See the npm-ci docs for more information about using this command.
- With `save_cache`, you are instructing CircleCI to store a cache of the dependencies based on the checksums of `package.json` and `package-lock.json`. You normally would not want CircleCI to reinstall the same dependencies from scratch every time you push your code when those files have not changed; that would slow down your execution while also using up precious compute resources. See the Caching dependencies page for more information about how it works.
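To make the substitution concrete: when a job invokes `installdeps` with `directory: client`, the `restore_cache` keys expand as follows, tried from most to least specific:

```yaml
# Expanded restore_cache keys for directory: client
- v1-client-deps-{{ checksum "package.json" }}-{{ checksum "package-lock.json" }}  # exact match
- v1-client-deps-{{ checksum "package.json" }}   # fallback: lockfile changed
- v1-client-deps-                                # fallback: most recent client cache
```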
Now it’s time to define the jobs your pipeline will run:
```yaml
jobs:
  build-client:
    executor: node-exec
    working_directory: ~/project/client
    steps:
      - installdeps:
          directory: client
      - run:
          name: Build client app
          command: npm run build

  test-client:
    executor: node-exec
    working_directory: ~/project/client
    steps:
      - installdeps:
          directory: client
      - run:
          name: Run client tests
          command: npm test
```
The `build-client` and `test-client` jobs share a similar structure. They:

- Apply `node-exec` as the executor environment.
- Set the working directory to the `client` folder.
- Invoke the `installdeps` command and pass the `client` value for `save_cache` and `restore_cache` to use.
- Run the corresponding commands to build and test the code.

Next, define the server jobs under the same `jobs` key. Add the `test-server` job to your continuation config:
```yaml
  test-server:
    docker:
      - image: cimg/node:22.17
      - image: cimg/mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: djs0_32
    working_directory: ~/project/server
    steps:
      - installdeps:
          directory: server
      - run:
          name: Wait for MySQL
          command: dockerize -wait tcp://localhost:3306 -timeout 1m
      - run:
          name: Install MySQL client
          command: |
            sudo apt-get update
            sudo apt-get install default-mysql-client
      - run:
          name: Set up database
          command: |
            mysql -h 127.0.0.1 -u root -pdjs0_32 -e "CREATE USER '$MYSQL_USER'@'%' IDENTIFIED WITH mysql_native_password BY '$MYSQL_PASSWORD'"
            mysql -h 127.0.0.1 -u root -pdjs0_32 < db/init.sql
      - run:
          name: Run server tests
          command: npm test
```
The `test-server` job sets up the following:

- A primary container: `cimg/node:22.17`.
- A secondary/service container: `cimg/mysql:8.0`.
- A root password for the MySQL database for initial access.
- And, as mentioned earlier, the working directory to run the steps in.

It then invokes the `installdeps` command and passes the `server` value. To avoid race conditions, the job uses `dockerize` to wait for the MySQL container to start before attempting to use it. Once ready, it installs the MySQL client, sets up the database by creating a user and running the `server/db/init.sql` script, and then runs the server test suite.
Next, add the `publish-server` job:
```yaml
  publish-server:
    docker:
      - image: cimg/base:current
    working_directory: ~/project/server
    steps:
      - checkout:
          path: ~/project
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: Set image tag
          command: |
            IMAGE_TAG=$(jq -r '.version' package.json)
            echo "export IMAGE_TAG=$IMAGE_TAG" >> $BASH_ENV
            echo "IMAGE_TAG: $IMAGE_TAG"
            source $BASH_ENV
      - run:
          name: Build production image
          command: |
            docker build -t $DOCKERHUB_USERNAME/dynamic-config:$IMAGE_TAG --target prod .
      - run:
          name: Authenticate, tag, and push image to Docker Hub
          command: |
            echo "$DOCKERHUB_PASSWORD" | docker login -u $DOCKERHUB_USERNAME --password-stdin
            docker tag $DOCKERHUB_USERNAME/dynamic-config:$IMAGE_TAG $DOCKERHUB_USERNAME/dynamic-config:latest
            docker push $DOCKERHUB_USERNAME/dynamic-config:$IMAGE_TAG
            docker push $DOCKERHUB_USERNAME/dynamic-config:latest
```
Here is a rundown of what this block does:

- Enables `setup_remote_docker` with `docker_layer_caching: true` so Docker layers can be reused in future builds.
- Extracts the version number from the server's `package.json` file to tag the image.
- Builds a production version of the image. The `FROM base AS prod` line in the `server/Dockerfile` outlines the build instructions.
- Authenticates with Docker Hub, sets the `latest` tag on the built image, and pushes both tags to Docker Hub.
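For orientation, a multi-stage Dockerfile with a `prod` target generally follows the shape below. This is an illustrative sketch only; the stage contents are assumptions, not the repo's actual `server/Dockerfile`:

```dockerfile
# Illustrative multi-stage sketch -- the repo's server/Dockerfile may differ.
FROM node:22-alpine AS base
WORKDIR /app
COPY package*.json ./

# The --target prod flag in the build step selects this stage.
FROM base AS prod
RUN npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]
```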
And for the final piece of the continuation config:
```yaml
workflows:
  test-and-publish-server:
    when: << pipeline.parameters.run-server-jobs >>
    jobs:
      - test-server
      - publish-server:
          requires:
            - test-server

  build-and-test-client:
    when: << pipeline.parameters.run-client-jobs >>
    jobs:
      - build-client
      - test-client:
          requires:
            - build-client
```
Workflows orchestrate the jobs in your config. Here, the `when` clause directs CircleCI to run the specified jobs only when the corresponding pipeline parameter is true. For example, a push that touches only `server/` sets `run-server-jobs` to `true`, so only the `test-and-publish-server` workflow runs. And with `requires`, you are pausing the execution of a job until the previous one succeeds.
That concludes the configuration setup.
Setting the environment variables
If you take a look at some of the project files, like `index.js` and `compose.yaml`, you will notice the use of a number of environment variables. These are necessary to avoid exposing sensitive credentials on GitHub.
To set your project variables, start by pushing the changes you have made so CircleCI can detect your `.circleci/config.yml` file. From your project's root directory, enter the following commands:
```shell
git add .
git commit -m '<add-your-commit-message>'
git push -u origin main
```
Next, follow the instructions on the Set up a project page to connect your repo. After linking your project, CircleCI will kick off a pipeline and begin running all the workflows. The initial run will likely fail, but you can fix that right away.
Navigate to the Set an environment variable page and follow the instructions to add the following to your project:
```
MYSQL_USER=carastore_admin
MYSQL_PASSWORD=<assign-any-value>
MYSQL_DATABASE=carastore_catalog
DOCKERHUB_USERNAME=<your-dockerhub-username>
DOCKERHUB_PASSWORD=<your-dockerhub-password>
```
Note that you may assign any values you want to the MySQL user and database; however, you would then have to update them accordingly in the `server/db/init.sql` file.
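The repository ships its own `server/db/init.sql`; a script of this kind typically looks something like the following sketch. The statements here are illustrative assumptions, not the repo's actual file; the key point is that the database and user names must match the environment variables above:

```sql
-- Hypothetical init script sketch; the actual server/db/init.sql may differ.
CREATE DATABASE IF NOT EXISTS carastore_catalog;
GRANT ALL PRIVILEGES ON carastore_catalog.* TO 'carastore_admin'@'%';
FLUSH PRIVILEGES;
```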
Testing your pipeline
With your configuration and environment variables in place, CircleCI can now automatically detect which folders you have modified and then run your workflows based on those modifications. To see this in full action, edit a file inside the `client/` or `server/` folder.
Commit and push your changes:
```shell
git add .
git commit -m '<add-your-commit-message>'
git push -u origin main
```
Open the project on your dashboard and watch as CircleCI triggers a new pipeline that runs only the corresponding workflow, demonstrating the efficiency of dynamic configuration.
Conclusion
Running jobs for unchanged services in a monorepo can result in wasted resources, slower pipelines, and increased costs in the long run. By implementing dynamic configuration, you can mitigate these downsides and optimize your pipelines for speed and efficiency.
I hope this tutorial has helped you understand how to improve your CI/CD workflows. If you wish to see the complete setup of the config files, check out the `dynamic_config` branch of the project repository. For more examples of how to further customise your workflows, see the CircleCI guide on Using dynamic configuration.
Written by Danny Santino