Step-by-Step Guide to Jenkins CI/CD for Three-Tier Apps

1) Overview
In this blog, we will set up Jenkins and build a CI/CD pipeline with it.
The pipeline will deploy a simple three-tier app (React, Node.js, MySQL).
2) Setup of Jenkins
We will set up Jenkins on a local machine (on the cloud, most steps are the same).
Jenkins is written in Java, so first we need Java.
We will do this on Arch Linux. If you are using a Debian-based, Red Hat-based, or any other distro, refer to https://www.jenkins.io/doc/book/installing/linux/.
Installing Java
sudo pacman -S jdk-openjdk
Installing Jenkins
sudo pacman -S jenkins
sudo systemctl start jenkins.service
sudo systemctl enable jenkins.service
Your Jenkins should be running at:
http://localhost:8080
If that doesn't work, try
http://localhost:8090
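If Jenkins doesn't come up on either port (the package default can differ between distros), you can check the service and the port it is actually listening on. A quick sketch, assuming systemd and the ss utility are available:
# confirm the Jenkins service is active
sudo systemctl status jenkins.service
# see which port the Jenkins Java process is listening on
sudo ss -tlnp | grep -i java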
Find initial admin password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
After entering the initial admin password, you’ll set up a username, password, and other account details.
Then, select the Suggested plugins (you can choose manually, but if you already knew what to choose, you probably wouldn’t be here — so just go with suggested for now).
3) Check if Jenkins is Running Properly
Create a new job:
Name:
demo-pipeline
Type: Pipeline
Paste this pipeline script:
pipeline {
    agent any
    stages {
        stage('demo') {
            steps {
                sh "mkdir demo-dir"
            }
        }
    }
}
After saving, click Build Now.
On the bottom left, under Build History, you will see build #1.
If it’s green, congrats — the build was successful!
Click on it and then select Console Output.
You’ll see something like:
Running on Jenkins in /var/lib/jenkins/workspace/demo-pipeline
This means everything happens inside this directory. To inspect:
sudo -i
cd /var/lib/jenkins/workspace/
You'll see a folder demo-pipeline, which is our current pipeline's workspace. Inside that is the demo-dir we created.
This means our Jenkins is working just fine.
You can now delete both this pipeline and its directory, as they were just for demo purposes.
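If you prefer to clean up the directory from the terminal, a small sketch assuming the default Jenkins home of /var/lib/jenkins:
# remove the demo pipeline's workspace directory
sudo rm -rf /var/lib/jenkins/workspace/demo-pipeline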
If a pipeline step that uses Docker fails, the reason could be that the user running the job isn't part of the docker group.
In the pipeline, add sh "whoami" to check which user the steps run as, then add that user explicitly to the docker group:
sudo usermod -aG docker <username>
newgrp docker
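Keep in mind that newgrp only affects your current shell. The Jenkins service itself typically runs as its own jenkins user (an assumption about the default package setup), so you may also need to add that user and restart the service:
# add the jenkins service user to the docker group
sudo usermod -aG docker jenkins
# restart Jenkins so the new group membership is picked up
sudo systemctl restart jenkins.service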
4) Pipeline in Question
We will make a pipeline for a three-tier Docker application. If you want to know how to write the Dockerfiles and work with Docker, refer to https://devops-digest-daily.hashnode.dev/three-tier-docker-architecture.
Everything in Jenkins is a job, so create a job (name whatever you like) of type Pipeline.
Give a description (optional).
Check the GitHub project checkbox.
This is important, not for automatic cloning or pushing, but for the GitHub SCM trigger we'll use later. It's optional for now, though, so unchecking it is fine too.
What URL should you put in the GitHub Project section?
Fork this repo: https://github.com/Keshav005Jhalani/three-tier-docker
Then clone it to your local machine.
Use your fork’s URL in the pipeline below — not the original.
Why? Because if you use my repo, you won’t be able to push changes, and the CI/CD purpose would be defeated (your local changes should trigger builds).
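For example, cloning your fork locally (replace <your-username> with your GitHub username; the placeholder is only illustrative):
# clone your fork, not the original repo
git clone https://github.com/<your-username>/three-tier-docker.git
cd three-tier-docker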
Scroll to the Pipeline section, and paste:
pipeline {
    agent any
    stages {
        stage('Clone') {
            steps {
                // add sh "whoami" here to check which user runs the job, and add that user to the docker group if it isn't already
                // use your fork's URL below, not the original repo
                git url: 'https://github.com/Keshav005Jhalani/three-tier-docker.git', branch: 'main'
            }
        }
        stage('Build & Deploy') {
            steps {
                echo 'Building'
                sh "docker compose down && docker compose up -d"
                echo "Build complete, run the app"
            }
        }
    }
}
Build the pipeline.
Open http://localhost in your browser, and boom! Your site should be live.
If the site doesn't open:
Check the Console Output logs.
Maybe Docker isn't installed or enabled, or the current user isn't in the docker group.
Maybe docker compose is not installed.
The quick checks below can help rule these out.
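A few quick checks, assuming Docker was installed from your package manager and runs under systemd (and that Jenkins runs as the jenkins user):
# is Docker installed and the daemon running?
docker --version
sudo systemctl status docker
# is the Compose plugin available?
docker compose version
# is the Jenkins user in the docker group?
groups jenkins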
Explanation of Pipeline
agent any: Use any available agent.
Jenkins works on a master-slave architecture. In our case, there are no agents, so the Jenkins server handles the job itself. In production, it's recommended to use agents.
We have two stages:
Clone: Pulls the code from your GitHub repo (use your forked repo URL).
Build & Deploy: Runs docker compose down to remove old containers and docker compose up -d to build and run new ones.
This avoids errors like "port already allocated" or "container with same name exists" during repeated runs. If containers from the previous build are still running when the next build starts, bringing them up again can fail, so we take them down first.
Note: Running the same pipeline multiple times won’t re-clone the repo. The first time clones it, subsequent runs just fetch changes. So, no “repo already exists” error.
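As for the GitHub SCM trigger mentioned in the job options: one simple way to have your pushes kick off builds automatically is SCM polling. This is only a sketch; the polling schedule and the <your-username> placeholder are assumptions, not part of the repo:
pipeline {
    agent any
    // poll the repo roughly every five minutes and build only if something changed
    triggers {
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Clone') {
            steps {
                git url: 'https://github.com/<your-username>/three-tier-docker.git', branch: 'main'
            }
        }
        stage('Build & Deploy') {
            steps {
                sh "docker compose down && docker compose up -d"
            }
        }
    }
}
A webhook-based trigger is nicer than polling, but it needs your Jenkins to be reachable from GitHub, which a local machine usually isn't.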
If you run into any errors, just let me know in the comments.