Jenkins End-to-End CI/CD Pipeline Project

Table of contents
- Introduction
- Pre-Requisites
- Running the Application Locally
- Launch and Connect to an EC2 Instance
- Logging In and Setting Up the Jenkins Pipeline
- Writing the Jenkinsfile for CI/CD Pipeline
- Generating a SonarQube Token and Configuring It in Jenkins
- Setting Up Minikube and ArgoCD-Operator
- Running the Pipeline and Setting Up ArgoCD
- Creating an Application in ArgoCD
- Conclusion

Introduction
Welcome to the world of DevOps! In this project, we will be building a complete end-to-end Jenkins Pipeline for a Java-based application managed with Maven. The source code will be hosted on a GitHub repository, serving as the foundation of our application. Our Jenkins pipeline will automatically trigger using a configured webhook, checkout the code from GitHub, build the application using Maven, and perform a static code analysis with SonarQube. If both stages succeed, the pipeline will then build a Docker image and push it to Docker Hub. In the event of a failure during build or scan, detailed logs will be sent as a Slack notification for prompt awareness. Instead of using ArgoCD Image Updater, we will rely on shell scripts to update the Docker image and maintain a separate GitHub repository as the manifest repository, following the GitOps approach. Finally, the application will be deployed on a Kubernetes cluster using ArgoCD, completing our DevOps lifecycle.
Pre-Requisites
Before starting the project, ensure the following components are installed and properly configured on your system:
Java (JDK 8 or later): For building and running the Java application.
Maven: To compile and manage dependencies for the Java project.
Git: To clone and manage the source code from GitHub.
GitHub Account & Repository: To host your Java project and GitOps manifests.
Jenkins: To automate the CI/CD pipeline.
SonarQube: For performing static code analysis.
Docker: To build and push Docker images.
Docker Hub Account: To store and manage Docker images.
Slack Account & Webhook URL: For sending notifications on build or scan failures.
Shell Scripting Environment: To update Docker image tags and handle GitOps-related tasks.
Minikube: A local Kubernetes cluster for testing deployments and integrating with ArgoCD.
ArgoCD: To deploy and manage applications on Kubernetes following the GitOps approach.
Kubectl: To interact with the Minikube Kubernetes cluster.
Running the Application Locally
After completing the prerequisites, the first step is to run the application locally. Verifying that the project builds and serves traffic on your own machine gives you a known-good baseline before automating everything with Jenkins.
To run the project locally, follow these steps:
Clone the repository:
git clone https://github.com/himanthakula/Jenkins-Zero-To-Hero
Navigate to the project directory:
cd Jenkins-Zero-To-Hero/java-maven-sonar-argocd-helm-k8s/spring-boot-app
Build the Maven project by executing the following command to generate the necessary artifacts:
mvn clean package
To create and run a Docker container for the application, use the following commands:
docker build -t ultimate-cicd-pipeline:v1 .
docker run -d -p 8010:8080 -t ultimate-cicd-pipeline:v1
Once these commands are executed, the application will be available locally at http://localhost:8010. Open this URL in your browser to confirm that everything is functioning properly before moving on to the Kubernetes deployment. You should see the following output:
Now that the application is running successfully in our local browser, it's time to move ahead and deploy it to Kubernetes using a CI/CD pipeline.
Launch and Connect to an EC2 Instance
Log in to AWS Console:
→ Go to https://console.aws.amazon.com
→ Sign in with your AWS account credentials.
Launch a New Instance:
→ On the EC2 Dashboard, click “Launch instance”.
Configure Instance Details:
→ Name: Give your instance a name, e.g., ultimate-cicd.
→ Application and OS Images (Amazon Machine Image - AMI): Ubuntu
→ Instance Type: t2.large
→ Key Pair (Login): Select an existing key pair or create a new one.
Launch the Instance:
→ Click “Launch Instance”.
→ Wait a few seconds for the instance to be initialized.
Verify the Instance:
→ Go to the Instances section on the left sidebar.
→ You should see your instance named ultimate-cicd with status Running.
Connect via SSH:
ssh -i "your-key.pem" ubuntu@<public-ip-address>
Updates and required installations:
→ sudo apt update refreshes the system's package list so we get the latest available versions. It's a crucial step before installing or upgrading software on the EC2 instance.
→ sudo apt install openjdk-17-jre installs the Java Runtime Environment (JRE) version 17. Java is needed to run Jenkins, which plays a crucial role in this project, as well as the Spring Boot application itself.
→ Next, add the Jenkins repository key and package source:
curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
These commands prepare the system for Jenkins. Now install and start Jenkins:
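A minimal sketch of the usual install commands for the Jenkins Debian repository added above (standard steps from the Jenkins documentation; also open port 8080 in the instance's security group so the Jenkins UI is reachable):
sudo apt-get update
sudo apt-get install jenkins
sudo systemctl enable jenkins
sudo systemctl start jenkins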
Logging In and Setting Up the Jenkins Pipeline
Log in to Jenkins: Begin by opening http://<public-ip-address>:8080 in your browser and logging into your Jenkins server. On first launch, Jenkins asks for the initial admin password, which is stored at /var/lib/jenkins/secrets/initialAdminPassword on the instance.
Create a New Pipeline Project:
→ Go to New Item on your Jenkins dashboard.
→ Enter the project name as "Ultimate-Demo".
→ Choose Pipeline as the project type and click OK.
Configure Pipeline from SCM:
→ In the pipeline configuration, select Pipeline script from SCM.
→ For SCM, choose Git.
→ In the Repository URL field, enter the GitHub repository URL of the project: https://github.com/himanthakula/Jenkins-Zero-To-Hero
→ Set the Branch to "main".
Define the Script Path:
→ In the Script Path field, specify the path to the Jenkins pipeline file: java-maven-sonar-argocd-helm-k8s/spring-boot-app/JenkinsFile
This pipeline setup connects your Jenkins server to the source code in your GitHub repository, with the pipeline script (JenkinsFile) outlining the steps Jenkins follows to build, test, and deploy the application.
Writing the Jenkinsfile for CI/CD Pipeline
Now that our Jenkins pipeline is in place, let's take a closer look at the Jenkinsfile, which outlines the steps for building, testing, and deploying our application. You can find this file in the GitHub repository at java-maven-sonar-argocd-helm-k8s/spring-boot-app/JenkinsFile.
Let's go through each stage of the pipeline:
Agent Configuration:
→ We're utilizing a Docker agent based on the image 'abhishekf5/maven-abhishek-docker-agent:v1'.
→ This agent mounts the host's Docker socket, enabling the pipeline to execute Docker commands during the build process.
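For orientation, a declarative Jenkinsfile with such an agent typically looks something like the sketch below; the args value is an assumption based on the description above, so check the JenkinsFile in the repository for the exact flags:
pipeline {
  agent {
    docker {
      image 'abhishekf5/maven-abhishek-docker-agent:v1'
      // Run as root and mount the host's Docker socket so the pipeline can execute docker commands
      args '--user root -v /var/run/docker.sock:/var/run/docker.sock'
    }
  }
  stages {
    stage('Checkout') {
      steps {
        sh 'echo passed'   // placeholder; the real stages are described below
      }
    }
  }
}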
Stage 1: Checkout
→ In this stage, Jenkins is supposed to pull the project's source code from GitHub. Right now, that part is commented out and replaced with a simple sh 'echo passed' command. To actually check out the code, the real step would be:
git branch: 'main', url: 'https://github.com/himanthakula/Jenkins-Zero-To-Hero.git'
Stage 2: Build and Test
→ This stage executes the Maven build for the Java project and creates a JAR file.
→ It moves into the Spring Boot app directory (java-maven-sonar-argocd-helm-k8s/spring-boot-app) and runs the build command:
mvn clean package
Stage 3: Static Code Analysis (SonarQube)
→ In this stage, SonarQube is used to perform static code analysis and evaluate code quality.
→ The SonarQube server is accessible at http://<YOUR-IP-ADDRESS>:9000.
→ Jenkins fetches the SonarQube authentication token from stored credentials and uses Maven to run the SonarQube scan on the project:
mvn sonar:sonar -Dsonar.login=$SONAR_AUTH_TOKEN -Dsonar.host.url=${SONAR_URL}
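As a rough sketch (not necessarily identical to the repository's JenkinsFile), this stage usually binds the token stored under the Jenkins credential ID 'Sonarqube' (created later in this guide) and passes it to Maven:
stage('Static Code Analysis') {
  environment {
    SONAR_URL = "http://<YOUR-IP-ADDRESS>:9000"
  }
  steps {
    // Bind the SonarQube token from Jenkins credentials to SONAR_AUTH_TOKEN for this step
    withCredentials([string(credentialsId: 'Sonarqube', variable: 'SONAR_AUTH_TOKEN')]) {
      sh 'cd java-maven-sonar-argocd-helm-k8s/spring-boot-app && mvn sonar:sonar -Dsonar.login=$SONAR_AUTH_TOKEN -Dsonar.host.url=${SONAR_URL}'
    }
  }
}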
Stage 4: Build and Push Docker Image
→ In this stage, a Docker image is built from the Spring Boot application and then pushed to Docker Hub.
→ The image is tagged dynamically using the Jenkins build number (${BUILD_NUMBER}).
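A rough sketch of this stage using the Docker Pipeline plugin and the docker-cred credential configured later in this guide (the exact stage in the repository's JenkinsFile may differ slightly):
stage('Build and Push Docker Image') {
  environment {
    DOCKER_IMAGE = "himanthakula/ultimate-cicd:${BUILD_NUMBER}"
  }
  steps {
    script {
      // Build the image from the Spring Boot app directory
      sh 'cd java-maven-sonar-argocd-helm-k8s/spring-boot-app && docker build -t ${DOCKER_IMAGE} .'
      def dockerImage = docker.image("${env.DOCKER_IMAGE}")
      // Log in to Docker Hub with the 'docker-cred' credentials and push the tagged image
      docker.withRegistry('https://index.docker.io/v1/', 'docker-cred') {
        dockerImage.push()
      }
    }
  }
}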
Stage 5: Update Deployment File
→ This stage focuses on updating the Kubernetes deployment manifest with the latest Docker image tag.
→ It modifies the deployment.yml file to reflect the new image tag, then commits and pushes the changes to the GitHub repository using a GitHub token for authentication.
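A minimal shell sketch of what this stage typically runs, assuming the manifest contains the placeholder tag replaceImageTag (seen later in the pipeline output) and that GITHUB_TOKEN is bound from the Jenkins credential created later; the git user details and target repository are illustrative:
git config user.email "jenkins@example.com"
git config user.name "jenkins"
# Replace the placeholder image tag with the current Jenkins build number
sed -i "s/replaceImageTag/${BUILD_NUMBER}/g" java-maven-sonar-argocd-helm-k8s/spring-boot-app-manifests/deployment.yml
git add java-maven-sonar-argocd-helm-k8s/spring-boot-app-manifests/deployment.yml
git commit -m "Update deployment image to version ${BUILD_NUMBER}"
# Push to the manifests repository, authenticating with the GitHub token (point this at your own manifests repo)
git push https://${GITHUB_TOKEN}@github.com/himanthakula/Jenkins-Zero-To-Hero HEAD:main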
Before executing the pipeline, it's important to install the required plugins and configure SonarQube appropriately.
Install the Following Jenkins Plugins:
→ Docker Pipeline Plugin: To allow Jenkins to build and run Docker images.
→ SonarQube Scanner Plugin: To enable SonarQube integration for code analysis.
Install SonarQube on EC2 instance:
→ SonarQube needs to be installed for our static code analysis.
→ Install the unzip utility.
apt install unzip
→ Then create a dedicated sonarqube user, since SonarQube should not be run as the root user.
adduser sonarqube
After adding the sonarqube user, switch to it (su - sonarqube) and download the SonarQube distribution into its home directory:
wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.4.0.54424.zip
Unzip the downloaded archive:
unzip *
Set the correct permissions and ownership for the SonarQube directory (run these as root):
chmod -R 755 /home/sonarqube/sonarqube-9.4.0.54424
chown -R sonarqube:sonarqube /home/sonarqube/sonarqube-9.4.0.54424
Navigate to the SonarQube executable directory.
cd sonarqube-9.4.0.54424/bin/linux-x86-64/
Now start the SonarQube server.
./sonar.sh start
Now we can access the SonarQube server at http://<ip-address>:9000 (make sure port 9000 is open in the EC2 security group). The dashboard looks like this:
The default login and password for the SonarQube server are both admin.
Generating a SonarQube Token and Configuring It in Jenkins
With Docker and SonarQube now configured on your system, the next step is to generate an authentication token in SonarQube and connect it to Jenkins. This token enables Jenkins to interact with SonarQube for performing code analysis.
Here’s how you can create the token and add it to Jenkins:
Create a Token in SonarQube:
Open your SonarQube dashboard and go to your Account Settings.
Navigate to the Security Tab.
Under Generate Token, enter "Jenkinsfile" as the token name and click Create Token.
Copy the token that gets generated and save it somewhere safe, as it won't be shown again.
Add the Token to Jenkins:
Head to your Jenkins dashboard, then go to Manage Jenkins.
Select Manage Credentials.
Under Global Credentials, click Add Credentials.
In the Kind dropdown, select Secret Text.
Paste the SonarQube token you just copied into the Secret field.
In the ID field, enter "Sonarqube".
Click Create.
By adding this token, Jenkins will be able to authenticate with SonarQube and perform static code analysis in the pipeline.
Setting Up Minikube and ArgoCD-Operator
With SonarQube and Jenkins now configured, the next step is to set up a Kubernetes cluster using Minikube and install ArgoCD to manage our deployment pipeline.
Create a Minikube Cluster: Begin by provisioning a Minikube cluster. Use the command below to start the cluster with 4 GB of memory using the hyperkit driver (hyperkit works only on macOS; an alternative for other platforms is shown after the command):
minikube start --memory=4096 --driver=hyperkit
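Hyperkit is macOS-only; on Linux or Windows the docker driver is a common alternative (Docker is already installed as part of the prerequisites):
minikube start --memory=4096 --driver=docker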
Once the cluster is up and running, we can proceed to install ArgoCD.
Install the ArgoCD Operator: To deploy ArgoCD within the Minikube cluster, execute the following commands:
→ First, install the Operator Lifecycle Manager (OLM), which will facilitate the management of the ArgoCD Operator.
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.28.0/install.sh | bash -s v0.28.0
→ Now, install the ArgoCD operator:
kubectl create -f https://operatorhub.io/install/argocd-operator.yaml
→ Finally, verify the installation by checking the ClusterServiceVersion (CSV):
kubectl get csv -n operators
Configure Jenkins Credentials: After installing ArgoCD, we need to set up some credentials in Jenkins for seamless integration:
→ Docker Credentials:
Add your Docker Hub username and password to Jenkins.
Navigate to Manage Jenkins > Manage Credentials.
Create a new credential of type "Username with password".
Enter your Docker Hub username and password.
Save the credential with the ID docker-cred.
→ GitHub Token:
Generate a Personal Access Token (PAT) from your GitHub account with appropriate repository access.
In Jenkins, add a new credential of type "Secret text".
Paste the GitHub PAT and save it with the ID GITHUB_TOKEN.
Restart Jenkins.
With the Minikube cluster and ArgoCD set up and the necessary Jenkins credentials configured, one final change is needed before running the pipeline.
Update SonarQube URL in Jenkinsfile on GitHub
→ Commit the SonarQube server URL to the Jenkinsfile in our GitHub repository.
→ This ensures Jenkins can connect to SonarQube during pipeline execution for code analysis.
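Inside the JenkinsFile, the server address typically lives in an environment block of the Static Code Analysis stage; the line to edit looks roughly like this (the variable name matches the scan command shown earlier, but verify it against the actual file):
environment {
  SONAR_URL = "http://<YOUR-IP-ADDRESS>:9000"   // replace with your EC2 instance's public IP
}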
With the Minikube cluster and ArgoCD in place, and the required credentials set up in Jenkins, we’re ready to move forward with deploying our application to Kubernetes!
Running the Pipeline and Setting Up ArgoCD
With all configurations in place, we can now trigger the Jenkins pipeline by clicking Build Now. Upon execution, the pipeline will run through all the stages defined in the Jenkinsfile.
Pipeline Stages Overview
→ Checkout: This stage pulls the latest code from the configured GitHub repository, ensuring the pipeline always runs against the most recent changes.
→ Build and Test: Maven compiles the application and packages it into a JAR file by running the mvn clean package command.
→ Static Code Analysis: The source code is scanned using SonarQube. Upon completion, the project and its analysis results will be visible under the Projects tab in the SonarQube dashboard.
→ Build and Push Docker Image: A Docker image of the application is built and tagged as himanthakula/ultimate-cicd. The image is then pushed to the configured Docker Hub account.
→ Update Deployment File: In this final stage, the Kubernetes deployment manifest is updated to reference the newly built Docker image, ensuring the latest version is deployed.
Once all pipeline stages complete successfully, the Docker image will be available on Docker Hub under himanthakula/ultimate-cicd, and the Maven build artifacts (JAR file) will be stored locally.
[Please note that Jenkins pipelines can sometimes be sensitive to configuration and environment-specific issues. Don’t be discouraged if it takes multiple runs to get everything working smoothly—it took me nearly 7 attempts to get a successful run!]
[If you encounter any issues or errors during the process, feel free to leave a comment or reach out for support.]
The commit made by this stage looks something like this: it replaces image: himanthakula/ultimate-cicd:replaceImageTag with image: himanthakula/ultimate-cicd:1 (the tag being the build number) in the Kubernetes deployment manifest, which can be verified in the deployment.yml file within the Git repository.
And the SonarQube Projects dashboard looks like this:
With the CI pipeline successfully configured, we can now proceed to implement the Continuous Deployment (CD) workflow using ArgoCD.
Setting Up ArgoCD: With the Docker image built and available, the next step is to configure ArgoCD to manage our application deployments to the Kubernetes cluster.
→ Create the ArgoCD Custom Resource: Begin by defining a basic ArgoCD instance. Create a file named argocd-basic.yaml with the following content:
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-argocd
  labels:
    example: basic
spec: {}
→ Apply the ArgoCD Resource to the Cluster : Deploy the ArgoCD instance by applying the YAML file you just created.
kubectl apply -f argocd-basic.yaml
→ Verify ArgoCD Deployment Status : Wait for all ArgoCD pods to enter the Running state. You can monitor the pod status using the following command:
kubectl get pods -n argocd
Ensure that all pods show a STATUS of Running before proceeding.
Accessing the ArgoCD Dashboard: To access the ArgoCD web interface, begin by retrieving the service details.
kubectl get svc example-argocd-server
→ Expose the ArgoCD Server via NodePort: By default, the ArgoCD server is exposed as a ClusterIP service, which isn't accessible outside the cluster. To make it accessible from your local browser, update the service type to NodePort.
kubectl edit svc example-argocd-server -n argocd
In the opened YAML, locate the spec.type field and change it from:
type: ClusterIP
to:
type: NodePort
Save and close the editor. Kubernetes will update the service, and you’ll be able to access the ArgoCD dashboard using the Minikube IP and assigned NodePort.
→ Access the ArgoCD Dashboard via Minikube: Once the service type has been changed to NodePort, you can use Minikube to open a direct link to the ArgoCD dashboard in your default web browser.
minikube service example-argocd-server
→ List All Minikube Services : You can also view all exposed services in your Minikube cluster to find the access URL for the ArgoCD server.
minikube service list
In the output above, we can see the URL to access the ArgoCD dashboard.
Open this URL in your browser. You should be greeted with the ArgoCD login page.
→ Logging into the ArgoCD Server: To access the ArgoCD dashboard, use the default login credentials:
Username: admin
Password: stored in a Kubernetes secret
To retrieve the admin password, run the following command to view the secret:
kubectl edit secret example-argocd-cluster
In the secret YAML, locate the field:
admin.password: <base64-encoded-password>
Copy the encoded value and decode it using the following command:
echo <base64-encoded-password> | base64 --decode
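Alternatively, a one-liner that fetches and decodes the password in a single step (assuming the secret name shown above):
kubectl get secret example-argocd-cluster -o jsonpath='{.data.admin\.password}' | base64 --decode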
Copy the decoded password (make sure not to include any trailing % character that your shell may print), and use it along with the username admin to log in to the ArgoCD web interface.
After logging in, you will be greeted with the ArgoCD home screen, as shown below:
Creating an Application in ArgoCD
Now that ArgoCD is up and running, let’s create a new application to deploy your project onto the Kubernetes cluster. Follow the steps below:
Create a New Application:
In the ArgoCD dashboard, click on the New App button.
For the Application Name, enter 'test'.
In the Project Name, select default.
Set the Sync Policy to Automatic.
Under Repository URL, enter the URL of the Git repository that holds the Kubernetes manifests.
In the Path field, enter: java-maven-sonar-argocd-helm-k8s/spring-boot-app-manifests
Set Deployment Details:
Navigate to the Destination tab.
In Cluster URL, use the default cluster.
Set the Namespace to default.
Create the App:
Click Create App to initiate the deployment of your application onto the Kubernetes cluster. Once created, ArgoCD will begin monitoring and managing the application automatically.
With Automatic Sync enabled, any updates to the container image version or changes made to the manifests in the linked Git repository will be automatically detected and deployed. This ensures your application remains continuously in sync with the latest configuration defined in version control.
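For reference, the same application can be defined declaratively as an ArgoCD Application manifest instead of clicking through the UI. This is a sketch under a few assumptions: the repoURL is a placeholder for your manifests repository, the targetRevision is assumed to be main, and metadata.namespace must be the namespace where the ArgoCD instance itself is running:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test
  namespace: default        # assumption: the namespace where the ArgoCD instance runs
spec:
  project: default
  source:
    repoURL: <your-manifests-repo-url>   # placeholder for the repository entered in the UI
    targetRevision: main
    path: java-maven-sonar-argocd-helm-k8s/spring-boot-app-manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}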
Conclusion
We’ve successfully implemented a complete end-to-end CI/CD pipeline using Jenkins, SonarQube, Docker, and ArgoCD. This pipeline automates the entire lifecycle—from building and testing our code, performing static analysis, and pushing a Docker image to Docker Hub, to deploying the application on a Kubernetes cluster via ArgoCD.
This project demonstrates the power of automation and modern DevOps tooling. With this setup, our application delivery process becomes reliable, repeatable, and hands-free—ensuring faster releases and greater confidence in our deployments.