DevOps End-to-End Project

Table of contents
- What I Built
- Why This Project?
- Setting up Ansible (Control Node)
- Deploying a Petclinic EC2 instance using Ansible
- Associate Elastic IP with the EC2 Instance
- Jenkins, SonarQube, and Nexus Setup on EC2
- Jenkins Setup
- SonarQube Setup
- Nexus Setup
- Step 1: Create the Nexus Setup Bash Script
- Step 2: Validate Nexus Setup
- Step 3: Log in to Nexus
- Step 4: Create a Nexus Repository
- Step 5: Add Nexus Credentials in Jenkins
- Step 6: Set the timestamp for artifact versioning
- Step 7: Jenkins Pipeline to Upload Artifact to Nexus
- Step 8: Confirm Artifact Upload
- Docker CI Setup
- AWS ECR Setup + Jenkins CI/CD Integration
- Deploying Spring Petclinic on AWS ECS with Docker and Jenkins
- Setting Up Slack Notification in Jenkins
- Final Jenkins Pipeline: Complete CI/CD Workflow
- Jenkins Dashboard after running the job
- Wrapping Up
- Connect with me
Hey folks! I recently wrapped up a hands-on DevOps project that brings together some of the most powerful tools and services in the DevOps world. Whether you're just starting out or brushing up your skills, I hope this post gives you real-world insights into what a modern CI/CD pipeline can look like.
Let's dive in!
What I Built
This project is all about building a complete automation pipeline: from code to container to production. I used Ansible to automate EC2 instance provisioning and environment setup, and deployed the entire CI/CD pipeline on AWS using scalable, production-like components.
Here's a quick snapshot of the tools and services involved:
- Ansible: used for automating AWS EC2 instance creation, SSH key setup, and preparing the server environment for Jenkins and CI/CD tasks
- Elastic IP: for consistent public access to key services
- Jenkins: orchestrating the CI/CD pipeline
- SonarQube + Quality Gates: code quality checks before deployment
- Nexus Repository: artifact management for WAR/JAR files or Docker images
- Docker + AWS ECR: containerizing the application and pushing images to ECR
- AWS ECS: deploying Docker containers on a scalable cluster
- Load Balancer: distributing traffic across ECS tasks for high availability
- Slack Notifications: instant updates on build and deployment status
Why This Project?
I wanted to go beyond just setting up Jenkins or pushing Docker images. The goal was to simulate a real-world enterprise DevOps pipeline, with automated quality checks, version control, containerized deployments, and real-time notifications.
Let's start working on some cool stuff!
Setting up Ansible (Control Node)
To manage infrastructure efficiently, we'll begin by setting up Ansible on a control node. This control node will execute playbooks and manage other servers.
Launch the Control Node (EC2 instance):
Start by creating a new Ubuntu-based EC2 instance; this will serve as your Ansible control node.
Create and Attach IAM Role
- Create a Custom Policy with the following permissions:
Go to IAM > Policies > Create a new custom policy
Grant it the necessary permissions based on the project's requirements for provisioning. Add the below-defined permissions.
[
"ec2:DescribeImages",
"ec2:DescribeInstances",
"ec2:CreateKeyPair",
"ec2:CreateSecurityGroup",
"ec2:RunInstances",
"ec2:DescribeKeyPairs",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs",
"ec2:DescribeVpcAttribute",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:DescribeTags",
"ec2:CreateTags",
"ec2:DescribeSubnets",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInstanceAttribute",
"ec2:RevokeSecurityGroupIngress"
]
- Create a new IAM Role and attach the custom policy to it.
- Attach this IAM role to your EC2 instance (control node) from the EC2 dashboard.
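If you prefer the command line over the console, the same policy and role setup can be sketched with the AWS CLI. This is only a rough equivalent of the console steps above; the policy file, policy name, role name, and instance ID below are placeholders, not values from this project.
# Create the custom policy from a JSON file containing the EC2 permissions listed above
aws iam create-policy --policy-name AnsibleEC2Provisioning --policy-document file://ansible-ec2-policy.json
# Create a role that EC2 can assume, then attach the policy to it
aws iam create-role --role-name ansible-control-node-role --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name ansible-control-node-role --policy-arn arn:aws:iam::<account-id>:policy/AnsibleEC2Provisioning
# Wrap the role in an instance profile and attach it to the control node
aws iam create-instance-profile --instance-profile-name ansible-control-node-profile
aws iam add-role-to-instance-profile --instance-profile-name ansible-control-node-profile --role-name ansible-control-node-role
aws ec2 associate-iam-instance-profile --instance-id <control-node-instance-id> --iam-instance-profile Name=ansible-control-node-profile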
Install Ansible on the Control Node
Connect to your EC2 instance via SSH. Then run the following commands one by one:
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
This will install the latest stable version of Ansible.
Verify Installation
Once installed, confirm it by running:
ansible --version
You should see the installed version and config paths.
Deploying a Petclinic EC2 instance using Ansible
Writing Ansible Scripts
Once the control node is ready, let's move on to writing the Ansible scripts that will help automate the creation of EC2 instances required for our PetClinic project.
Step 1: Create a Project Directory
SSH into your EC2 Control Node and create a new directory petclinic_project:
mkdir petclinic_project
cd petclinic_project
Step 2: Create Configuration & Playbook Files
We'll create three files:
- ansible.cfg: Ansible configuration
- aws_ec2.yaml: dynamic inventory
- ec2-creation.yaml: main Ansible playbook to create EC2 instances
Ansible Configuration: ansible.cfg
Create a file named ansible.cfg and paste the following:
[defaults]
host_key_checking = False
inventory = ./inventory/aws_ec2.yaml
forks = 5
log_path = /var/log/ansible.log
[privilege_escalation]
become = True
become_method = sudo
become_ask_pass = False
[inventory]
enable_plugins = aws_ec2
This config sets up Ansible to work with AWS dynamic inventory and allows privilege escalation for tasks. It:
- Points to our dynamic inventory file (aws_ec2.yaml)
- Disables host key checking
- Enables privilege escalation
Dynamic Inventory File: aws_ec2.yaml
Inside the inventory/ folder, create a file named aws_ec2.yaml:
plugin: amazon.aws.aws_ec2
regions:
- us-east-1
filters:
tag:Project: Petclinic # Filters instances with Project=Petclinic
instance-state-name: running
hostnames:
- ip-address
compose:
ansible_user: ubuntu
ansible_host: ip_address
ansible_ssh_private_key_file: /home/ubuntu/petclinic_project/petclinic_key.pem
This file enables Ansible to dynamically list EC2 hosts using tags.
- Install Required Ansible Collections
Install the amazon.aws Ansible collection if it's not already available:
ansible-galaxy collection install amazon.aws
Ansible Playbook: ec2-creation.yaml
Create another file named ec2-creation.yaml and paste the following playbook:
---
- name: EC2 instance setup
hosts: localhost
gather_facts: False
tasks:
- name: Create a new EC2 key pair
# use no_log to avoid private key being displayed into output
amazon.aws.ec2_key:
name: petclinic_key
region: us-east-1
no_log: true
register: petclinic_key_pair
- name: Get VPC details
amazon.aws.ec2_vpc_net_info:
region: us-east-1
register: vpc_info
- name: Create Security Group
amazon.aws.ec2_security_group:
name: petclinic-sg
description: Security group for Peliclinic
vpc_id: "{{vpc_info.vpcs[0].id}}"
region: us-east-1
rules:
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 443
to_port: 443
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 22
to_port: 22
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 8080
to_port: 8080
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 8081
to_port: 8081
cidr_ip: 0.0.0.0/0
tags:
Name: "PetClinicSecurityGroup"
register: petclinic_security_group
when: vpc_info.vpcs[0].instance_tenancy == "default"
- name: Create an EC2 instance
amazon.aws.ec2_instance:
name: "{{item}}"
key_name: "{{petclinic_key_pair.key.name}}"
instance_type: "{{ 't2.large' if item == 'petclinic_cicd' else 't2.micro' }}"
security_group: "{{petclinic_security_group.group_id}}"
network_interfaces:
- assign_public_ip: true
image_id: ami-0866a3c8686eaeeba
region: us-east-1
exact_count: 1
volumes:
- device_name: /dev/sda1
ebs:
volume_size: "{{ 16 if item == 'petclinic_cicd' else 8 }}"
delete_on_termination: true
tags:
Project: Petclinic
loop:
- petclinic_cicd
Step 3: Run the Ansible Playbook
Finally, execute the playbook to provision your EC2 instances:
ansible-playbook ec2-creation.yaml
This will create the key pair, security group, and EC2 instance tagged with Project: Petclinic.
Once completed, you'll have your EC2 instances ready to serve different components of your PetClinic app!
Step 4: Validate Dynamic Inventory
Once your EC2 instance has been created and properly tagged, it's time to verify that Ansible can dynamically detect and manage it.
- View the Inventory Graph
Run the following command to get a simple hierarchical view of your dynamic inventory:
ansible-inventory --graph
If everything is configured correctly, you'll see output similar to:
@all:
|--@aws_ec2:
| |--tag_Name_petclinic_cicd
This means Ansible has detected your instance and grouped it under aws_ec2
using its tag.
- View Full Inventory Details
To get a complete JSON output of all hosts and their variables:
ansible-inventory -i inventory/aws_ec2.yaml --list
This command shows hostnames, public IPs, SSH config, tags, and more, which is helpful for debugging or confirming instance visibility.
Targeting Instances in Playbooks
Once the inventory is validated, you can use dynamic host patterns in your playbooks:
- To run tasks on all dynamically discovered EC2 instances:
hosts: aws_ec2
- To run the task on a specific instance:
hosts: tag_Name_petclinic_cicd
Here, tag_Name_petclinic_cicd corresponds to the EC2 instance whose Name tag is petclinic_cicd.
Note: These dynamic hostnames are auto-generated based on your AWS tags and the filters you defined in aws_ec2.yaml.
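A quick way to confirm that the control node can actually reach the discovered hosts over SSH is an ad-hoc ping from the project directory (a sketch; it assumes the key path configured in aws_ec2.yaml is correct and the instance is running):
# Ping every dynamically discovered EC2 host
ansible aws_ec2 -m ping
# Ping only the petclinic_cicd instance
ansible tag_Name_petclinic_cicd -m ping
A SUCCESS/pong response for each host means the dynamic inventory, key, and SSH user are all wired up correctly.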
Associate Elastic IP with the EC2 Instance
Before proceeding with further setup, it's a good practice to associate an Elastic IP with the petclinic_cicd EC2 instance.
Why?
Elastic IP provides a static public IP address
Keeps the IP consistent even if you stop/start the instance
Simplifies SSH access and integration with other services
How to associate an Elastic IP using AWS Console:
Go to the EC2 Dashboard on AWS
Select Elastic IPs from the sidebar
Click Allocate Elastic IP address
Once allocated, select the new Elastic IP and click Actions > Associate Elastic IP address
Choose the petclinic_cicd instance and associate it
Now, our EC2 instance is associated with the Elastic IP address.
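For reference, the same allocation and association can be done with the AWS CLI (a sketch; the instance ID is yours, and the allocation ID is returned by the first command):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id <instance-id> --allocation-id <eipalloc-id>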
Jenkins, SonarQube, and Nexus Setup on EC2
Now that the EC2 instance (petclinic_cicd) is up and running, it's time to install and configure Jenkins, SonarQube, and Nexus. Let's start with Jenkins.
Jenkins Setup:
SSH into the EC2 Instance
First, SSH into the instance using the private key:
ssh -i petclinic_key.pem ubuntu@<Elastic_IP>
Jenkins Installation
Create a Bash script jenkins_setup.sh with the following content:
#!/bin/bash
sudo apt-get update
sudo apt-get install openjdk-17-jdk -y
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
Run the script:
sh jenkins_setup.sh
Tip: If you face any permission issues, run the script with sudo or switch to the root user.
Verify Jenkins Status
Check whether Jenkins is up and running:
systemctl status jenkins
If everything goes correctly, you should see an "active (running)" status for Jenkins.
- Set up the JDK and Maven in the Jenkins Tools:
Access Jenkins Dashboard
Once Jenkins is installed, access it via the browser:
http://<Elastic_IP>:8080
To unlock Jenkins, get the initial admin password:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Then:
Install the Suggested Plugins
Create a Jenkins Admin user
Proceed to the Jenkins Dashboard
Finally, we are on the Jenkins dashboard, and now it's time to set up the final plugins and other requirements.
Configure JDK and Maven in Jenkins
Go to Jenkins > Manage Jenkins > Global Tool Configuration.
JDK Setup:
- Get the JAVA_HOME path from /usr/lib/jvm/ on the petclinic_cicd instance:
ls /usr/lib/jvm/
Use this path to configure the JDK under Jenkins tools.
Maven Setup:
- Set up Maven similarly in the Maven section.
Jenkins Pipeline Tools Section
Here's how we'll reference JDK and Maven in the Jenkins pipeline:
pipeline {
agent any
tools {
jdk "OpenJDK17"
maven "Maven3"
}
}
Install Required Jenkins Plugins
Install the following plugins before setting up Nexus and SonarQube:
Nexus Artifact Uploader
SonarQube Scanner
Build Timestamp
Pipeline Maven Integration
Pipeline Utility Steps
Blue Ocean
More plugins can be added later based on project needs.
SonarQube Setup:
To automate and analyze code quality, we'll install and configure SonarQube on the same EC2 instance used for Jenkins (petclinic_cicd).
Step 1: Create a Bash Script for Installation
Create a file named sonarqube_setup.sh and add the following code to automate the SonarQube setup:
#!/bin/bash
cp /etc/sysctl.conf /root/sysctl.conf_backup
cat <<EOT> /etc/sysctl.conf
vm.max_map_count=262144
fs.file-max=65536
ulimit -n 65536
ulimit -u 4096
EOT
cp /etc/security/limits.conf /root/sec_limit.conf_backup
cat <<EOT> /etc/security/limits.conf
sonarqube - nofile 65536
sonarqube - nproc 4096
EOT
sudo apt-get update -y
sudo apt-get install openjdk-17-jdk -y
sudo update-alternatives --config java
java -version
sudo apt update
wget -q https://www.postgresql.org/media/keys/ACCC4CF8.asc -O - | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt install postgresql postgresql-contrib -y
#sudo -u postgres psql -c "SELECT version();"
sudo systemctl enable postgresql.service
sudo systemctl start postgresql.service
echo "postgres:admin123" | sudo chpasswd
runuser -l postgres -c "createuser sonar"
sudo -i -u postgres psql -c "ALTER USER sonar WITH ENCRYPTED PASSWORD 'admin123';"
sudo -i -u postgres psql -c "CREATE DATABASE sonarqube OWNER sonar;"
sudo -i -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE sonarqube to sonar;"
systemctl restart postgresql
#systemctl status -l postgresql
netstat -tulpena | grep postgres
sudo mkdir -p /sonarqube/
cd /sonarqube/
sudo curl -O https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-24.12.0.100206.zip
sudo apt-get install zip -y
sudo unzip -o sonarqube-24.12.0.100206.zip -d /opt/
sudo mv /opt/sonarqube-24.12.0.100206/ /opt/sonarqube
sudo groupadd sonar
sudo useradd -c "SonarQube - User" -d /opt/sonarqube/ -g sonar sonar
sudo chown sonar:sonar /opt/sonarqube/ -R
cp /opt/sonarqube/conf/sonar.properties /root/sonar.properties_backup
cat <<EOT> /opt/sonarqube/conf/sonar.properties
sonar.jdbc.username=sonar
sonar.jdbc.password=admin123
sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube
sonar.web.host=0.0.0.0
sonar.web.port=9000
sonar.web.javaAdditionalOpts=-server
sonar.search.javaOpts=-Xmx512m -Xms512m -XX:+HeapDumpOnOutOfMemoryError
sonar.log.level=INFO
sonar.path.logs=logs
EOT
cat <<EOT> /etc/systemd/system/sonarqube.service
[Unit]
Description=SonarQube service
After=syslog.target network.target
[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=always
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
EOT
systemctl daemon-reload
systemctl enable sonarqube.service
#systemctl start sonarqube.service
#systemctl status -l sonarqube.service
apt-get install nginx -y
rm -rf /etc/nginx/sites-enabled/default
rm -rf /etc/nginx/sites-available/default
cat <<EOT> /etc/nginx/sites-available/sonarqube
server{
listen 80;
server_name sonarqube.groophy.in;
access_log /var/log/nginx/sonar.access.log;
error_log /var/log/nginx/sonar.error.log;
proxy_buffers 16 64k;
proxy_buffer_size 128k;
location / {
proxy_pass http://127.0.0.1:9000;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
}
}
EOT
ln -s /etc/nginx/sites-available/sonarqube /etc/nginx/sites-enabled/sonarqube
systemctl enable nginx.service
#systemctl restart nginx.service
sudo ufw allow 80,9000,9001/tcp
echo "System reboot in 30 sec"
sleep 30
reboot
Run the script (as root, since it writes system config files and reboots the instance at the end):
sudo sh sonarqube_setup.sh
Post-Installation Verification
Confirm SonarQube is running:
systemctl status sonarqube
If everything goes correctly, you should see an "active (running)" status for SonarQube.
Access the web UI at:
http://<your_public_ip>:80 or http://<your_public_ip>
Remember, the public IP here is that of the petclinic_cicd instance, because SonarQube is installed on it.
Login credentials:
- Username: admin
- Password: admin (change it after the first login)
Note: We use port 80 (HTTP Port) because NGINX is reverse-proxying port 9000.
Finally, we are on the SonarQube dashboard.
Connect SonarQube with Jenkins
1. Generate SonarQube Token
Go to Administration > Security
Generate a new access token.
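If you prefer to script this step, SonarQube's Web API can also generate a user token (a sketch; it uses the admin credentials set earlier, and jenkins-token is just an example name):
curl -s -u admin:<admin_password> -X POST "http://<your_public_ip>/api/user_tokens/generate" -d "name=jenkins-token"
The response JSON contains the token value; copy it immediately, as it is shown only once.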
2. Configure SonarQube in Jenkins
Navigate to: Jenkins > Manage Jenkins > Configure System
Add SonarQube server details:
- Name: sonar-server
- Server URL: http://<your_public_ip>
- Token: click +Add to create a Secret Text credential in the Jenkins Credentials Provider, paste the generated SonarQube token, and select it here.
3. Add SonarQube Scanner in Jenkins
Go to Manage Jenkins > Global Tool Configuration and add a new SonarQube Scanner tool named sonar-scanner.
Note: The SonarQube Scanner installation option appears because the SonarQube Scanner plugin is installed.
4. Create a Webhook in SonarQube
Navigate to Administration > Configuration > Webhooks and create a webhook:
- Name: jenkins-webhook
- URL: http://<jenkins_public_ip>:8080/sonarqube-webhook
This webhook ensures that Jenkins receives the Quality Gate status once SonarQube finishes analysis.
For the URL, use the public IP of the petclinic_cicd instance with port 8080 (Jenkins runs on this port), and the route /sonarqube-webhook.
We have now successfully set up the webhook.
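The same webhook can also be created through SonarQube's Web API, which is handy when automating the setup (a sketch; adjust the credentials and IPs to your environment):
curl -s -u admin:<admin_password> -X POST "http://<your_public_ip>/api/webhooks/create" -d "name=jenkins-webhook" -d "url=http://<jenkins_public_ip>:8080/sonarqube-webhook"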
Create and Link Quality Gates
1. Create a local project
Before proceeding with the Quality Gates, let's first create a local project and then link the Quality Gate to it.
2. Navigate to the Quality Gates tab and create a new Quality Gate
3. Add rules
Include the required conditions in the Quality Gate for both New Code and Overall Code. With that, the Quality Gate is complete.
4. Link the gate to your project
Once the project is created, link the Quality Gate to it:
- Click on the created project "petclinic-cicd"
- Go to Project Settings > Quality Gate > Specify Quality Gate
Jenkins Pipeline for SonarQube Analysis
Let's write the Jenkins pipeline stages for SonarQube analysis and the Quality Gate check.
pipeline {
agent any
tools {
jdk "OpenJDK17"
maven "Maven3"
}
environment {
// Sonar env variables
SONAR_SERVER = "sonar-server"
SONAR_SCANNER = "sonar-scanner"
SONAR_PROJECT_KEY = "petclinic-ccd"
SONAR_PROJECT_NAME = "petclinic-cicd"
SONAR_PROJECT_VERSION = "v1"
}
stages {
stage('Sonar Analysis') {
environment {
SONAR_SCANNER = tool "sonar-scanner"
}
steps {
withSonarQubeEnv("${SONAR_SERVER}") {
sh '''
${SONAR_SCANNER}/bin/sonar-scanner \
-Dsonar.projectKey=${SONAR_PROJECT_KEY} \
-Dsonar.projectName=${SONAR_PROJECT_NAME} \
-Dsonar.projectVersion=${SONAR_PROJECT_VERSION} \
-Dsonar.sources=src/main/java \
-Dsonar.tests=src/test/java \
-Dsonar.junit.reportPaths=target/surefire-reports \
-Dsonar.java.binaries=target/classes \
-Dsonar.jacoco.reportPaths=target/jacoco.exec \
-Dsonar.checkstyle.reportPaths=target/checkstyle-result.xml
'''
}
}
}
stage('SonarQube Quality Gate') {
steps {
timeout(time: 1, unit: 'HOURS') {
// waitForQualityGate abortPipeline: true
script {
def gateStatus = waitForQualityGate()
if (gateStatus.status != 'OK') {
error "Pipeline aborted due to quality gate failure: ${gateStatus.status}"
}
}
}
}
}
}
}
SonarQube result after running the pipeline code:
Nexus Setup
β Step 1: Create the Nexus Setup Bash Script
File: nexus_setup.sh
#!/bin/bash
# Install OpenJDK 17 and wget
sudo apt-get update -y
sudo apt-get install openjdk-17-jdk wget -y
# Create directories for Nexus
sudo mkdir -p /opt/nexus/
sudo mkdir -p /tmp/nexus/
# Navigate to temporary directory for Nexus download
cd /tmp/nexus/
# Download Nexus
NEXUSURL="https://download.sonatype.com/nexus/3/latest-unix.tar.gz"
wget $NEXUSURL -O nexus.tar.gz
# Wait for download to complete and extract Nexus
sleep 10
EXTOUT=$(tar xzvf nexus.tar.gz)
# Get the directory name from the extraction path
NEXUSDIR=$(echo $EXTOUT | cut -d '/' -f1)
# Clean up the downloaded tar.gz file
sleep 5
sudo rm -rf /tmp/nexus/nexus.tar.gz
# Copy Nexus files to /opt/nexus/
sudo cp -r /tmp/nexus/* /opt/nexus/
# Wait a moment before moving on
sleep 5
# Add a system user 'nexus' to run the Nexus service
sudo useradd nexus
# Change the ownership of Nexus files to the nexus user
sudo chown -R nexus:nexus /opt/nexus
# Create the Nexus systemd service file
cat <<EOT | sudo tee /etc/systemd/system/nexus.service
[Unit]
Description=nexus service
After=network.target
[Service]
Type=forking
LimitNOFILE=65536
ExecStart=/opt/nexus/$NEXUSDIR/bin/nexus start
ExecStop=/opt/nexus/$NEXUSDIR/bin/nexus stop
User=nexus
Restart=on-abort
[Install]
WantedBy=multi-user.target
EOT
# Configure Nexus to run as the 'nexus' user
echo 'run_as_user="nexus"' | sudo tee /opt/nexus/$NEXUSDIR/bin/nexus.rc
# Reload systemd to read the new service file
sudo systemctl daemon-reload
# Start and enable Nexus service to run on boot
sudo systemctl start nexus
sudo systemctl enable nexus
Run it:
sh nexus_setup.sh
Step 2: Validate Nexus Setup
Check the service status:
sudo systemctl status nexus
Open Nexus in your browser: http://<public-ip>:8081
Note: In our case, the public IP address is that of the petclinic_cicd instance, because we are setting up Nexus on the same instance. Also, the default port for Nexus is 8081.
Now, we are able to access the Nexus successfully.
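Nexus can take a couple of minutes to start. As a quick readiness check from the shell, you can poll its status endpoint (a sketch; the path below is the standard Nexus 3 status API, so adjust it if your version differs):
curl -s -o /dev/null -w "%{http_code}\n" http://<public-ip>:8081/service/rest/v1/status
An HTTP 200 here means Nexus is up and ready to serve requests.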
Step 3: Log in to Nexus
Get the default admin password:
sudo cat /opt/nexus/sonatype-work/nexus3/admin.password
Log in using admin and the above password, then set a new password.
Step 4: Create a Nexus Repository
Go to Settings > Repositories > Create repository.
Choose Maven2 (hosted).
Set:
- Name: petclinic-artifact
- Deployment policy: Allow redeploy
Save it.
The repository name βpetclinic-artifactβ will be used in the Jenkins pipeline.
Step 5: Add Nexus Credentials in Jenkins
Navigate to: Jenkins > Manage Jenkins > Credentials > Global > Add Credentials.
Choose:
- Kind: Username with password
- ID: nexus-creds
- Username: the Nexus admin username
- Password: the Nexus admin password
Step 6: Set the timestamp for artifact versioning
In Manage Jenkins > Configure System, enable the Build Timestamp section (provided by the Build Timestamp plugin installed earlier) and pick a date/time pattern. The pipeline reads it through the BUILD_TIMESTAMP variable, so the uploaded artifact carries this timestamp in its version.
Step 7: Jenkins Pipeline to Upload Artifact to Nexus
pipeline {
agent any
tools {
jdk "OpenJDK17"
maven "Maven3"
}
stages {
stage('Upload artifact to Nexus') {
steps {
script {
def pom = readMavenPom file: 'pom.xml'
def groupId = pom.groupId
def packaging = pom.packaging
def version = pom.version
nexusArtifactUploader(
nexusVersion: 'nexus3',
protocol: 'http',
nexusUrl: '<ip_address>:8081',
groupId: groupId,
version: "${version}_ID${env.BUILD_ID}_D${env.BUILD_TIMESTAMP}",
repository: 'petclinic-artifact',
credentialsId: 'nexus-creds',
artifacts: [
[
artifactId: 'petclinic',
classifier: '',
file: "target/petclinic.${packaging}",
type: packaging
]
]
)
}
}
}
}
}
Step 8: Confirm Artifact Upload
Once the pipeline runs:
Go to Nexus
Navigate to Browse > petclinic-artifact
You will see the artifact uploaded with versioning based on Jenkins build ID and timestamp.
Docker CI Setup
Step 1: Install Docker on the petclinic_cicd Instance
Create the bash script docker_install.sh:
#!/bin/bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl -y
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Installing latest docker version
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
Run the script:
sh docker_install.sh
Step 2: Validate Docker Installation
Run the following:
docker --version
Then test with a container:
sudo docker run hello-world
You should see "Hello from Docker!", which confirms Docker is working correctly.
Step 3: Allow Jenkins to Use Docker
Add the jenkins user to the docker group:
sudo usermod -aG docker jenkins
Restart the Jenkins service so it picks up the new group membership (sudo systemctl restart jenkins), then switch to the jenkins user:
su - jenkins
Check Docker access:
docker images
If it returns the list or an empty list without permission errors, Docker is now usable by Jenkins.
Step 4: Create IAM User for ECS & ECR
In the AWS Console:
Go to IAM > Users > Add users:
- Username: Jenkins-ECR
- Access type: Programmatic access
- Permissions: Attach policies directly
  - AmazonEC2ContainerRegistryFullAccess
  - AmazonECS_FullAccess
- Complete the setup and download the .csv file containing:
  - Access Key ID
  - Secret Access Key
This .csv will be used to:
Authenticate Jenkins to push Docker images to ECR
Deploy Docker containers to ECS
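Outside of Jenkins, you can sanity-check these credentials from the petclinic_cicd shell by logging Docker in to ECR with the AWS CLI (a sketch; it assumes the AWS CLI is installed and configured with the Jenkins-ECR access keys):
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com
A "Login Succeeded" message confirms the IAM user can authenticate against the registry.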
AWS ECR Setup + Jenkins CI/CD Integration
Step 1: Create an ECR Repository
Go to AWS Console > ECR (Elastic Container Registry)
Click "Create repository"
Enter repository name: petclinic/petclinic-repo
- Note: petclinic is the namespace (optional), petclinic-repo is the repository.
Choose settings (private, default settings are fine)
Click Create
You'll get a repository URI like the following (with your 12-digit account ID in place of <AWS_ID>):
<AWS_ID>.dkr.ecr.us-east-1.amazonaws.com/petclinic/petclinic-repo
Notice that the URI contains petclinic/petclinic-repo. Here, petclinic is the namespace and petclinic-repo is the repository name. We can have multiple repositories under the same namespace.
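If you want to script this step instead of using the console, the repository can be created with the AWS CLI (a sketch):
aws ecr create-repository --repository-name petclinic/petclinic-repo --region us-east-1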
Step 2: Install Jenkins Plugins
Install these via Manage Jenkins > Plugins > Available:
Amazon ECR
Docker Pipeline
CloudBees Docker Build and Publish
Amazon Web Services SDK :: All
Step 3: Add AWS IAM Credentials in Jenkins
Go to Manage Jenkins > Credentials > System > Global credentials
Click βAdd Credentialsβ
Choose:
- Kind: Amazon Web Services Credentials
- Access Key ID: from the IAM .csv
- Secret Access Key: from the IAM .csv
- ID: iam-jenkins-ecr
Save
The ID iam-jenkins-ecr will be used in the pipeline code to access the ECS and ECR services, in the format ecr:region:credential-id.
Step 4: Add Dockerfile to Your Project Root
Dockerfile content:
FROM tomcat:9.0.98-jdk17
EXPOSE 8080
RUN rm -rf /usr/local/tomcat/webapps/*
COPY target/petclinic.war /usr/local/tomcat/webapps/ROOT.war
ENTRYPOINT ["catalina.sh", "run"]
Ensure this Dockerfile is placed in your project root directory (next to pom.xml).
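Before wiring this into Jenkins, it can be worth building and running the image once by hand on the petclinic_cicd instance to confirm the Dockerfile works. This is a sketch; it assumes Maven is available on the instance (or that target/petclinic.war already exists from a Jenkins build), and petclinic-local is just a throwaway tag:
# Build the WAR so target/petclinic.war exists for the COPY step
mvn clean install
# Build the image and run it on host port 8090 (8080 is already taken by Jenkins)
docker build -t petclinic-local .
docker run --rm -d -p 8090:8080 --name petclinic-test petclinic-local
curl -I http://localhost:8090
docker rm -f petclinic-test
If curl returns an HTTP response from Tomcat, the image is good to hand over to the pipeline.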
Step 5: Jenkins Pipeline to Build & Push to ECR
Here is the complete Jenkins pipeline code that builds the Docker image from the Dockerfile and uploads the built image to AWS ECR.
def petclinicImage // for storing built docker image
pipeline {
agent any
tools {
jdk "OpenJDK17"
maven "Maven3"
}
environment {
// AWS ECR env variables
REGISTRY_CREDENTIAL = "ecr:us-east-1:iam-jenkins-ecr"
APP_REGISTRY = "<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/petclinic/petclinic-repo"
REGISTRY_URL = "https://<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com"
}
stages {
stage('Build project docker image') {
steps {
script {
petclinicImage = docker.build("${APP_REGISTRY}:${env.BUILD_ID}", '.')
}
}
}
stage('Uploading docker image to registry') {
steps {
script {
docker.withRegistry(REGISTRY_URL, REGISTRY_CREDENTIAL) {
// push image
petclinicImage.push("${env.BUILD_NUMBER}")
petclinicImage.push("latest")
}
}
}
}
}
}
Note: The actual AWS account ID is masked in APP_REGISTRY and REGISTRY_URL for privacy. Replace <AWS_ACCOUNT_ID> with your actual 12-digit account ID.
Final Check: After Running the Pipeline
Running the pipeline should allow us to successfully build the Docker image and upload it to the AWS ECR.
Visit AWS Console > ECR > petclinic/petclinic-repo
You should see 2 image tags:
One with the build number
One with latest
Using both tags ensures traceability via build numbers and a consistent reference via latest.
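You can also confirm the pushed tags from the CLI (a sketch):
aws ecr describe-images --repository-name petclinic/petclinic-repo --region us-east-1 --query 'imageDetails[].imageTags'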
Deploying Spring Petclinic on AWS ECS with Docker and Jenkins
Once the Docker image is built and pushed to Amazon ECR, the next step is to deploy the container on AWS using Elastic Container Service (ECS). ECS is a fully managed container orchestration service that helps you run Docker containers without managing your own infrastructure.
We'll follow a structured approach and explain the purpose of each component:
1. Creating an ECS Cluster
An ECS Cluster is a logical grouping of resources (EC2 instances in EC2 mode or Fargate tasks). It acts as the environment where ECS tasks and services are deployed. Multiple services and tasks can run within the same cluster, sharing its compute resources.
Since we are using Fargate, there's no need to provision EC2 instances manually.
Navigate to the ECS service > Create a cluster:
- Select Fargate as the launch type
- Provide a cluster name (e.g., petclinic-cluster-ecs)
- Leave other defaults unless you require custom networking
Now, click on the Create button, and our ECS cluster will be created in a few minutes.
Why?
This cluster is where all your tasks (containers) will run. Think of it as your container execution environment.
2. Creating a Task Definition
A Task Definition is a blueprint for your container that defines how containers are launched in ECS. It specifies runtime configurations for containers like:
What image to use
CPU and memory requirements
Ports to expose
IAM roles, and more
Let's start creating a Task Definition:
- First, get the URI of the AWS registry from ECR
- Go to Task Definitions > Create new
- Give the required information for the input fields in the Task Definition:
  - Launch type: FARGATE
  - Task name: petclinic-task-define
  - Task role: use the default ecsTaskExecutionRole (if it's not shown, select Create a new role)
Add container:
- Name: petclinic
- Image: paste your ECR image URI (e.g., <aws_id>.dkr.ecr.us-east-1.amazonaws.com/petclinic/petclinic-repo:latest)
- Port: 8080
Note that the Image URI in the Container-1 section is <aws_id>.dkr.ecr.us-east-1.amazonaws.com/petclinic/petclinic-repo:latest, which pulls the latest image from ECR. The latest image is used by default, but we can explicitly specify a tag such as :latest or :prod.
Finally, our Task Definition is created.
Why?
This tells ECS how to run your Docker container. Without this definition, ECS doesn't know which image to run, how much memory to allocate, or what ports to expose.
3. Adding CloudWatch Permissions to ECS Role
By default, the ecsTaskExecutionRole does not include permission to write logs to CloudWatch. We need to add it, because without it ECS cannot create or write the container's log streams in CloudWatch.
Navigate to IAM > Roles > ecsTaskExecutionRole and attach the policy CloudWatchFullAccess.
Why?
Without this, your ECS task will fail to start due to logging configuration errors. This policy allows ECS to write logs from your container to CloudWatch.
4. Configuring Security Groups
You need to configure two security groups:
For ECS tasks (Fargate)
For the Load Balancer
ECS Security Group
- Inbound Rule: Allow TCP on port 8080 from the Load Balancer's security group
To cross-check the fact, go to the Task Definition revision and check the containerPort value in the JSON.
Why?
The Docker image exposes the app on port 8080. If this isn't allowed, the container won't receive traffic from the load balancer.
Load Balancer Security Group
- Inbound Rule: Allow TCP on port 80 from Anywhere (0.0.0.0/0)
Why?
This allows users to access your application via HTTP in a browser.
5. Creating a Target Group
Before setting up a load balancer, you must create a Target Group, which will route incoming traffic to the running containers.
Go to EC2 > Target Groups > Create:
Target Type: IP addresses (since Fargate uses dynamic IPs)
Port: 8080
Protocol: HTTP
Register targets
The port for the target group is 8080 because the Docker image has exposed port 8080. Hence, the running container will receive the request at the same port.
The Target Group has been created successfully.
Why?
A target group defines where the traffic should go. In our case, to IP-based containers that expose port 8080.
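The equivalent CLI call looks roughly like this (a sketch; the VPC ID is a placeholder, and petclinic-elb-tg is the target group name referenced in the load balancer section below):
aws elbv2 create-target-group --name petclinic-elb-tg --protocol HTTP --port 8080 --target-type ip --vpc-id <vpc-id>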
6. Setting Up the Application Load Balancer (ALB)
The ALB receives HTTP requests and forwards them to the ECS container through the Target Group.
Go to EC2 > Load Balancers > Create:
Type: Application Load Balancer
Listener: Port 80
Attach it to the earlier created target group petclinic-elb-tg
Assign a security group that allows traffic on port 80
Finally, the load balancer is created
Why?
ALBs are required in Fargate deployments for exposing services to the internet. They handle traffic routing and support auto-scaling and health checks.
7. Creating the ECS Service
The Service ensures that the specified number of task instances (containers) is always running. It's also responsible for associating the task with the Load Balancer.
ECS > Clusters > petclinic-cluster-ecs > Create Service:
- Launch Type: Fargate
- Task Definition: petclinic-task-define
- Number of tasks: 1
- Load Balancer: select the one you created
- Target Group: select the target group you created earlier
Now, sit back and wait, because ECS service creation takes a few minutes to complete. Meanwhile, let's have some coffee.
We have successfully deployed our application to AWS ECS.
Why?
The service ties everything together: the container image, the number of replicas, and how traffic is routed. It also handles container restart and auto-healing if a task crashes.
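For completeness, the console steps above map roughly onto the following CLI call (a sketch; the service name, subnet, security group, and target group ARN are placeholders):
aws ecs create-service \
  --cluster petclinic-cluster-ecs \
  --service-name petclinic-service \
  --task-definition petclinic-task-define \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<subnet-id>],securityGroups=[<ecs-sg-id>],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=petclinic,containerPort=8080"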
8. Accessing the Application
Once ECS has deployed the task:
Go to the ECS Service > Task > Configuration and Networking
Copy the Load Balancer DNS
Open in browser:
http://<load-balancer-dns>
Deployment successful! We're now able to access the Petclinic application.
Screenshots of the running application:
- Home Page
- Search Owners Page
- Owners List Page
- Adding New Owner Page
- Owner Information Page
- Veterinarian List Page
Setting Up Slack Notification in Jenkins
In a CI/CD pipeline, it's important to stay informed about the status of your builds without constantly monitoring Jenkins manually. This is where Slack notifications come in. Slack allows Jenkins to post real-time updates (Success, Failure, or Unstable builds) directly to a specified channel, keeping your team aligned and responsive.
We'll now walk through the process of integrating Slack notifications into our Jenkins pipeline.
Step 1: Create a Slack Workspace and Channel
- Go to Slack and create a workspace if you don't already have one. For example, we created a workspace called petclinic_cicd.
- Inside this workspace, create a channel #petclinic-cicd where Jenkins will send notifications.
Why?
The Slack channel will act as a central communication point where team members can track build results and take action if something breaks.
Step 2: Install Jenkins CI App in Slack
- Open Google and search for "Slack App Directory Jenkins CI", or directly visit the Slack Jenkins CI app page.
- Click Add to Slack and authorize the app for your workspace.
- Choose the channel (e.g., #petclinic-cicd) where notifications will be posted.
- Once the app is installed, Slack will generate an authentication token (Slack Integration Token).
Note: This token is used to authenticate Jenkins with your Slack workspace.
Step 3: Configure Slack in Jenkins
- In Jenkins, go to Manage Jenkins > Manage Plugins and install the Slack Notification plugin.
- After installation, navigate to Manage Jenkins > Configure System, scroll to the Slack section, and provide:
  - Workspace: the Slack workspace domain (e.g., petcliniccicd.slack.com)
  - Channel: #petclinic-cicd
  - Integration Token Credential ID: click Add > Jenkins, choose Secret text, paste the Slack integration token you got from Slack, give it a recognizable ID like slack-token, and select it from the dropdown list
- Click "Test Connection" to verify that Jenkins can send messages to Slack.
Why?
This integration allows Jenkins to communicate with Slack securely and ensures notifications are sent to the correct workspace and channel.
Step 4: Add Slack Notification to Jenkins Pipeline
Below is a sample Jenkinsfile snippet that sends a Slack message after the pipeline completes:
def COLOR_MAP = [
"SUCCESS": "good",
"UNSTABLE": "warning",
"FAILURE": "danger"
]
pipeline {
agent any
tools {
jdk "OpenJDK17"
maven "Maven3"
}
environment {
// Slack workspace
SLACK_CHANNEL = "#petclinic-cicd"
}
stages {
// ALL THE CICD STAGES
}
post {
always {
slackSend(
channel: "${SLACK_CHANNEL}",
color: COLOR_MAP[currentBuild.currentResult],
message: "Pipeline ${currentBuild.currentResult} for job '${env.JOB_NAME}' having build ID - ${env.BUILD_ID}. \nCheck out for more information: ${env.BUILD_URL}")
}
}
}
Why?
The post block ensures the Slack message is always sent, whether the pipeline succeeded or failed. The COLOR_MAP dynamically changes the message color (green, yellow, red) based on the result.
Final Outcome
Once this is in place and you run the Jenkins job, you'll start receiving real-time build status messages like:
- SUCCESS: Pipeline completed successfully
- UNSTABLE: Some warnings or tests failed
- FAILURE: Something broke in the pipeline
These updates will appear in your Slack channel, improving visibility and team response times.
Final Jenkins Pipeline: Complete CI/CD Workflow
Now that we've explored and understood each individual stage of our CI/CD process, from fetching the source code to uploading Docker images and sending Slack notifications, here is the complete Jenkins pipeline that brings everything together.
This pipeline handles:
Code checkout from GitHub
Build and artifact creation
Static code analysis using Checkstyle
Code quality analysis with SonarQube
Quality gate enforcement
Artifact upload to the Nexus repository
Docker image creation and push to AWS ECR
Slack notifications upon build completion
def COLOR_MAP = [
"SUCCESS": "good",
"UNSTABLE": "warning",
"FAILURE": "danger"
]
def petclinicImage // for storing built docker image
pipeline {
agent any
tools {
jdk "OpenJDK17"
maven "Maven3"
}
environment {
// Sonar env variables
SONAR_SERVER = "sonar-server"
SONAR_SCANNER = "sonar-scanner"
SONAR_PROJECT_KEY = "petclinic-ccd"
SONAR_PROJECT_NAME = "petclinic-cicd"
SONAR_PROJECT_VERSION = "v1"
// Slack workspace
SLACK_CHANNEL = "#petclinic-cicd"
// AWS ECR env variables
REGISTRY_CREDENTIAL = "ecr:us-east-1:iam-jenkins-ecr"
APP_REGISTRY = "<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/petclinic/petclinic-repo"
REGISTRY_URL = "https://<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com"
}
stages {
stage('Fetch github source code') {
steps {
git branch: 'main', url: 'https://github.com/jaiswaladi246/Petclinic.git'
}
}
stage('Build the artifact') {
steps {
sh 'mvn clean install'
}
post {
success {
archiveArtifacts artifacts: '**/*.war'
echo 'Successfully archived the artifact!!!'
}
failure {
echo 'Failed to archive the artifact...'
}
}
}
stage('Checkstyle analysis') {
steps {
sh 'mvn checkstyle:checkstyle'
}
}
stage('Sonar Analysis') {
environment {
SONAR_SCANNER = tool "sonar-scanner"
}
steps {
withSonarQubeEnv("${SONAR_SERVER}") {
sh '''
${SONAR_SCANNER}/bin/sonar-scanner \
-Dsonar.projectKey=${SONAR_PROJECT_KEY} \
-Dsonar.projectName=${SONAR_PROJECT_NAME} \
-Dsonar.projectVersion=${SONAR_PROJECT_VERSION} \
-Dsonar.sources=src/main/java \
-Dsonar.tests=src/test/java \
-Dsonar.junit.reportPaths=target/surefire-reports \
-Dsonar.java.binaries=target/classes \
-Dsonar.jacoco.reportPaths=target/jacoco.exec \
-Dsonar.checkstyle.reportPaths=target/checkstyle-result.xml
'''
}
}
}
stage('SonarQube Quality Gate') {
steps {
timeout(time: 1, unit: 'HOURS') {
// waitForQualityGate abortPipeline: true
script {
def gateStatus = waitForQualityGate()
if (gateStatus.status != 'OK') {
error "Pipeline aborted due to quality gate failure: ${gateStatus.status}"
}
}
}
}
}
stage('Upload artifact to Nexus') {
steps {
script {
def pom = readMavenPom file: 'pom.xml'
def groupId = pom.groupId
def packaging = pom.packaging
def version = pom.version
nexusArtifactUploader(
nexusVersion: 'nexus3',
protocol: 'http',
nexusUrl: '34.207.174.152:8081',
groupId: groupId,
version: "${version}_ID${env.BUILD_ID}_D${env.BUILD_TIMESTAMP}",
repository: 'petclinic-artifact',
credentialsId: 'nexus-creds',
artifacts: [
[
artifactId: 'petclinic',
classifier: '',
file: "target/petclinic.${packaging}",
type: packaging
]
]
)
}
}
}
stage('Build project docker image') {
steps {
script {
petclinicImage = docker.build("${APP_REGISTRY}:${env.BUILD_ID}", '.')
}
}
}
stage('Uploading docker image to registry') {
steps {
script {
docker.withRegistry(REGISTRY_URL, REGISTRY_CREDENTIAL) {
// push image
petclinicImage.push("${env.BUILD_NUMBER}")
petclinicImage.push("latest")
}
}
}
}
}
post {
always {
slackSend(
channel: "${SLACK_CHANNEL}",
color: COLOR_MAP[currentBuild.currentResult],
message: "Pipeline ${currentBuild.currentResult} for job '${env.JOB_NAME}' having build ID - ${env.BUILD_ID}. \nCheck out for more information: ${env.BUILD_URL}")
}
}
}
Note: For security reasons, the actual AWS account ID has been masked in the ECR URL. Replace <AWS_ACCOUNT_ID> with your actual 12-digit account ID.
Jenkins Dashboard after running the job:
Wrapping Up
This pipeline automates your entire DevOps flow, from code to production-ready image, all while maintaining quality and providing real-time notifications. By integrating SonarQube, Checkstyle, Nexus, AWS ECR, and Slack, we've built a production-grade CI/CD pipeline for the Spring PetClinic project.
Feel free to customize this pipeline to fit your application and infrastructure needs.
Connect with me:
LinkedIn: linkedin.com/in/ritik-saxena