Building a Three-Tier Blogging App with DevSecOps: The WanderLust Mega Project
Table of contents
- Prerequisites
- Key Points
- Setting Up the Environment
- Set up Jenkins
- Jenkins Shared Library
- How to create and use a shared library in Jenkins.
- Configure SonarQube
- Configure email:
- Configure OWASP:
- Integrate SonarQube with Jenkins.
- Generate a Docker token and update it in Jenkins.
- Set Docker credentials in Jenkins
- Configure ArgoCD.
- Configure the repositories in ArgoCD
- For CI Pipeline
- Troubleshooting while running CI Pipeline
- For CD Pipeline
- Connect the Wanderlust cluster to ArgoCD.
- Deploy the application through ArgoCD.
- Verify the application.
- Status in SonarQube (code quality and potential vulnerabilities)
- Image status in DockerHub
- Configure observability (Monitoring)
- Email Notification for successful deployment:
- Resources used in AWS:
- Environment Cleanup:
- Key Takeaways
- What to Avoid
- Key Benefits
- Conclusion
"Wanderlust is a travel blog web application developed using the MERN stack (MongoDB, Express.js, React, and Node.js)
. This project fosters open-source contributions, enhances React development skills, and provides hands-on experience with Git.
Prerequisites
Before diving into this project, here are some skills and tools you should be familiar with:
[x] Clone the repository for the Terraform code
[x] App Repo
Note: Replace resource names and variables in the Terraform code as per your requirements.
[x] Git and GitHub: You'll need to know the basics of Git for version control and GitHub for managing your repository.
[x] MERN Stack (MongoDB, Express, React, Node.js): A solid understanding of React for front-end development and how it integrates with MongoDB, Express, and Node.js is essential.
[x] Docker: Familiarity with containerization using Docker to package the application and its dependencies.
[x] Jenkins: Understanding continuous integration (CI) and how to set up Jenkins to automate the build and test processes.
[x] Kubernetes (AWS EKS): Some experience deploying and managing containerized applications using Kubernetes, especially with Amazon EKS.
[x] Helm: Helm charts knowledge is required for deploying applications on Kubernetes, particularly for monitoring with tools like Prometheus and Grafana.
[x] Security Tools: OWASP Dependency Check for identifying vulnerabilities, SonarQube for code quality analysis, and Trivy for scanning Docker images.
[x] ArgoCD: Familiarity with ArgoCD for continuous delivery (CD) to manage the Kubernetes application deployment.
[x] Redis: Basic knowledge of Redis for caching to improve application performance.
Key Points
GitHub – for code version control and collaboration
Docker – for containerizing applications
Jenkins – for continuous integration (CI)
OWASP Dependency-Check – for identifying vulnerable dependencies
SonarQube – for code quality and security analysis
Trivy – for filesystem scanning and security checks
ArgoCD – for continuous deployment (CD)
Redis – for caching services
AWS EKS – for managing Kubernetes clusters
Helm – for managing monitoring tools like Prometheus and Grafana
Setting Up the Environment
I have created Terraform code that sets up the entire environment automatically, including the installation of the required applications and tools and the creation of the EKS cluster.
Note ⇒ EKS cluster creation will take approx. 10 to 15 minutes.
⇒ Two EC2 machines will be created, named "Jenkins Server" and "Agent"
⇒ Docker installed
⇒ Trivy installed
⇒ Helm installed
⇒ SonarQube installed as a container
⇒ ArgoCD installed
⇒ EKS cluster set up
⇒ Prometheus installed using Helm
⇒ Grafana installed using Helm
Setting Up the Virtual Machines (EC2)
First, we'll create the necessary virtual machines using Terraform.
Once you clone the repo, go to the folder "13.Real-Time-DevOps-Project/Terraform_Code/Code_IAC_Terraform_box" and run the Terraform commands. The directory contents look like this:
cd Terraform_Code/Code_IAC_Terraform_box
$ ls -l
da---l 29/09/24 12:02 PM k8s_setup_file
-a---l 29/09/24 10:44 AM 507 .gitignore
-a---l 01/10/24 10:50 AM 3771 agent_install.sh
-a---l 01/10/24 10:59 AM 8149 main.tf
-a---l 16/07/21 4:53 PM 1696 MYLABKEY.pem
-a---l 25/07/24 9:16 PM 239 provider.tf
-a---l 01/10/24 11:26 AM 10257 terrabox_install.sh
Note ⇒ Make sure to run main.tf from inside the folder 13.Real-Time-DevOps-Project/Terraform_Code/Code_IAC_Terraform_box/.
Now, run the following Terraform commands:
terraform init
terraform fmt
terraform validate
terraform plan
terraform apply
# Optional <terraform apply --auto-approve>
Once Terraform finishes, verify the following to make sure everything was set up correctly.
Inspect the cloud-init logs
Once connected to the EC2 instance, you can check the status of the user_data script by inspecting the log files.
# Primary log file for cloud-init
sudo tail -f /var/log/cloud-init-output.log
If the user_data script runs successfully, you will see output logs and any errors encountered during execution.
If there’s an error, this log will provide clues about what failed.
Outcome of "cloud-init-output.log":
Verify the Installation
- [x] Docker version
ubuntu@ip-172-31-95-197:~$ docker --version
Docker version 24.0.7, build 24.0.7-0ubuntu4.1
docker ps -a
ubuntu@ip-172-31-94-25:~$ docker ps
- [x] trivy version
ubuntu@ip-172-31-89-97:~$ trivy version
Version: 0.55.2
- [x] Helm version
ubuntu@ip-172-31-89-97:~$ helm version
version.BuildInfo{Version:"v3.16.1", GitCommit:"5a5449dc42be07001fd5771d56429132984ab3ab", GitTreeState:"clean", GoVersion:"go1.22.7"}
- [x] Terraform version
ubuntu@ip-172-31-89-97:~$ terraform version
Terraform v1.9.6
on linux_amd64
- [x] eksctl version
ubuntu@ip-172-31-89-97:~$ eksctl version
0.191.0
- [x] kubectl version
ubuntu@ip-172-31-89-97:~$ kubectl version
Client Version: v1.31.1
Kustomize Version: v5.4.2
- [x] aws cli version
Note: use aws --version (with the dashes); running aws version on its own only prints the usage help shown below.
ubuntu@ip-172-31-89-97:~$ aws version
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
- [x] Verify the EKS cluster
On the virtual machine, go to the directory k8s_setup_file and check apply.log (cat apply.log) to verify whether the cluster was created.
ubuntu@ip-172-31-90-126:~/k8s_setup_file$ pwd
/home/ubuntu/k8s_setup_file
ubuntu@ip-172-31-90-126:~/k8s_setup_file$
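Alternatively, once the tools are in place you can confirm the cluster exists straight from the CLI (assuming the us-east-1 region used in this lab):
eksctl get cluster --region us-east-1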
After Terraform deploys the instance and the cluster is set up, you can SSH into the instance and run:
aws eks update-kubeconfig --name <cluster-name> --region <region>
Once the EKS cluster is up, run the following command so kubectl can interact with it.
aws eks update-kubeconfig --name balraj-cluster --region us-east-1
The aws eks update-kubeconfig command is used to configure your local kubectl tool to interact with an Amazon EKS (Elastic Kubernetes Service) cluster. It updates or creates a kubeconfig file that contains the necessary authentication information to allow kubectl to communicate with your specified EKS cluster.
What happens when you run this command:
The AWS CLI retrieves the required connection information for the EKS cluster (such as the API server endpoint and certificate) and updates the kubeconfig file located at ~/.kube/config (by default). It configures the authentication details needed to connect kubectl to your EKS cluster using IAM roles. After running this command, you will be able to interact with your EKS cluster using kubectl commands, such as kubectl get nodes or kubectl get pods.
kubectl get nodes
kubectl cluster-info
kubectl config get-contexts
Change the hostname (optional)
(Running sudo terraform show can help you map the public IP addresses to the two instances.)
On the Jenkins master:
sudo hostnamectl set-hostname jenkins-svr
On the agent:
sudo hostnamectl set-hostname jenkins-agent
- Update the /etc/hosts file:
- Open the file with a text editor, for example:
sudo vi /etc/hosts
Replace the old hostname with the new one where it appears in the file.
Apply the new hostname without rebooting:
sudo systemctl restart systemd-logind.service
Verify the change:
hostnamectl
Update the packages
sudo -i
apt update
Set up Jenkins
Access Jenkins via http://<jenkins-server-public-ip>:8080. Retrieve the initial admin password using:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Set up the Jenkins agent
- Set the password for user "ubuntu" on both Jenkins Master and Agent machines.
sudo passwd ubuntu
- Set up password-less authentication between the two servers.
sudo su
cat /etc/ssh/sshd_config | grep "PasswordAuthentication"
echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
cat /etc/ssh/sshd_config | grep "PasswordAuthentication"
cat /etc/ssh/sshd_config | grep "PermitRootLogin"
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
cat /etc/ssh/sshd_config | grep "PermitRootLogin"
- Restart the sshd service.
systemctl daemon-reload
or
sudo service ssh restart
- Generate the SSH key and share it with the agent.
ssh-keygen
Copy the public SSH key from Jenkins to Agent.
- Public key from Jenkins master.
ubuntu@ip-172-31-89-97:~$ cat ~/.ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG4BFDIh47LkE6huSzi6ryMKcw+Rj1+6ErnplFbOK5Nz ubuntu@ip-172-31-89-97
On the agent, this public key must end up in ~/.ssh/authorized_keys; a minimal way to copy it over is sketched below.
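A sketch run from the Jenkins master, assuming password authentication is still enabled on the agent and <agent-private-ip> is a placeholder for the agent VM's private IP:
# Push the master's public key into the agent's ~/.ssh/authorized_keys
ssh-copy-id ubuntu@<agent-private-ip>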
Now SSH to the agent; it should connect without prompting for a password.
ssh ubuntu@<private IP address of agent VM>
Open the Jenkins UI and configure the agent.
Dashboard > Manage Jenkins > Nodes
- Remote root directory: the path on the agent where the workspace folder will be created
- Launch method: Launch agents via SSH
- Host: public IP address of the agent VM
- Credentials: create a new credential for the agent
- Kind: SSH Username with private key
- Private key: the private key from the Jenkins master server
- Host Key Verification Strategy: Non verifying Verification Strategy
Congratulations: the agent is successfully configured and online. :-)
Install the plugins in Jenkins
Manage Jenkins > Plugins > Available plugins. Search for and install the following:
Blue Ocean
Pipeline: Stage View
Docker
Docker Pipeline
Kubernetes
Kubernetes CLI
OWASP Dependency-Check
SonarQube Scanner
Run any job and verify that it executes on the agent node.
- Create the pipeline below, build it, and verify the output on the agent machine.
pipeline {
agent { label "balraj"}
stages {
stage('code') {
steps {
echo 'This is cloning the code'
git branch: 'main', url: 'https://github.com/mrbalraj007/django-notes-app.git'
echo "This is cloning the code"
}
}
}
}
Jenkins Shared Library
Shared libraries in Jenkins Pipelines are reusable pieces of code that can be organized into functions and classes.
These libraries allow you to encapsulate common logic, making it easier to maintain and share across multiple pipelines and projects.
The shared library code must live inside the vars directory of your GitHub repository.
Shared library files use Groovy syntax and end with a .groovy extension.
How to create and use a shared library in Jenkins.
How to create a shared library
Log in to your Jenkins dashboard.
Go to Manage Jenkins --> System and search for Global Trusted Pipeline Libraries.
- Name: Shared
- Default version: <branch name>
- Project repository: https://github.com/mrbalraj007/Jenkins_SharedLib.git
How to use it in Jenkins pipeline
Go to your declarative pipeline
Add @Library('Shared') _ as the very first line of your Jenkins pipeline.
Note: @Library('<name>') _ is the syntax for loading a shared library.
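For reference, each shared-library function lives in a Groovy file named after the function under vars/. The repository above already provides the helpers used later in the pipelines (code_checkout, trivy_scan, docker_build, and so on); a minimal sketch of what such a function looks like, with an illustrative implementation rather than the repo's actual code:
// vars/code_checkout.groovy
// Called from a pipeline as: code_checkout("https://github.com/<user>/<repo>.git", "main")
def call(String repoUrl, String branch) {
    git url: repoUrl, branch: branch
}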
Configure SonarQube
http://<public-ip-address>:9000
Default login: admin/admin
You will be prompted to change the password, as shown in the screenshot below.
Configure email:
Open the Jenkins UI and go to Dashboard > Manage Jenkins > Credentials > System > Global credentials (unrestricted) and add a credential for the email account you will send notifications from.
Configure email notification
Dashboard > Manage Jenkins > System, then search for "Extended E-mail Notification".
Open your Gmail inbox and look for the notification email:
Configure OWASP:
Dashboard > Manage Jenkins > Tools
Search for Dependency-Check installations
Integrate SonarQube with Jenkins.
Go to SonarQube and generate a token:
Administration > Security > Users
Now, open the Jenkins UI and create a credential for SonarQube.
Dashboard> Manage Jenkins> Credentials> System> Global credentials (unrestricted)
Configure the SonarQube scanner in Jenkins.
Dashboard> Manage Jenkins> Tools
Search for SonarQube Scanner installations
Configure GitHub in Jenkins.
First, generate a personal access token in GitHub and configure it in Jenkins.
Now, open Jenkins UI
Dashboard> Manage Jenkins> Credentials> System> Global credentials (unrestricted)
Configure the SonarQube server in Jenkins.
On Jenkins UI:
Dashboard > Manage Jenkins > System, then search for "SonarQube installations".
Now, we will confirm the webhook in SonarQube. Open the SonarQube UI (Administration > Configuration > Webhooks); the webhook should point at your Jenkins server, typically http://<jenkins-ip>:8080/sonarqube-webhook/.
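If the webhook does not exist yet, it can be created from that UI page, or via SonarQube's web API; a sketch, where the admin token and both IP addresses are placeholders:
# Create a global webhook so SonarQube reports quality-gate results back to Jenkins
curl -u <sonarqube-admin-token>: -X POST "http://<sonarqube-ip>:9000/api/webhooks/create" \
  -d "name=jenkins" \
  -d "url=http://<jenkins-ip>:8080/sonarqube-webhook/"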
Generate a Docker token and update it in Jenkins.
Dashboard> Manage Jenkins> Credentials> System> Global credentials (unrestricted)
- Configure Docker (Dashboard > Manage Jenkins > Tools > Docker installations)
Name: docker, [x] Install automatically, Docker version: latest
Set Docker credentials in Jenkins
- Dashboard > Manage Jenkins > Credentials > System > Global credentials (unrestricted) ⇒ Click on "New credentials"
Kind: Username with password
Username: your Docker Hub login ID
Password: the Docker token
ID: docker-cred (this ID is referenced in the pipeline)
Description: docker-cred
Configure ArgoCD.
- Get argocd namespace
kubectl get namespace
- Get the argocd pods
kubectl get pods -n argocd
- Check argocd services
kubectl get svc -n argocd
Change argocd server's service from ClusterIP to NodePort
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
kubectl get svc -n argocd
Now, try to access ArgoCD in the browser.
Note: At first I was not able to access ArgoCD in the browser and noticed that the port was not allowed. Select any of the EKS cluster nodes, go to its security group (in my case "sg-0838bf9c407b4b3e4"; select yours), and allow the NodePort range; an example CLI command is shown below.
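A sketch using the AWS CLI; the security group ID is a placeholder for your own, and the rule opens the Kubernetes NodePort range (30000-32767), which you may want to restrict to your own IP rather than 0.0.0.0/0:
aws ec2 authorize-security-group-ingress \
  --group-id <your-node-security-group-id> \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 0.0.0.0/0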
Now, try to access ArgoCD in the browser again.
https://<IP address>:31230
https://44.192.109.76:31230/
The default login user is admin.
- To get the initial password of the argocd server
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Update the password for ArgoCD (a CLI alternative is sketched below).
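If you prefer the CLI over the UI (this assumes you have already logged in with argocd login, which is covered later in the cluster section), the admin password can be changed with:
argocd account update-password \
  --current-password <initial-password> \
  --new-password <new-password>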
Configure the repositories in ArgoCD
For CI Pipeline
Update this Jenkins file as per your requirement.
Go to the folder Wanderlust-Mega-Project, copy the Jenkins pipeline from the Git repo, and build a pipeline named Wanderlust-CI.
Make sure you change the following details before running it:
- label
- git repo
- Docker image tag
Complete pipeline:
@Library('Shared') _
pipeline {
agent {label 'Balraj'}
environment{
SONAR_HOME = tool "Sonar"
}
parameters {
string(name: 'FRONTEND_DOCKER_TAG', defaultValue: '', description: 'Setting docker image for latest push')
string(name: 'BACKEND_DOCKER_TAG', defaultValue: '', description: 'Setting docker image for latest push')
}
stages {
stage("Validate Parameters") {
steps {
script {
if (params.FRONTEND_DOCKER_TAG == '' || params.BACKEND_DOCKER_TAG == '') {
error("FRONTEND_DOCKER_TAG and BACKEND_DOCKER_TAG must be provided.")
}
}
}
}
stage("Workspace cleanup"){
steps{
script{
cleanWs()
}
}
}
stage('Git: Code Checkout') {
steps {
script{
code_checkout("https://github.com/mrbalraj007/Wanderlust-Mega-Project.git","main")
}
}
}
stage("Trivy: Filesystem scan"){
steps{
script{
trivy_scan()
}
}
}
stage("OWASP: Dependency check"){
steps{
script{
owasp_dependency()
}
}
}
stage("SonarQube: Code Analysis"){
steps{
script{
sonarqube_analysis("Sonar","wanderlust","wanderlust")
}
}
}
stage("SonarQube: Code Quality Gates"){
steps{
script{
sonarqube_code_quality()
}
}
}
stage('Exporting environment variables') {
parallel{
stage("Backend env setup"){
steps {
script{
dir("Automations"){
sh "bash updatebackendnew.sh"
}
}
}
}
stage("Frontend env setup"){
steps {
script{
dir("Automations"){
sh "bash updatefrontendnew.sh"
}
}
}
}
}
}
stage("Docker: Build Images"){
steps{
script{
dir('backend'){
docker_build("wanderlust-backend-beta","${params.BACKEND_DOCKER_TAG}","balrajsi")
}
dir('frontend'){
docker_build("wanderlust-frontend-beta","${params.FRONTEND_DOCKER_TAG}","balrajsi")
}
}
}
}
stage("Docker: Push to DockerHub"){
steps{
script{
docker_push("wanderlust-backend-beta","${params.BACKEND_DOCKER_TAG}","balrajsi")
docker_push("wanderlust-frontend-beta","${params.FRONTEND_DOCKER_TAG}","balrajsi")
}
}
}
}
post{
success{
archiveArtifacts artifacts: '*.xml', followSymlinks: false
build job: "Wanderlust-CD", parameters: [
string(name: 'FRONTEND_DOCKER_TAG', value: "${params.FRONTEND_DOCKER_TAG}"),
string(name: 'BACKEND_DOCKER_TAG', value: "${params.BACKEND_DOCKER_TAG}")
]
}
}
}
Update the Instance ID in the Automations folder
Go to the Automations folder and update the instance ID in both bash scripts; the instance ID is that of an EC2 instance belonging to the EKS cluster (a quick way to find it is sketched after the file list below).
Automations/updatebackendnew.sh
Automations/updatefrontendnew.sh
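A quick way to look up a worker node's instance ID from the CLI; this assumes EKS managed node groups, which tag their instances with eks:cluster-name, and uses this lab's cluster name (adjust to yours):
aws ec2 describe-instances \
  --filters "Name=tag:eks:cluster-name,Values=balraj-cluster" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text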
Troubleshooting while running CI Pipeline
I got an error while running the CI job the first time due to missing required parameters: FRONTEND_DOCKER_TAG and BACKEND_DOCKER_TAG.
Steps to fix: ensure the required parameters are provided.
[!Important] The first time you run the pipeline, it will fail because the parameters are not yet defined on the job. Run it a second time and pass the parameters.
When I ran it again, I received an error saying that Trivy was not found; Trivy was not installed on the Jenkins agent machine. I updated the Terraform script, and the pipeline should now work.
Next, I got the error message "permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?". I have updated the Terraform script to handle this.
Solution:
sudo usermod -aG docker $USER && newgrp docker
But I was still getting the same error message; the following fixed the issue.
"permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?"
Solution:
sudo systemctl restart jenkins
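To confirm the fix took effect on the agent (assuming builds run as the ubuntu user configured earlier):
# The docker group should now be listed for the user, and the daemon should answer without sudo
groups ubuntu
docker ps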
For CD Pipeline
Go to the folder Gitops, copy the Jenkins pipeline from the Git repo, and build a pipeline named Wanderlust-CD.
Make sure you change the following details before running it:
- git repo
- email address
Complete pipeline
@Library('Shared') _
pipeline {
agent {label 'Balraj'}
parameters {
string(name: 'FRONTEND_DOCKER_TAG', defaultValue: '', description: 'Frontend Docker tag of the image built by the CI job')
string(name: 'BACKEND_DOCKER_TAG', defaultValue: '', description: 'Backend Docker tag of the image built by the CI job')
}
stages {
stage("Workspace cleanup"){
steps{
script{
cleanWs()
}
}
}
stage('Git: Code Checkout') {
steps {
script{
code_checkout("https://github.com/mrbalraj007/Wanderlust-Mega-Project.git","main")
}
}
}
stage('Verify: Docker Image Tags') {
steps {
script{
echo "FRONTEND_DOCKER_TAG: ${params.FRONTEND_DOCKER_TAG}"
echo "BACKEND_DOCKER_TAG: ${params.BACKEND_DOCKER_TAG}"
}
}
}
stage("Update: Kubernetes manifests"){
steps{
script{
dir('kubernetes'){
sh """
sed -i -e s/wanderlust-backend-beta.*/wanderlust-backend-beta:${params.BACKEND_DOCKER_TAG}/g backend.yaml
"""
}
dir('kubernetes'){
sh """
sed -i -e s/wanderlust-frontend-beta.*/wanderlust-frontend-beta:${params.FRONTEND_DOCKER_TAG}/g frontend.yaml
"""
}
}
}
}
stage("Git: Code update and push to GitHub"){
steps{
script{
withCredentials([gitUsernamePassword(credentialsId: 'Github-cred', gitToolName: 'Default')]) {
sh '''
echo "Checking repository status: "
git status
echo "Adding changes to git: "
git add .
echo "Commiting changes: "
git commit -m "Updated environment variables"
echo "Pushing changes to github: "
git push https://github.com/mrbalraj007/Wanderlust-Mega-Project.git main
'''
}
}
}
}
}
post {
success {
script {
emailext attachLog: true,
from: 'raj10ace@gmail.com',
subject: "Wanderlust Application has been updated and deployed - '${currentBuild.result}'",
body: """
<html>
<body>
<div style="background-color: #FFA07A; padding: 10px; margin-bottom: 10px;">
<p style="color: black; font-weight: bold;">Project: ${env.JOB_NAME}</p>
</div>
<div style="background-color: #90EE90; padding: 10px; margin-bottom: 10px;">
<p style="color: black; font-weight: bold;">Build Number: ${env.BUILD_NUMBER}</p>
</div>
<div style="background-color: #87CEEB; padding: 10px; margin-bottom: 10px;">
<p style="color: black; font-weight: bold;">URL: ${env.BUILD_URL}</p>
</div>
</body>
</html>
""",
to: 'raj10ace@gmail.com',
mimeType: 'text/html'
}
}
failure {
script {
emailext attachLog: true,
from: 'raj10ace@gmail.com',
subject: "Wanderlust Application build failed - '${currentBuild.result}'",
body: """
<html>
<body>
<div style="background-color: #FFA07A; padding: 10px; margin-bottom: 10px;">
<p style="color: black; font-weight: bold;">Project: ${env.JOB_NAME}</p>
</div>
<div style="background-color: #90EE90; padding: 10px; margin-bottom: 10px;">
<p style="color: black; font-weight: bold;">Build Number: ${env.BUILD_NUMBER}</p>
</div>
</body>
</html>
""",
to: 'raj10ace@gmail.com',
mimeType: 'text/html'
}
}
}
}
Now, run the Wanderlust-CI pipeline.
When you run it again, it will prompt you to provide the image tag version, for example "v5".
I got an email confirming the successful deployment.
Connect the Wanderlust cluster to ArgoCD.
Now, we will connect (add) the cluster to ArgoCD.
On the Jenkins master node, run the following command:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-1-239.ec2.internal Ready <none> 3h48m v1.30.4-eks-a737599
ip-10-0-2-128.ec2.internal Ready <none> 3h47m v1.30.4-eks-a737599
ip-10-0-2-92.ec2.internal Ready <none> 3h48m v1.30.4-eks-a737599
ubuntu@ip-172-31-95-57:~$
ArgoCD CLI login
argocd login <argocd-ip>:<port> --username admin
- In my lab:
argocd login 44.192.109.76:31230 --username admin
It will prompt for yes/no; type y and supply the ArgoCD password.
- Now, check how many clusters are registered in ArgoCD.
argocd cluster list
- To get the Wanderlust cluster name:
kubectl config get-contexts
ubuntu@ip-172-31-95-57:~$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* arn:aws:eks:us-east-1:373160674113:cluster/balraj-cluster arn:aws:eks:us-east-1:373160674113:cluster/balraj-cluster arn:aws:eks:us-east-1:373160674113:cluster/balraj-cluster
ubuntu@ip-172-31-95-57:~$
- To add the Wanderlust cluster to ArgoCD:
argocd cluster add <your existing cluster name> --name <new cluster name>
argocd cluster add arn:aws:eks:us-east-1:373160674113:cluster/balraj-cluster --name wonderlust-eks-cluster
It will ask you to confirm; type y.
ubuntu@ip-172-31-95-57:~$ argocd cluster add arn:aws:eks:us-east-1:373160674113:cluster/balraj-cluster --name wonderlust-eks-cluster
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `arn:aws:eks:us-east-1:373160674113:cluster/balraj-cluster` with full cluster level privileges. Do you want to continue [y/N]? y
INFO[0010] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0010] ClusterRole "argocd-manager-role" created
INFO[0010] ClusterRoleBinding "argocd-manager-role-binding" created
INFO[0015] Created bearer token secret for ServiceAccount "argocd-manager"
Cluster 'https://9B7F2E2AB5BAFB3C44524B0AEA69BA1E.gr7.us-east-1.eks.amazonaws.com' added
ubuntu@ip-172-31-95-57:~$
This creates a service account, cluster role, cluster role binding, and a bearer token secret on the cluster (in the kube-system namespace).
- Now, check how many clusters are showing:
ubuntu@ip-172-31-95-57:~$ argocd cluster list
SERVER NAME VERSION STATUS MESSAGE PROJECT
https://9B7F2E2AB5BAFB3C44524B0AEA69BA1E.gr7.us-east-1.eks.amazonaws.com wonderlust-eks-cluster Unknown Cluster has no applications and is not being monitored.
https://kubernetes.default.svc in-cluster Unknown Cluster has no applications and is not being monitored.
ubuntu@ip-172-31-95-57:~$
Now, go to the ArgoCD UI and refresh the page; you will see two clusters.
Deploy the application through ArgoCD.
- Now, we will add the application in the ArgoCD UI (a CLI alternative is sketched below).
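For reference, the same application can be created from the argocd CLI; a sketch that assumes the project repo, the kubernetes manifests folder used by the CD pipeline, the wanderlust namespace, and the cluster server URL shown earlier by argocd cluster list:
argocd app create wanderlust \
  --repo https://github.com/mrbalraj007/Wanderlust-Mega-Project.git \
  --path kubernetes \
  --dest-server https://9B7F2E2AB5BAFB3C44524B0AEA69BA1E.gr7.us-east-1.eks.amazonaws.com \
  --dest-namespace wanderlust \
  --sync-policy automated
argocd app sync wanderlust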
Health of the application
Verify the application.
- Now, it's time to access the application:
<worker-public-ip>:31000
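If the port does not respond, confirm which NodePorts were actually assigned to the services in the wanderlust namespace (service names depend on the Kubernetes manifests in the repo):
kubectl get svc -n wanderlust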
Congratulations! :-) You have deployed the application successfully.
Status in SonarQube (code quality and potential vulnerabilities)
Image status in DockerHub
Configure observability (Monitoring)
List all Kubernetes pods in all namespaces:
kubectl get pods -A
- To get the existing namespaces (either form works):
kubectl get namespace
kubectl get ns
ubuntu@ip-172-31-95-57:~$ kubectl get ns
NAME STATUS AGE
argocd Active 4h34m
default Active 4h39m
kube-node-lease Active 4h39m
kube-public Active 4h39m
kube-system Active 4h39m
kubernetes-dashboard Active 4h34m
prometheus Active 4h34m
wanderlust Active 22m
ubuntu@ip-172-31-95-57:~$
To get the pods in the prometheus namespace:
kubectl get pods -n prometheus
kubectl get pods -n prometheus
NAME READY STATUS RESTARTS AGE
alertmanager-stable-kube-prometheus-sta-alertmanager-0 2/2 Running 0 4h35m
prometheus-stable-kube-prometheus-sta-prometheus-0 2/2 Running 0 4h35m
stable-grafana-86b6cdc46c-76wt5 3/3 Running 0 4h35m
stable-kube-prometheus-sta-operator-58fc7ddb6b-clcqq 1/1 Running 0 4h35m
stable-kube-state-metrics-b65996c8d-fnvqs 1/1 Running 0 4h35m
stable-prometheus-node-exporter-pjrwr 1/1 Running 0 4h35m
stable-prometheus-node-exporter-w44sw 1/1 Running 0 4h35m
stable-prometheus-node-exporter-wpkkm 1/1 Running 0 4h35m
To get the services in the prometheus namespace:
kubectl get svc -n prometheus
Expose Prometheus and Grafana to the external world through NodePort
[!Important] Change the service type from ClusterIP to NodePort. After changing it, save the change and make sure the assigned NodePort is open for the service.
- For Prometheus
kubectl patch svc stable-kube-prometheus-sta-prometheus -n prometheus -p '{"spec": {"type": "NodePort"}}'
kubectl get svc -n prometheus
- For Grafana
kubectl patch svc stable-grafana -n prometheus -p '{"spec": {"type": "NodePort"}}'
kubectl get svc -n prometheus
Verify Prometheus and Grafana accessibility
<worker-public-ip>:31205 # Prometheus <br>
<worker-public-ip>:32242 # Grafana
Note: always check kubectl get svc -n prometheus to see which port each service is running on.
http://44.192.109.76:31205/graph
Note: To get the Grafana login password, run the following command:
kubectl get secret --namespace prometheus stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
[!Note] Default user login name is admin
Dashboard:
Email Notification for successful deployment:
Resources used in AWS:
EC2 instances
EKS Cluster
Environment Cleanup:
As we are using Terraform, we will first delete the EKS cluster and then delete the virtual machines.
To delete AWS EKS cluster
- Log in to the Jenkins master EC2 instance, change the directory to ~/k8s_setup_file, and run the following command to delete the cluster.
cd ~/k8s_setup_file
sudo terraform destroy --auto-approve
Now, it's time to delete the virtual machines.
Go to the folder "13.Real-Time-DevOps-Project/Terraform_Code/Code_IAC_Terraform_box" and run the Terraform command.
cd Terraform_Code/
$ ls -l
Mode LastWriteTime Length Name
---- ------------- ------ ----
da---l 26/09/24 9:48 AM Code_IAC_Terraform_box
cd Code_IAC_Terraform_box
terraform destroy --auto-approve
Key Takeaways
Automated pipelines: This project will help you understand how to build a fully automated CI/CD pipeline from code to deployment.
Security Integration: The importance of embedding security tools like OWASP and Trivy in the DevOps pipeline ensures secure code delivery.
Real-world implementation: You’ll gain hands-on experience using modern tools in a real-world cloud environment.
What to Avoid
Skipping security checks: Security is a core part of DevSecOps. Ignoring dependency checks or filesystem scans can lead to vulnerabilities in production.
Improper resource management: In AWS EKS, over-provisioning resources can lead to unnecessary costs. Make sure to properly configure autoscaling and resource limits.
Manual interventions: Automating processes like testing, scanning, and deployments is key in DevSecOps. Manual steps can introduce errors or delays.
Key Benefits
Improved security: Using DevSecOps practices ensures that security is considered from the beginning, not as an afterthought.
Faster delivery: With CI/CD tools like Jenkins and ArgoCD, you can deliver software updates and features much faster.
Scalability: AWS EKS allows you to easily scale your Kubernetes clusters based on demand, ensuring high availability.
Conclusion
By following these steps and best practices, you can efficiently set up a CI/CD pipeline that enhances your deployment processes and streamlines your workflow.
Following these steps, you can successfully deploy and manage a Kubernetes application using Jenkins. Automating this process with Jenkins pipelines ensures consistent and reliable deployments. If you found this guide helpful, please like and subscribe to my blog for more content. Feel free to reach out if you have any questions or need further assistance!
Ref Link