FinOps - Reduced Cloud Cost for CI/CD Pipeline Logs

Ankit Kundala
4 min read

🔴Problem Statement :

The client was using the ELK stack to store all of the application's log files: microservices logs (hundreds of services running), Kubernetes control plane logs, and infrastructure logs. The infrastructure logs came mostly from Jenkins, and most of those were generated by CI/CD pipeline runs (approx. 500 commits per day), primarily from the lower UAT and Staging environments; the other environments were Pre-Prod and Production. Build failures and other errors from UAT and Staging were already reported through Gmail and Slack and fixed from there, and the team rarely did any log analysis on these Jenkins log files. They were kept in the ELK stack only as a backup. ELK was therefore not really needed for these logs, since the failures were already being reported and fixed via Gmail and Slack.

💡Solution :

So, the focus was to move the UAT and Staging environment logs to S3 buckets to reduce the cost of the ELK stack. For further cost reduction, the log files can then be moved to S3 Glacier and Deep Archive.

At the end of the day, a shell script runs (manually, or automatically through a cron job) against Jenkins, pulls all of that day's Jenkins log files, and stores them in an S3 bucket. If multiple pipelines are running, the script loops through each pipeline, collects its logs, and uploads them to the bucket.

This resulted in roughly 50% cost reduction, simply by using the shell script below.
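For example, the nightly run could be scheduled with a cron entry like the one below. The script path and log path here are assumptions for illustration; point them at wherever you save the script from Step 3.

crontab -e

# Run the Jenkins log export to S3 every day at 23:55 (paths are example assumptions)
55 23 * * * /home/ubuntu/optimization.sh >> /home/ubuntu/optimization-cron.log 2>&1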

Step 1 : Install AWS CLI and Configure it

Connect to the EC2 instance from your terminal and follow the steps below.

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

This checks whether the AWS CLI has been installed; if it has, the command prints the installed CLI version.

aws configure

Here you'll be asked for an AWS Access Key ID, an AWS Secret Access Key, and a default region. You can create a new IAM user for this project or continue with an access key for an existing user: IAM > Users > select the user > Security credentials > Create access key.
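Running aws configure prompts for these values interactively; the values below are placeholders, not real credentials:

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json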

Step 2 : Install Jenkins and Java

sudo apt update
sudo apt install openjdk-17-jre

Verify Java is installed, then add the Jenkins repository key and install Jenkins:

java -version
curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins

After running these commands, Jenkins will be installed on the system. Open port 8080 in the security group's inbound rules and browse to http://<public-ip-address>:8080; you'll see the Jenkins login page.
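If the Jenkins service isn't already running, you can start it and read the initial admin password that the login page asks for (the paths below are the defaults for a Debian/Ubuntu install):

sudo systemctl enable --now jenkins
sudo systemctl status jenkins
sudo cat /var/lib/jenkins/secrets/initialAdminPassword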

Step 3 : Write Script

Create a new [optimization.sh] file and copy the script below into it. Make sure to replace the bucket name with your own S3 bucket name.

#!/bin/bash

# Variables
JENKINS_HOME="/var/lib/jenkins"  # Replace with your Jenkins home directory
S3_BUCKET="s3://jenkins-falcon"  # Replace with your S3 bucket name
DATE=$(date +%Y-%m-%d)  # Today's date

# Check if AWS CLI is installed
if ! command -v aws &> /dev/null; then
    echo "AWS CLI is not installed. Please install it to proceed."
    exit 1
fi

# Iterate through all job directories
for job_dir in "$JENKINS_HOME/jobs/"*/; do
    job_name=$(basename "$job_dir")

    # Iterate through build directories for the job
    for build_dir in "$job_dir/builds/"*/; do
        # Get build number and log file path
        build_number=$(basename "$build_dir")
        log_file="$build_dir/log"

        # Check if log file exists and was created today
        if [ -f "$log_file" ] && [ "$(date -r "$log_file" +%Y-%m-%d)" == "$DATE" ]; then
            # Upload log file to S3 with the build number as the filename
            aws s3 cp "$log_file" "$S3_BUCKET/$job_name-$build_number.log" --only-show-errors

            if [ $? -eq 0 ]; then
                echo "Uploaded: $job_name/$build_number to $S3_BUCKET/$job_name-$build_number.log"
            else
                echo "Failed to upload: $job_name/$build_number"
            fi
        fi
    done
done

After this, give execute permission to [optimization.sh].

chmod +x optimization.sh

Step 4 : Run CI/CD Pipeline and Create Log-file

For now, I have used a dummy pipeline for testing purposes.

pipeline {
    agent any
    options {
        // Timeout counter starts AFTER agent is allocated
        timeout(time: 1, unit: 'MINUTES')
    }
    stages {
        stage('Stage 1') {
            steps {
                echo 'Hello World !!!!'
            }
        }
        stage('Stage 2') {
            steps {
                echo 'Script Running !'
            }
        }
    }
}

Run this pipeline and a log file will be created for the build.
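To confirm the build actually wrote a log file on disk, you can list the builds directory (this assumes the default Jenkins home used in the script):

sudo ls -l /var/lib/jenkins/jobs/*/builds/*/log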

Step 5 : Create a New Bucket in AWS

Amazon S3 > Buckets > Create bucket > jenkins-falcon (S3 bucket names must be lowercase)
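If you prefer the CLI, the bucket can also be created from the terminal. Bucket names must be globally unique; jenkins-falcon is just the example name used here.

aws s3 mb s3://jenkins-falcon --region us-east-1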

Step 6 : Run the Script

./optimization.sh

After running the script, you'll see the files being uploaded. Open the S3 bucket in the AWS console and you'll find the log files there.
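You can also verify the uploads from the terminal:

aws s3 ls s3://jenkins-falcon/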

We've successfully transferred the log files to AWS. We can now move these files into S3 Glacier Deep Archive for further savings.
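One way to move the uploaded logs into S3 Glacier Deep Archive automatically is a bucket lifecycle rule. The sketch below transitions every object 30 days after upload; the bucket name and the 30-day threshold are assumptions, so adjust them to your own retention policy.

# Write the lifecycle rule to a local JSON file
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "jenkins-logs-to-deep-archive",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
EOF

# Apply the rule to the bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket jenkins-falcon \
  --lifecycle-configuration file://lifecycle.json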

Reference :

All thanks to Abhishek Veermala for this project!

Project Link - Youtube-Video-Link

Github Link - Project-Repo-Link
