DevSecOps CI/CD Pipeline Implementation: TicTacToe Game


1. What is DevSecOps?
DevSecOps is essentially DevOps with a security mindset. While DevOps focuses on development and operations, DevSecOps integrates security at every step of the process. It’s not just about securing CI/CD pipelines but includes any task where security concerns are addressed during DevOps processes, such as:
Infrastructure as Code (IaC): Ensuring security within your Terraform, Ansible, and Kubernetes configurations.
CI/CD Pipelines: Securing the pipeline itself, ensuring security checks are implemented to catch vulnerabilities, such as outdated packages or hardcoded secrets.
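For example, on the Kubernetes side, security can be enforced directly in the manifest. Here is a generic hardening sketch (not taken from this project; the image name is a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true                  # refuse to run as root
        allowPrivilegeEscalation: false     # block privilege escalation
        readOnlyRootFilesystem: true        # container filesystem is read-only
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities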
2. Why is DevSecOps Gaining Significance?
DevSecOps has gained traction due to two primary reasons:
a. Growing Use of AI Assistance
Many developers use AI assistants to write code. However, AI might generate code that:
Hardcodes secrets like API tokens.
Uses old versions of packages that may contain critical vulnerabilities.
Integrates vulnerable packages, opening up security risks.
Without DevSecOps, these risks can go undetected. DevSecOps pipelines catch these issues by checking for hardcoded secrets or outdated packages.
b. Cybersecurity Risks
Developers sometimes use outdated versions of packages (like Log4j) without considering known vulnerabilities. Additionally, developers might hardcode sensitive information, like API tokens, which can later be exposed if not properly managed. With DevSecOps, security measures such as scanning for vulnerabilities, auditing code for secrets, and other security checks are added to CI/CD pipelines.
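As a small illustration (a sketch only, not part of this project's pipeline; gitleaks is just one example of a secret scanner), a GitHub Actions job like the following would fail the build on vulnerable dependencies or committed secrets:
jobs:
  security-checks:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so the secret scan can inspect past commits
      - name: Audit npm dependencies for known vulnerabilities
        run: npm audit --audit-level=high
      - name: Scan the repository for hardcoded secrets
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}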
3. Setting Up a DevSecOps Pipeline for a TypeScript Application
Let’s go through the process of setting up a DevSecOps pipeline for a TypeScript application that uses Vite as the build tool and npm as the package manager.
a. Cloning the GitHub Repository
A public GitHub repository is provided for the TypeScript application.
The repository includes:
Complete source code for a TicTacToe application.
README file explaining the project structure, dependencies, and setup instructions.
A CI/CD YAML file for the DevSecOps pipeline.
To clone the repository:
git clone <repository_url>
cd DevSecOps-demo
b. Installing Dependencies
Once you’ve cloned the repository, install dependencies using npm:
npm install
- This will download all dependencies listed in the package.json file.
c. Running the Application Locally
To run the application:
- First, build the application using the following command:
npm run build
This runs vite build, which bundles the TypeScript code and creates static assets in the dist/ folder.
- Then, start the development server:
npm run dev
- This starts the Vite development server, and the application will be accessible on localhost:5173.
4. Dockerizing the Application
Before implementing the DevSecOps pipeline, understand how to containerize the application using Docker.
a. Writing the Dockerfile
To Dockerize the TypeScript application, we need a multi-stage Dockerfile.
Stage 1 (Build):
Use the Node.js image to install dependencies and build the application.
The application is built using the npm run build command, and the output is stored in the dist/ folder.
Stage 2 (Production):
- Use an nginx image to serve the static assets (the contents of the dist/ folder).
Here's the Dockerfile:
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci   # or: RUN npm install
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
# Add nginx configuration if needed
# COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
b. Building and Running the Docker Image
- Build the Docker image:
docker build -t tictactoe-demo:v1 .
- Run the Docker container:
docker run -d -p 9099:80 tictactoe-demo:v1
- Access the application on localhost:9099.
5. DevSecOps Pipeline (CI/CD Pipeline)
Now that you understand how to run the application locally and containerize it, let’s move on to setting up the DevSecOps pipeline.
a. Structure of DevSecOps Pipeline
A DevSecOps pipeline typically consists of several stages or jobs. Here's a general flow:
Code Commit / Pull Request:
- When a developer pushes a commit or opens a pull request in the GitHub repository, the pipeline is triggered.
Static Code Analysis:
Analyze the code for:
Unused variables.
Deprecated or old package versions.
Hardcoded secrets.
Unit Tests:
- Run unit tests to ensure the code works as expected.
Build:
- Build the application (e.g., using Vite for a TypeScript app).
Security Scans:
Check for known vulnerabilities in dependencies.
Ensure no sensitive data (e.g., API keys) is hardcoded.
Docker Image Build:
- Build the Docker image for the application.
Deploy:
- Deploy the application to the environment (e.g., staging, production).
b. Example GitHub Actions YAML for DevSecOps Pipeline
name: CI/CD Pipeline

on:
  push:
    branches: [ main ]
    paths-ignore:
      - 'kubernetes/deployment.yaml'  # Ignore changes to this file to prevent loops
  pull_request:
    branches: [ main ]

jobs:
  test:
    name: Unit Testing
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test || echo "No tests found, would add tests in a real project"

  lint:
    name: Static Code Analysis
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'  # cache npm dependencies for faster installs
      - name: Install dependencies
        run: npm ci
      - name: Run ESLint
        run: npm run lint

  build:
    name: Build
    runs-on: ubuntu-latest
    needs: [test, lint]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build project
        run: npm run build
      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build-artifacts
          path: dist/

  docker:
    name: Docker Build and Push
    runs-on: ubuntu-latest
    needs: [build]
    env:
      REGISTRY: ghcr.io
      IMAGE_NAME: ${{ github.repository }}
    outputs:
      image_tag: ${{ steps.set_output.outputs.image_tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: build-artifacts
          path: dist/
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.TOKEN }}
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,format=long
            type=ref,event=branch
            latest
      - name: Build Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: false
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          load: true
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:sha-${{ github.sha }}
          format: 'table'
          exit-code: '1'
          ignore-unfixed: true
          vuln-type: 'os,library'
          severity: 'CRITICAL,HIGH'
      - name: Push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: Set image tag output
        id: set_output
        run: echo "image_tag=$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT

  update-k8s:
    name: Update Kubernetes Deployment
    runs-on: ubuntu-latest
    needs: [docker]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.TOKEN }}
      - name: Setup Git config
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
      - name: Update Kubernetes deployment file
        env:
          IMAGE_TAG: sha-${{ github.sha }}
          GITHUB_REPOSITORY: ${{ github.repository }}
          REGISTRY: ghcr.io
        run: |
          # Define the new image with tag
          NEW_IMAGE="${REGISTRY}/${GITHUB_REPOSITORY}:${IMAGE_TAG}"
          # Update the deployment file directly
          sed -i "s|image: ${REGISTRY}/.*|image: ${NEW_IMAGE}|g" kubernetes/deployment.yaml
          # Verify the change
          echo "Updated deployment to use image: ${NEW_IMAGE}"
          grep -A 1 "image:" kubernetes/deployment.yaml
      - name: Commit and push changes
        run: |
          git add kubernetes/deployment.yaml
          git commit -m "Update Kubernetes deployment with new image tag: ${{ needs.docker.outputs.image_tag }} [skip ci]" || echo "No changes to commit"
          git push
c. Explanation of the Pipeline
Static Analysis Job: Runs npm run lint to check for code quality issues (e.g., unused variables).
Unit Tests Job: Runs the unit tests; in the workflow above it runs in parallel with the static analysis job.
Docker Image Build Job: Builds the Docker image for the application.
Security Scan Job: Runs Trivy against the built Docker image to check for known vulnerabilities.
Update Deployment Job: Updates the Kubernetes manifest with the new image tag so Argo CD can deploy it to the cluster.
Docker Stage & Image Management in the CI/CD Pipeline
Once the initial build is successful, the next stage in the pipeline is the Docker stage. This involves several steps:
Building the Docker Image:
- In the Docker stage, the pipeline builds the Docker image for the application.
Image Scanning:
- The built Docker image is scanned using security tools (e.g., Trivy) to identify potential vulnerabilities.
Pushing the Image:
- The Docker image is then pushed to a container registry. In this case, we're using GitHub Container Registry instead of Docker Hub for security reasons, as many organizations prefer using private registries for DevSecOps purposes.
Steps to push the image to GitHub Container Registry
- Generate a Personal Access Token (PAT) in GitHub.
This PAT is the API token the workflow will use to authenticate.
- Add this Personal Access Token to your GitHub repository settings as a secret.
Make sure you name the secret “TOKEN”, matching the name used in the cicd.yml workflow. The PAT is now stored as a repository secret (note: repository settings, not your GitHub account settings).
- name: Login to GitHub Container Registry
  uses: docker/login-action@v3
  with:
    registry: ${{ env.REGISTRY }}
    username: ${{ github.actor }}
    password: ${{ secrets.TOKEN }}
With this step, the workflow can now log in to GitHub Container Registry.
Defining your own set of Docker image tags to make them more production-specific
- name: Extract metadata for Docker
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
    tags: |
      type=sha,format=long
      type=ref,event=branch
      latest
The build step below uses the tags specified above: type=sha,format=long, type=ref,event=branch, and latest (skip the last two if you don’t want multiple tags).
- name: Build Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: false
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
    load: true
Every time an image is generated, its tags will be unique (the SHA-based tag changes with every commit).
Updating Kubernetes Manifest
Once the Docker image is created and pushed to the container registry, the next step is to update the Kubernetes manifest files. Specifically:
The deployment.yaml file in the Kubernetes folder is updated with the new Docker image tag.
This process is automated via a shell step in the workflow (the sed command shown earlier), ensuring that the deployment file is updated whenever a new Docker image is pushed.
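For context, here is a hypothetical excerpt of kubernetes/deployment.yaml (the actual file in the repository may differ; names are placeholders). The sed command only needs the image: line to start with the ghcr.io registry prefix:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tictactoe
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tictactoe
  template:
    metadata:
      labels:
        app: tictactoe
    spec:
      containers:
        - name: tictactoe
          image: ghcr.io/<your-username>/<your-repo>:sha-<commit-sha>   # rewritten by the pipeline
          ports:
            - containerPort: 80   # nginx serves the static build on port 80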
Continuous Integration (CI) and Continuous Deployment (CD)
CI Pipeline: After the CI pipeline completes, the updated image tag is pushed to the repository.
CD Pipeline: Argo CD is used to continuously monitor for changes in the image tag or deployment files.
- Argo CD triggers a deployment to the Kubernetes cluster whenever it detects an updated image tag or changes in the Kubernetes manifests.
GitHub Actions Workflow Structure
The process is managed by a GitHub Actions Workflow, which consists of multiple jobs:
Unit Testing Job
Static Code Analysis Job
Docker Build Job
Image Tag Update Job
Each job may consist of multiple steps. For example:
The Docker job includes:
Building the Docker image
Scanning the image for vulnerabilities
Pushing the image to GitHub Container Registry
The CD pipeline, on the other hand, uses Argo CD to detect changes and deploy new images to Kubernetes.
Writing a GitHub Workflow (YAML) File
Writing a GitHub workflow file might seem daunting at first, but it’s very manageable if you break it down. Here’s a simple guide to get started:
Naming the Workflow:
- The first part of the GitHub workflow is naming the workflow. For example, you might call it “CI/CD Pipeline” or “DevSecOps Pipeline”.
Setting the Trigger (on Field):
- The on field defines when the workflow should trigger.
- You can set it to trigger on push or pull_request events. For example, you might trigger it on push to the main branch or when a pull request is created.
- You can also define conditions for ignoring specific files (like README.md or deployment.yaml), preventing unnecessary builds when those files are updated.
Defining Jobs:
GitHub Actions workflows are structured around jobs.
For instance, you might have jobs like:
Unit testing: Runs unit tests using a specific node version.
Static code analysis: Runs security scans or code quality checks.
Docker image build: Builds, scans, and pushes Docker images.
Kubernetes deployment: Triggers deployments to Kubernetes once the Docker image is updated.
Using Actions:
GitHub Actions provides reusable actions (plugins) to avoid reinventing the wheel. These actions handle various tasks such as checking out code, setting up Node.js, or caching dependencies.
You can use actions like:
- actions/checkout@v4 for checking out the repository.
- actions/setup-node@v4 for setting up Node.js.
- actions/cache@v4 for caching dependencies to speed up subsequent builds.
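Putting the pieces of this guide together, a minimal workflow skeleton might look like the following (a sketch only; the full pipeline shown earlier is the working reference):
name: DevSecOps Pipeline

on:
  push:
    branches: [ main ]
    paths-ignore:
      - 'README.md'                    # documentation-only changes don't need a build
      - 'kubernetes/deployment.yaml'   # avoid re-triggering on the automated tag-update commit
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4     # check out the repository
      - uses: actions/setup-node@v4   # set up Node.js with npm caching
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test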
Make a change in the code and watch your pipeline run.
You can see the tag updated in your deployment file.
Now, to verify that the correct tag has been applied to the image, we will run it locally with Docker.
Running the Docker Image on an EC2 Instance or Locally:
- Pull the image from GHCR and run it on an EC2 instance (or your local machine):
docker run -d -p 1010:80 ghcr.io/amitsinghs98/devsecops-tictactoe-ga:sha-e0bc0e3f791cd458d84ba1158c10cbb9907e1e31
Don’t forget to log in to GHCR first (docker login ghcr.io), since the package is private.
Argo CD Setup:
- Install Kind to create a local Kubernetes cluster on the EC2 instance, and then install Argo CD.
- Create a namespace argocd
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
- Configure Argo CD to monitor the repository and deploy the application (an Application manifest sketch is shown further below).
kubectl get pods -n argocd -w
- We use the -w flag to watch pod creation inside the argocd namespace in real time.
- Check the services in the argocd namespace:
kubectl get svc -n argocd
- Now port-forward the argocd-server service:
kubectl port-forward svc/argocd-server 9001:80 -n argocd --address 0.0.0.0 &
- Get your password for the Argo CD login:
kubectl get secrets -n argocd
kubectl edit secret argocd-initial-admin-secret -n argocd
echo YjdsWTVYeeVURTRTMwdjc5Mg== | base64 --decode   # decode the password copied from the secret
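To have Argo CD monitor the repository (the "configure" step mentioned above), you can apply an Application manifest along these lines; the repository URL and names are placeholders, and the sync policy is optional:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tictactoe
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/<your-repo>.git
    targetRevision: main
    path: kubernetes            # folder containing deployment.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true               # delete resources removed from Git
      selfHeal: true            # revert manual changes in the cluster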
First, create the secret that imagePullSecrets: will reference, so the cluster can pull from your private GHCR repository:
kubectl create secret docker-registry github-container-registry \
  --docker-server=ghcr.io \
  --docker-username=YOUR_GITHUB_USERNAME \
  --docker-password=YOUR_GITHUB_TOKEN \
  --docker-email=amitsinghs2798@gmail.com
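The deployment’s pod spec then needs to reference that secret so the kubelet can authenticate to GHCR. A minimal excerpt (the surrounding deployment structure is assumed; the field names are standard Kubernetes):
spec:
  template:
    spec:
      imagePullSecrets:
        - name: github-container-registry   # the secret created above
      containers:
        - name: tictactoe
          image: ghcr.io/<your-username>/<your-repo>:sha-<commit-sha>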
Make a change in your GitHub repository and watch how the pipeline triggers and completes.
Go to terminal:
kubectl get svc
kubectl get pod
DONE