Advanced DevSecOps: End-to-End Three-Tier Kubernetes on AKS with FluxCD, Grafana & Prometheus

Kunal Maurya

Introduction

This project document details the deployment of a secure, scalable, and observable three-tier application on Azure Kubernetes Service (AKS). The implementation utilizes Terraform for infrastructure provisioning, HashiCorp Vault for secrets management, GitLab CI/CD with Trivy and SonarQube for secure pipelines, FluxCD with Kustomize for GitOps deployments, and Prometheus with Grafana for monitoring.

GitHub repository link : https://github.com/KUNAL-MAURYA1470/End-to-End-Azure-DevSecOps-Project

Project Hierarchy

Component Highlights:

Breakdown of the architecture into key components:

🏗️ Infrastructure Layer

  • Azure AKS: Managed Kubernetes cluster that hosts the three-tier application.

  • Terraform: Automates provisioning of infrastructure including GitLab VM, Vault VM, and AKS.

🔐 Secrets Management

  • HashiCorp Vault: Securely stores sensitive credentials like Azure client ID and secret, used by Terraform and FluxCD.

🚀 CI/CD Pipeline

  • GitLab: Hosts source code and manages CI/CD pipelines for backend and frontend.

  • Self-hosted GitLab Runner: Executes CI/CD jobs on a dedicated VM for better control and performance.

🛡️ DevSecOps Integrations

  • npm: Installs project dependencies during the build stage.

  • SonarQube: Performs static code analysis to detect bugs, code smells, and vulnerabilities.

  • Trivy: Scans files and container images for vulnerabilities and misconfigurations.

  • Docker: Builds container images for backend and frontend services.

  • OWASP Dependency-Check: Scans project dependencies for known vulnerabilities.
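To illustrate how these tools slot into a pipeline, a minimal GitLab CI job for a Trivy image scan might look like the sketch below (the stage name and image tag are assumptions, not the project's actual pipeline):

```yaml
# .gitlab-ci.yml (fragment) -- illustrative Trivy image-scan job
trivy-scan:
  stage: security-scan
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    # Fail the job if HIGH/CRITICAL vulnerabilities are found in the built image
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```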

📦 Container Registry

  • Azure ACR: Stores Docker images built during the CI process, used for deployment to AKS.

📊 Monitoring & Observability

  • Prometheus: Collects metrics from Kubernetes workloads and infrastructure.

  • Grafana: Visualizes metrics and sets up alerts for system health and performance.

🔁 GitOps Deployment

  • FluxCD: Automates deployment of Kubernetes manifests from Git repositories to AKS.

  • Kustomize: Manages environment-specific configurations for manifests.
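As a sketch of how Kustomize handles environment-specific configuration, a hypothetical overlay might look like this (paths and names are illustrative, not taken from the repo):

```yaml
# kustomize/overlays/prod/kustomization.yaml -- illustrative, not from the repo
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Reuse the shared base manifests (Deployment, Service, etc.)
resources:
  - ../../base

# Environment-specific tweaks applied on top of the base
namespace: three-tier
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
    target:
      kind: Deployment
      name: backend
```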

Workflow:

Step 1: Create a GitLab VM on Azure using Terraform, along with necessary networking resources.
Step 2: Deploy the GitLab infrastructure using a GitLab CI pipeline.
Step 3: Set up a HashiCorp Vault VM using Terraform to manage secrets securely.
Step 4: Deploy the Vault infrastructure using GitLab CI.
Step 5: Configure Vault by creating roles, policies, and storing Azure AKS credentials.
Step 6: Write Terraform scripts to provision the Azure AKS cluster.
Step 7: Deploy the AKS cluster using GitLab CI.
Step 8: Start working on the backend source code.
Step 9: Create a Dockerfile to containerize the backend application.
Step 10: Set up a GitLab Runner for the backend repo on the self-hosted GitLab VM.
Step 11: Generate Azure ACR credentials for pushing backend images.
Step 12: Add required environment variables to the backend repository.
Step 13: Create a GitLab CI pipeline for backend build and deployment.
Step 14: Push backend code to GitLab to trigger the pipeline.
Step 15: Begin development of the frontend source code.
Step 16: Create a Dockerfile for the frontend application.
Step 17: Set up a GitLab Runner for the frontend repo on the self-hosted GitLab VM.
Step 18: Generate Azure ACR credentials for frontend image push.
Step 19: Add necessary variables to the frontend repository.
Step 20: Create a GitLab CI pipeline for frontend build and deployment.
Step 21: Push frontend code to GitLab to trigger the pipeline.
Step 22: Prepare for application deployment on AKS.
Step 23: Use Kubernetes LoadBalancer services to expose backend and frontend externally.
Step 24: Set up monitoring using Prometheus and Grafana.
Step 25: Create Kubernetes manifest files for the database (StatefulSet, PV, PVC, Secrets).
Step 26: Create manifest files for the backend (Deployment, Service).
Step 27: Create manifest files for the frontend (Deployment, Service).
Step 28: Install FluxCD on the AKS cluster.
Step 29: Bootstrap FluxCD to create its configuration repository.
Step 30: Create source and kustomization manifests for each component (DB, backend, frontend, LoadBalancer).
Step 31: Enable automated deployments—any manifest change will trigger updates via FluxCD.

Project Implementation

Click on Groups.

Click on New Group.

Click on Create group

Provide the name of your Group.

Click on Create group button.

A group has been created

Now, we need to create a subgroup for each area (Terraform, Source Code & Kubernetes Manifests).

Click on "Create New Subgroup" and create three subgroups as follows.

Go to the Infra-Code-Terraform subgroup to begin setting up infrastructure.

Before proceeding, create a Personal Access Token to enable cloning and pushing to private repositories.

Since all repositories are private and GitLab Premium is not used, create a Global Personal Access Token.

To generate the token, click on your profile and go to Edit Profile

Click on Access Tokens.

Copy your token and save it securely—you’ll need it multiple times.

Go back to your Group and open the Infra-Code-Terraform subgroup.

Click on Create New Project.

Create projects (repositories) as follows.

Before creating infrastructure from each repository, review the prerequisites.

We’ve configured a remote Terraform backend, so the tfstate file will be stored in an Azure Storage Account.

Although GitLab can store tfstate, for this project we’ll use Azure Storage instead.

To deploy infrastructure on Azure, store Azure credentials in GitLab CI/CD variables.

Now, let’s create an Azure Storage Account:

    • Go to your Azure Portal.

    • Navigate to Storage Accounts.

    • Click on Create.

Go to Resource Groups in your Azure account.

Click on Create New.

Enter a name for your Resource Group.

Click OK to create it.

After creating the Resource Group, enter a unique name for your Storage Account and click Review + Create.

Open your Azure Storage Account.

Navigate to Containers.

Click on + Container to create a new one.

Provide the name of your container and click on Create.

Now, we have completed the setup for our terraform tfstate file.

We need to add Azure credentials to the GitLab CI/CD variables section.

Currently, we have the Subscription ID and Tenant ID.

To proceed, we need to generate the Client ID and Client Secret.

In your Azure account, search for "Entra ID" and click on it.

Click on App registrations > New registrations.

Provide the name of your application and click on Register.

Once you click on Register, you’ll be taken to the App Registration Overview page, as shown in the next step.

After completing the App Registration, the app needs access to your Azure Subscription to create resources.

To grant access, we’ll assign an IAM role to the app.

In your Azure account, search for "Subscriptions" and click on it.

Click on Access Control(IAM).

Click on Add and navigate to the Add role assignment.

In the Role section, click on Privileged administrator roles and select Owner.

Go to Members and click on Select members.

In the Subscriptions section of your Azure account, go to Access Control (IAM).

Click on Add Role Assignment.

Search for the member name you used during app registration (e.g., azure-devsecops).

Select the member and click Select to proceed.

Click on Conditions.

Select the second option to grant appropriate privileges.

Click on Review + Assign to complete the role assignment.

Once the role assignment is complete, go back to App Registrations in Entra ID.

Open the app you created earlier.

Click on "Add a Certificate or Secret" to generate a client secret.

Click on New Client secret

Provide the description and click on Add.

Now, you have the client secret.

To get the client ID, navigate back to the app’s Overview page and copy the client ID.

The tenant ID is shown on the same page; copy it as well.

To get the subscription ID, go to Subscriptions and copy the subscription ID.

Go to your Infra-Code-Terraform repository in GitLab.

Click on CI/CD from the left sidebar.

Navigate to Variables.

Add the required Azure credentials (Subscription ID, Tenant ID, Client ID, and Client Secret).

Add the following keys with the correct values from previous steps:

  • ARM_CLIENT_ID

  • ARM_CLIENT_SECRET

  • ARM_SUBSCRIPTION_ID

  • ARM_TENANT_ID

While adding each variable, enable the following flags:

  • Mask – to hide the value in pipeline logs.

  • Protected – to restrict usage to protected branches and tags.

Now, push the code from the azure-gitlab-vm directory into the Infra-Code-Terraform repository so that it triggers the pipeline.
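For reference, a minimal Terraform pipeline with a manual apply gate could look roughly like this (a sketch; the repository's actual .gitlab-ci.yml may differ, and the `ARM_*` variables configured above are picked up from the environment by the azurerm provider):

```yaml
# .gitlab-ci.yml (sketch) -- Terraform pipeline with a manual apply step
stages:
  - validate
  - plan
  - apply

image:
  name: hashicorp/terraform:light
  entrypoint: [""]

validate:
  stage: validate
  script:
    - terraform init
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan

apply:
  stage: apply
  when: manual        # this is the "Run Apply" play button in the pipeline view
  script:
    - terraform init
    - terraform apply -auto-approve tfplan
```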

After verifying the plan, click the play button next to 'Run Apply'.

Go to your Azure account, navigate to the Resource Group, and you will see all the services created by the pipeline.

Once the GitLab VM is created, it will serve as a self-hosted runner for all future pipelines, replacing GitLab's shared runners.

Go to azure-hashicorp-vault-vm.

Add the required variables (Azure credentials) from the earlier steps.

Push the azure-hashicorp-vault-vm code to trigger the pipeline that creates the HashiCorp Vault VM, where our Vault will run.

Go to Azure Account and check the Resource Group to see the created services.

Now, we need to set up our HashiCorp Vault server, as we are going to store the client ID and client secret used for Azure Kubernetes Service.

Login to HashiCorp Vault VM.

Run the below command to start the vault in the background.

nohup vault server -dev -dev-listen-address="0.0.0.0:8200" > vault.log 2>&1 &

To access the Vault from the console, you need a token, which is stored in the vault.log file created by running the above command.

cat vault.log | tail -10

Now, access the Vault GUI by entering the Vault server's public IP with port 8200 in your browser, and log in using the copied root token.

Once you log in, you will see the UI as follows:

Now, export the vault address by running the below command.

export VAULT_ADDR='http://0.0.0.0:8200'

We will store secrets in the Vault server, ensuring they are accessible only to authorized applications.

To manage access, HashiCorp Vault uses roles and policies.

First, we’ll enable the AppRole authentication method, then create a role and bind it to a policy that defines access permissions.

vault auth enable approle

Now, enable the KV secrets engine so we can create secrets.

vault secrets enable -path=secrets kv-v2

In the Vault console, you can see secrets are present.

Now, create a Secret by providing the path and then add your client ID and client secret credentials.

The credentials are added.

Now, create the policy specifying that the secret path can only be read by the AppRole.

vault policy write terraform - <<EOF
path "secrets/data/*" {
  capabilities = ["read"]
}
EOF

Now, create the approle and associate it with the policy that we have created above.

vault write auth/approle/role/terraform \
  secret_id_ttl=60m \
  token_num_uses=60 \
  token_ttl=60m \
  token_max_ttl=60m \
  secret_id_num_uses=60 \
  token_policies=terraform

Now, we need the role ID which will help us to integrate with Terraform while creating AKS.

vault read auth/approle/role/terraform/role-id

Now, copy the secret ID by running the below command.

vault write -f auth/approle/role/terraform/secret-id

Once you obtain the Role ID and Secret ID, add them to the GitLab CI/CD variables section for your AKS project, using appropriate keys and values.

With the HashiCorp Vault VM successfully configured, the next step is to deploy Azure Kubernetes Service (AKS).

Ensure that the required Azure credentials have been added as CI/CD variables, as done in earlier steps.

Then, push the necessary code to the azure-aks repository to trigger the deployment pipeline.

Let’s go to the Azure account and check the Resource Group to see the created services.

The services we created using Terraform are now available.

To connect to our AKS cluster, go to Azure > Azure Kubernetes Service > Connect > Cloud Shell.

Copy and run the “set the cluster subscription” command.

Validate whether your Azure AKS is working fine or not by running the below command.

kubectl get nodes

Next, we'll configure monitoring using Prometheus and Grafana.

Start by creating a dedicated namespace for Prometheus in your Kubernetes cluster.

kubectl create ns prometheus

Add the Prometheus helm repo and update the repo.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Install the Prometheus Helm chart

helm install prometheus prometheus-community/prometheus -n prometheus

To expose the Prometheus server externally, we'll create an additional service of type LoadBalancer for it.
First, let's check the existing services in the prometheus namespace.

kubectl get svc -n prometheus

Run the below command to expose the Prometheus service outside of our cluster.

kubectl expose service prometheus-server --type=LoadBalancer --target-port=9090 --name=prometheus-server-ext -n prometheus
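As an optional alternative to `kubectl expose` (not what the steps above do), the service type can be set at install time with a Helm values file, assuming the prometheus-community chart's `server.service` values:

```yaml
# values.yaml (optional alternative) -- expose prometheus-server via Helm itself
# install with: helm install prometheus prometheus-community/prometheus -n prometheus -f values.yaml
server:
  service:
    type: LoadBalancer
```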

Copy the Public IP of Prometheus server and hit it on your favorite browser to access the Prometheus Server.

Now, we will configure Grafana to visualize metrics from our Kubernetes cluster.

Create a dedicated namespace for it.

kubectl create ns grafana

Install the Grafana Helm chart.

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana grafana/grafana -n grafana

Now, expose the Grafana server to the outside of the cluster.

kubectl expose service grafana --type=LoadBalancer --target-port=3000 --name=grafana-ext -n grafana

Run the below command to get the external IP to access Grafana.

kubectl get svc -n grafana

To access the Grafana dashboard, you will need a username and password.

The username is admin. To get the password, run the below command.

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Copy the Public IP of the Grafana server and hit it on your favorite browser to access the Grafana server and provide the password that we received after running the previous command.

Once you log in, you need to add the Data Sources to monitor your Kubernetes Cluster

Click on Data Sources.

Click on Prometheus as a data source.

Provide the Prometheus server URL like the below snippet and click on Save & test.
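The same data source can also be defined declaratively with Grafana's provisioning format (a sketch; the in-cluster URL assumes the Prometheus service names used above):

```yaml
# datasources.yaml (sketch) -- Grafana data source provisioning
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # In-cluster DNS name of the Prometheus service created earlier
    url: http://prometheus-server.prometheus.svc.cluster.local
    isDefault: true
```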

Now, click on Dashboard which is showing on the left.

Click on Create Dashboard to create a custom dashboard.

Click on the Import dashboard.

Provide the ID 6417 to import the dashboard.

6417 is the ID of a community dashboard that visualizes Kubernetes data in Grafana.

Click on Load.

Once you land on the dashboard, select the data source as Prometheus that we configured in the earlier step and click on import.

Here you can see the dashboard.

Finally, we are going towards our Source Code deployment.

Before building Docker images for the frontend and backend, create the necessary projects in GitLab under the Source-Code-Build subgroup.

Before pushing our code to the repo, we need to set up SonarQube.

On the Azure GitLab VM, run the docker ps command—you should see a running SonarQube container.
This setup was configured during the GitLab VM creation using Terraform.

Copy the Public IP of your Gitlab VM and paste it on your favorite browser with Port 9000

The default username and password are both admin.

Regenerate the password.

Now, we need to create a project for Code Analysis for our backend code.

Click on manually.

Provide the name of your project and click on Setup.

As we are using GitLab, click on With GitLab CI to analyze the repository (backend).

Select Other as the Project key.

Now, you will get one instruction in which you need to create a file named sonar-project.properties and copy and paste the content where your backend code is located.

Go back to your SonarQube and click on Continue.

Now you need to generate the token and add it to your backend GitLab repository.

Click on Generate a token.

Copy the token and click on Continue.

Now we need to add the token to our backend GitLab repository as follows.

Then, go back to your SonarQube and click on Continue after adding both variables.

You will see a long snippet, which is the code-analysis stage to add to your .gitlab-ci.yml.

This will perform the code analysis on your backend code.

Copy the content

Paste the configuration into your .gitlab-ci.yml file.
After the sonarqube-check job, make sure to manually define the stage, as it won't be included by default in SonarQube's template.
Don't forget this step. For reference, you can check my .gitlab-ci.yml file—specifically, look at the third line of the script.
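For reference, the generated job typically looks like the sketch below, based on SonarQube's standard GitLab template; the stage name is an assumption you define yourself, and `SONAR_HOST_URL`/`SONAR_TOKEN` come from the CI/CD variables added earlier:

```yaml
# .gitlab-ci.yml (fragment, sketch) -- SonarQube analysis job
sonarqube-check:
  stage: code-analysis   # define this stage explicitly; the template omits it
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"
    GIT_DEPTH: "0"       # full clone so SonarQube can see blame history
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner
  allow_failure: true
```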

We’ve completed the SonarQube setup for our backend code.

We'll follow the same process for the frontend, but only after the backend pipeline has successfully completed.

Once the Docker image is built, it will be pushed to Azure's private container registry.

If you've reviewed the azure-aks repository, you'll notice that two ACRs have been created—one for the backend and one for the frontend.

To proceed, log in to your Azure account, search for Container Registry, and click on it to access the registry details.

Go to your backend registry.

Now, we need to generate the Access keys for our backend registry because we are working on Private ACR.

Click on the Access keys showing in the left pane.

Click on the checkbox of Admin user.

Once you fill the checkbox, you can see the password for our ACRs.

Now, we need three values for our GitLab repo variables: the username, the login server, and a password (either of the two generated passwords will work).

So, copy the given values for each variable name and paste it into your GitLab backend repo’s variable section.

Next, add your DockerHub username and Personal Access Token (PAT) as CI/CD variables in your GitLab backend repository.
To generate a PAT, log in to your DockerHub account, go to your Profile, click on My Account, and then select New Access Token.

Kindly add all variables as shown in the below screenshot.

Now, push the backend code to the repository to trigger the pipeline.

Before triggering the backend pipeline, make sure to push the Kubernetes manifest files to the repository.

These manifests are required during the CI/CD process, and missing them will result in pipeline errors.
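As an illustration, the backend manifests might look roughly like this (the image path, labels, and port numbers are assumptions, not the project's actual values):

```yaml
# backend-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: three-tier
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          # Illustrative image path; use your backend ACR's login server
          image: <acr-login-server>/backend:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: three-tier
spec:
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
```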

If you go to Sonarqube, you will see that Sonarqube also completed the analysis of the code.

The backend setup is complete. Now, follow the same steps for the frontend:

  • Create the GitLab project under the Source-Code-Build subgroup

  • Set up SonarQube for code analysis

  • Add credentials as CI/CD variables

Go to SonarQube and click on Projects.

Set up SonarQube code analysis for the frontend.

Now, push the frontend code to the repository.

The pipeline starts automatically, as you can see in the below snippet, and completes successfully.

Now, you can see your Code smells and vulnerabilities in your frontend Sonarqube Project.

Now, we need to set up FluxCD to deploy our application without any human intervention.

Install FluxCD on the GitLab VM.

curl -s https://fluxcd.io/install.sh | sudo bash

Validate whether fluxCD is installed or not.

flux --help
flux check --pre

Export the gitlab token.

export GITLAB_TOKEN=<PAT>

FluxCD enables automated deployment of applications by continuously monitoring changes in your Git repository. Whenever a manifest file is updated, FluxCD detects the change and applies it to your Kubernetes cluster—without any manual intervention.

The key mechanism behind this is bootstrapping. During bootstrapping, FluxCD creates a repository in your GitLab account that acts as the source of truth. It continuously watches the target repository (where your Kubernetes manifests are stored) for updates.

You can control where this FluxCD repository is created by specifying the owner (your GitLab username or group) during the bootstrap process.

# --owner: the group/subgroup where the FluxCD repository will be created
# --repository: the repository Flux creates to keep its configuration
flux bootstrap gitlab \
  --deploy-token-auth \
  --owner=azure-devsecops-project/kubernetes-manifests \
  --repository=flux-config \
  --branch=main \
  --path=clusters/my-cluster

Go to GitLab and navigate to the Azure-DevSecOps-Project group. Then, go to the Kubernetes-Manifests subgroup. You will see that a flux-config repository has been created by FluxCD.

Go to that repository and check the content

Now, we need to create a deploy token to access the repository where our manifest files are stored.

Go to the Manifests repository, navigate to Settings -> Repository, and provide the details shown in the below snippet.

Copy the username and token.

Create a secret by running the below command.

flux create secret git flux-deploy-authentication \
  --url=https://gitlab.com/end-to-end-azure-kubernetes-three-tier-project/kubernetes-manifests/manifests \
  --namespace=flux-system \
  --username=<USERNAME> \
  --password=<PASSWORD>

Validate the secrets by running the below command.

kubectl -n flux-system get secrets flux-deploy-authentication -o yaml

Once you push the code to the repo, FluxCD will automatically deploy your manifest files to the Kubernetes cluster.
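The source and kustomization manifests from step 30 could be sketched like this (names and intervals are illustrative; the repository URL and secret match the ones created above):

```yaml
# source.yaml (sketch) -- tells Flux which repository to watch
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: manifests
  namespace: flux-system
spec:
  interval: 1m
  url: https://gitlab.com/end-to-end-azure-kubernetes-three-tier-project/kubernetes-manifests/manifests
  ref:
    branch: main
  secretRef:
    name: flux-deploy-authentication
---
# kustomization.yaml (sketch) -- tells Flux what to apply from that source
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: backend
  namespace: flux-system
spec:
  interval: 5m
  path: ./backend
  prune: true
  sourceRef:
    kind: GitRepository
    name: manifests
  targetNamespace: three-tier
```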

First, create the application namespace:

kubectl create ns three-tier

You can list all the objects that are created in the three-tier namespace by running the below command

kubectl get all -n three-tier

Expose the frontend service by changing its type to LoadBalancer, and access it in your browser using the external IP assigned to the LoadBalancer.
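A LoadBalancer service for the frontend could look like this (port numbers are assumptions for illustration):

```yaml
# frontend-service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: three-tier
spec:
  type: LoadBalancer   # Azure provisions a public IP for this service
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 3000   # assumed frontend container port
```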

Go to Grafana Dashboard and see the number of pods running and other stuff like CPU usage, etc.

Clean Up:

Once the deployment and setup are complete, make sure to clean up any unused resources to avoid unnecessary costs and maintain a tidy environment.

Written by Kunal Maurya