Deploying a Kubernetes Cluster on Azure AKS with Cluster Auto-scaling Enabled

Good day everyone visiting my blog, this is Tech Bro once again, documenting my tech journey. In today's project, I deployed a Kubernetes cluster using Azure Kubernetes Service (AKS) with cluster auto-scaling enabled.
The cluster automatically adjusts the number of nodes based on application demand, improving resource efficiency and reducing costs.
Key configurations included setting minimum and maximum node counts, enabling availability zones for high availability, and securing access with SSH authentication.

Project Goal:

To deploy a resilient and scalable Kubernetes cluster on Azure AKS that can automatically adjust its node count based on workload demands.

Key Features:

Cluster Auto-scaling: The AKS cluster automatically scales the number of nodes based on application workloads, maintaining efficiency and reducing manual intervention.

High Availability: Nodes are distributed across multiple availability zones to ensure uptime and resilience against zone failures.

Secure Access: SSH key-based authentication is configured to securely manage node access.

Custom Node Pool: A dedicated node pool (aksnodepool) was created for better resource organization and scaling control.

Optimized Resource Usage: Minimum and maximum node counts are set to balance performance and cost-efficiency.

Let’s Dive.

Step 1.

Open VS Code on your local machine, click File ------> Open Folder, then create a new folder in any location of your choice on your local machine. I will create mine in the Documents folder on my local machine and name it "K8s Project". After creating the folder, select it to open it in VS Code.

Step 2.

Now we will open a terminal in VS Code and log in to our Azure account. To access the terminal in VS Code, navigate up beside Run, click the three dots, then click Terminal ------> New Terminal. Once the terminal is open, we log in to our Azure account with the command "az login", which opens a pop-up window to sign in to Azure.

PS: for this project to be successful, we need to make sure we have Azure CLI and kubectl installed on our local machine (kubectl is what connects to AKS; minikube is only for running local clusters and is not needed here). For faster installation, we can install Chocolatey on our local machine and use choco commands to install Azure CLI and kubectl, or run "az aks install-cli" after installing the Azure CLI.

The image below shows that our Azure account has been logged in successfully in VS Code.

Step 3.

We will now create a .ssh directory with the command "mkdir .ssh". After creating the directory, we run the command "ssh-keygen -f .ssh/aks-ssh". This command creates a new SSH key pair and saves the private key to .ssh/aks-ssh and the public key to .ssh/aks-ssh.pub. After the key pair is created, we should see output in the VS Code terminal confirming it was generated successfully.
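The step above can be run as a short non-interactive script; a minimal sketch (the -N "" flag supplies an empty passphrase so ssh-keygen does not prompt, which the interactive run in the article does not use):

```shell
# Create a .ssh directory in the project folder for the AKS key pair.
mkdir -p .ssh

# Generate the key pair; -f sets the output file, -N "" gives an empty passphrase.
ssh-keygen -t rsa -b 4096 -f .ssh/aks-ssh -N ""

# The private key is .ssh/aks-ssh and the public key is .ssh/aks-ssh.pub.
ls -l .ssh/
```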

Step 4.

After creating the SSH key pair, next we will create a resource group with the command "az group create --name aksRG --location uksouth", where "aksRG" is our resource group name and "uksouth" is the Azure region where the resource group will be situated. Once we run the command, the output shows a provisioning state of "Succeeded", which confirms we have successfully created our resource group on Azure.
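The same step as a script, with a follow-up query to confirm the result (this assumes you are already logged in via "az login"):

```shell
# Create the resource group in the UK South region.
az group create --name aksRG --location uksouth

# Confirm it exists; this prints "Succeeded" once provisioning is done.
az group show --name aksRG --query properties.provisioningState -o tsv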

Step 5.

Once we have created our resource group successfully, the next thing is to create our AKS cluster. The command to create our cluster is "az aks create --resource-group aksRG --name akscluster --node-count 2 --nodepool-name aksnodepool --node-resource-group mynodeRG --ssh-key-value ~/.ssh/aks-ssh.pub --network-plugin azure --enable-cluster-autoscaler --min-count 2 --max-count 3 --zones 2". (Note: if you generated the key inside your project folder in Step 3, point --ssh-key-value at .ssh/aks-ssh.pub instead of ~/.ssh/aks-ssh.pub.)

To break this command down for better understanding: our existing resource group is "aksRG", and the new cluster we are creating is named "akscluster". The node count is 2 and the node pool name is "aksnodepool". We also create a separate resource group for the nodes, which we named "mynodeRG", and pass in the SSH key pair we created earlier. The network plugin we are using for the cluster is Azure. Finally, we enable the cluster autoscaler with a minimum count of 2 and a maximum count of 3, and place the nodes in availability zone 2.
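For readability, the same command can be written with line continuations (a sketch assuming the resource group from Step 4 already exists; adjust --ssh-key-value to wherever your public key actually lives):

```shell
# Create the AKS cluster with the cluster autoscaler enabled.
az aks create \
  --resource-group aksRG \
  --name akscluster \
  --node-count 2 \
  --nodepool-name aksnodepool \
  --node-resource-group mynodeRG \
  --ssh-key-value ~/.ssh/aks-ssh.pub \
  --network-plugin azure \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 3 \
  --zones 2
```

Cluster creation typically takes several minutes; the command blocks until the cluster reaches the "Succeeded" state.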

Once we run this command, we will get output confirming that our AKS cluster has been created successfully; the image below shows what it looks like.

Step 6.

After creating our AKS cluster successfully, the next thing to do is run the command "az aks get-credentials --resource-group aksRG --name akscluster". This command downloads the Kubernetes access credentials (the kubeconfig file) from our AKS cluster and configures our local machine to connect to it. After running this command, we will get a message that the credentials were successfully merged.
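A quick way to double-check that the merge worked is to ask kubectl which context it is now pointing at (a sketch; assumes the cluster from Step 5 exists):

```shell
# Merge the cluster's credentials into the local kubeconfig (~/.kube/config).
az aks get-credentials --resource-group aksRG --name akscluster

# Confirm kubectl is now pointed at the AKS cluster; this prints "akscluster".
kubectl config current-context
```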

PS: in the image uploaded, I got the overwrite prompt because I had already done these steps before documenting this project, so if you are doing this for the first time on your local machine you will not get the overwrite prompt.

Step 7.

After we have successfully downloaded our kubeconfig file and connected it to our local machine, we run the command "kubectl get nodes" to check the number of nodes in our cluster. We can verify that we have 2 worker nodes up and running.

Step 8.

Now we will run the command "kubectl get deployments --all-namespaces". This command lists all Deployments running in Kubernetes; in other words, it shows all the apps deployed across the entire AKS cluster, no matter which namespace they are running in. We will also run "kubectl get namespaces" to view all the namespaces in the whole cluster (namespaces are cluster-scoped, so no -A flag is needed).
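The two inspection commands together, plus the node listing from Step 7, make a handy health check (a sketch; assumes kubectl is connected to the cluster from Step 6):

```shell
# Show the worker nodes AKS provisioned; expect 2 nodes in Ready status.
kubectl get nodes -o wide

# List every Deployment in every namespace of the cluster.
kubectl get deployments --all-namespaces

# Namespaces are cluster-scoped, so a plain listing shows them all.
kubectl get namespaces
```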

Step 9.

If we want to create a namespace, we can use the command "kubectl create namespace frontend", where "frontend" is the name of the namespace we are newly creating. Then we use the earlier command, "kubectl get namespaces", to verify our namespace has been created successfully.

After creating a namespace, we will use the command "kubectl create deployment nginx-deployment --image=nginx:latest --replicas=3 -n frontend" to create a deployment named nginx-deployment. The --image=nginx:latest flag specifies the Docker image to use for the deployment; in this case it is the official Nginx image with the latest tag, meaning it will pull the most recent version of Nginx available from Docker Hub. The --replicas=3 flag specifies that the deployment should have 3 replicas, i.e. 3 instances of the Nginx container running, which provides high availability and load balancing. The -n frontend flag specifies that the deployment will be created in the frontend namespace we created earlier.

Then we can use the command "kubectl get deployments -n frontend" to verify our deployment was created successfully.
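Step 9 end to end as a script, with an extra pod listing to watch the replicas come up (a sketch; assumes kubectl is connected to the cluster):

```shell
# Create a namespace for the frontend workloads.
kubectl create namespace frontend

# Deploy 3 replicas of the official nginx image into that namespace.
kubectl create deployment nginx-deployment \
  --image=nginx:latest --replicas=3 -n frontend

# Verify: the Deployment should report 3/3 ready once the image pulls.
kubectl get deployments -n frontend
kubectl get pods -n frontend
```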

Step 10.

Then we will run the command "kubectl create deployment nginx-deployment --image=nginx:latest --port=80". This creates a Kubernetes deployment with an Nginx container in the default namespace and specifies the port the container will expose, port 80, which is the default HTTP port for Nginx. Next we run "kubectl expose deployment nginx-deployment --name=nginx-service --type=LoadBalancer --port=80 --protocol=TCP" to expose our deployment to the internet. After our deployment has been exposed via port 80, we use "kubectl get service" to list all the Services in our current namespace in the Kubernetes cluster.
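The same step as a script (a sketch; note the Service's EXTERNAL-IP column stays at "pending" until Azure finishes provisioning the load balancer, which can take a minute or two):

```shell
# Create an nginx Deployment in the default namespace, exposing port 80.
kubectl create deployment nginx-deployment --image=nginx:latest --port=80

# Put an Azure load balancer with a public IP in front of the deployment.
kubectl expose deployment nginx-deployment \
  --name=nginx-service --type=LoadBalancer --port=80 --protocol=TCP

# Watch for the EXTERNAL-IP to change from <pending> to a real address.
kubectl get service
```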

Step 11.

Finally, after exposing our deployment to the internet, we will copy the EXTERNAL-IP of our nginx-service and paste it into a web browser to verify that our deployment is reachable by everyone over the internet.
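Instead of copying the IP by hand, it can be read with kubectl's jsonpath output and checked with curl (a sketch; assumes the load balancer has finished provisioning and has a public IP):

```shell
# Grab just the external IP of the service.
EXTERNAL_IP=$(kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Fetch the page; the default nginx welcome page confirms the app is public.
curl "http://$EXTERNAL_IP"
```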

Closing Note

In this project, we successfully set up and managed an Azure Kubernetes Service (AKS) cluster to deploy, scale, and manage containerized applications. By leveraging AKS, we were able to take advantage of Kubernetes in a fully managed environment, which simplified infrastructure management and allowed us to focus on the deployment and orchestration of applications.

Key highlights of the project:

  • Cluster Setup: Deployed an AKS cluster with configurations to support containerized applications.

  • Deployment Management: Used Kubernetes resources such as Deployments, Services, and Namespaces to organize and manage applications effectively.

  • Scaling: Enabled the cluster autoscaler to dynamically adjust the number of nodes based on varying workloads, ensuring optimal performance.

  • External Access: Exposed applications via LoadBalancer services for external access and high availability.

This project provided valuable hands-on experience in deploying and managing cloud-native applications in a Kubernetes environment. I gained a deeper understanding of containerization, microservices architecture, and cloud infrastructure management.

Going forward, I plan to optimize this setup, explore advanced Kubernetes features, and implement best practices for security and resource management within the AKS environment.

I hope to see you guys on the Next one.

Signing Out .

Tech Bro.
