# 🚀 Set Up a Kubernetes Cluster on Azure VMs Using Kubeadm

Imagine you're a DevOps engineer at a growing startup. Your team has just built a microservices-based application and needs to deploy it in a scalable, fault-tolerant environment. You're using Azure, and you want complete control over Kubernetes: custom networking, versioning, and security policies.

Managed Kubernetes services like AKS are convenient but limit customization, so you choose to set up a Kubernetes cluster manually on Azure VMs using kubeadm. This guide walks you from spinning up VMs on Azure to a fully operational Kubernetes cluster.
## 🔧 Part 1: Create Virtual Machines in Azure

### Step 1: Log In

- Open the Azure portal and sign in with your account.

### Step 2: Create a Resource Group

- Search for “Resource groups” and click “+ Create”.
- Fill in:
  - Subscription: your subscription
  - Name: a name for the group
  - Region: choose a region near you
- Click “Review + create”, then “Create”.
### Step 3: Create a Virtual Network (VNet)

- Search for “Virtual networks” and click “+ Create”.
- Fill in:
  - Name: whatever you would like to name it; in my case that’s VNRG2
  - Address space: `10.0.0.0/16`
  - Subnet name: `default`
  - Subnet range: `10.0.0.0/24`
- Click “Review + create”, then “Create”.
### Step 4: Create the Master Node

- Click “+ Create a resource”.
- Choose Ubuntu Server 22.04 LTS.
- Fill in:
  - Name: `master`
  - Size: `Standard_D2s_v3` (2 vCPUs, 8 GB RAM)
  - Authentication: SSH, generate a new key pair
- Networking:
  - VNet: choose the one you created
  - Public IP: create new
  - Allow SSH (port 22)
- Tag: `Role = master`
- Download and save the `.pem` key.
### Step 5: Create Two Worker VMs

- Repeat Step 4 for `worker1` and `worker2`.
- Use the existing SSH key.
- Tag: `Role = worker`
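If you prefer the command line over the portal, the same resources can be provisioned with the Azure CLI. This is a sketch, not the exact steps above: the resource group name `k8s-rg` and region `eastus` are hypothetical placeholders, so substitute your own values.

```shell
# Sketch: provision the same resources with the Azure CLI.
# "k8s-rg" and "eastus" are hypothetical; substitute your own.
az group create --name k8s-rg --location eastus

az network vnet create --resource-group k8s-rg --name VNRG2 \
  --address-prefix 10.0.0.0/16 \
  --subnet-name default --subnet-prefix 10.0.0.0/24

# Master node (repeat with --name worker1 / worker2 for the workers)
az vm create --resource-group k8s-rg --name master \
  --image Ubuntu2204 --size Standard_D2s_v3 \
  --vnet-name VNRG2 --subnet default \
  --admin-username azureuser --generate-ssh-keys
```

`--generate-ssh-keys` writes the key pair to `~/.ssh` instead of prompting you to download a `.pem` file as the portal does.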
### Step 6: Open Required Ports

In the master VM’s Network Security Group (NSG), add:

- Rule 1 – Kubernetes API: port 6443 | protocol TCP | action Allow | priority 1001
- Rule 2 – Internal communication: source `10.0.0.0/16` | all ports | action Allow | priority 1002
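The same two rules can be added from the Azure CLI. Again a hedged sketch: `k8s-rg` and `masterNSG` are hypothetical names, so use your actual resource group and the NSG Azure created for the master VM.

```shell
# Sketch: the same NSG rules via the Azure CLI (hypothetical names).
az network nsg rule create --resource-group k8s-rg --nsg-name masterNSG \
  --name AllowK8sAPI --priority 1001 --access Allow --protocol Tcp \
  --destination-port-ranges 6443

az network nsg rule create --resource-group k8s-rg --nsg-name masterNSG \
  --name AllowVNetInternal --priority 1002 --access Allow --protocol '*' \
  --source-address-prefixes 10.0.0.0/16 --destination-port-ranges '*'
```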
### Step 7: Write Down IPs

Record each VM’s:

- Public IP (for SSH)
- Private IP (for cluster-internal communication)
## 🖥 Part 2: Install Kubernetes and Tools on All Nodes

### Step 8: SSH into Each VM

**Windows (PuTTY):**

- Convert the `.pem` key to `.ppk` using PuTTYgen.
- SSH into the VM using its public IP.

**Mac/Linux:**

```shell
chmod 400 your-key.pem
ssh -i your-key.pem azureuser@[VM_PUBLIC_IP]
```
### Step 9: Run These Commands on All Nodes

```shell
# Update system
sudo apt update && sudo apt upgrade -y

# Disable swap (the kubelet requires swap to be off)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Install containerd
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y containerd.io

# Configure containerd to use the systemd cgroup driver
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd && sudo systemctl enable containerd

# Load kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Set kernel parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sudo sysctl --system

# Install kubeadm, kubelet, kubectl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
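The swap-disabling `sed` above works by prefixing `#` to any `/etc/fstab` line containing ` swap `, so the entry survives but is ignored at boot. Here is an illustration on a throwaway copy with hypothetical fstab contents, so you can see the effect without touching the real file:

```shell
# Illustrative only: show what the fstab sed does, on a temp file.
tmp=$(mktemp)
printf '%s\n' \
  'UUID=abcd / ext4 defaults 0 1' \
  '/swapfile none swap sw 0 0' > "$tmp"
sed -i '/ swap / s/^/#/' "$tmp"
cat "$tmp"   # the swap entry is now commented out; the root entry is untouched
```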
### Step 10: Update /etc/hosts on All Nodes

```shell
sudo nano /etc/hosts
```

Add these lines (using your nodes’ actual private IPs from Step 7):

```
10.0.0.4 master
10.0.0.5 worker1
10.0.0.6 worker2
```
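If you would rather not open an editor on every node, the same entries can be staged in a file and appended non-interactively. The IPs here are the sample values from above, so substitute your own:

```shell
# Stage the cluster host entries (sample IPs; use your own from Step 7).
hosts_file=$(mktemp)
printf '%s\n' \
  '10.0.0.4 master' \
  '10.0.0.5 worker1' \
  '10.0.0.6 worker2' > "$hosts_file"
cat "$hosts_file"   # review before applying
# Then, on each node:
#   cat "$hosts_file" | sudo tee -a /etc/hosts
```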
## 🧠 Part 3: Initialize the Cluster

### Step 11: Initialize the Master Node (run only on the master)

```shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=$(hostname -I | awk '{print $1}')

# Set up kubectl for your user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Apply Calico for pod networking
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Generate the join command for workers
kubeadm token create --print-join-command
```

Copy the full `kubeadm join ...` command shown.
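If you lose the printed join command, run `kubeadm token create --print-join-command` on the master again. The `--discovery-token-ca-cert-hash` part is nothing magic: it is the SHA-256 digest of the cluster CA's public key. A sketch of how that hash is derived, using a throwaway self-signed certificate here; on the master you would point openssl at `/etc/kubernetes/pki/ca.crt` instead:

```shell
# How the --discovery-token-ca-cert-hash value is derived.
# A throwaway cert stands in for the cluster CA in this illustration;
# on the master, use /etc/kubernetes/pki/ca.crt instead of /tmp/demo-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj '/CN=demo' -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:$hash"
```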
## 🧩 Part 4: Join Worker Nodes to the Cluster

### Step 12: Run the Join Command on Each Worker

```shell
sudo kubeadm join [MASTER_PRIVATE_IP]:6443 --token [TOKEN] --discovery-token-ca-cert-hash sha256:[HASH]
```

Example:

```shell
sudo kubeadm join 10.0.0.4:6443 --token abc.def --discovery-token-ca-cert-hash sha256:...
```
## ✅ Part 5: Test the Cluster

### Step 13: Verify on the Master Node

```shell
kubectl get nodes
```

You should see:

```
NAME      STATUS   ROLES           AGE   VERSION
master    Ready    control-plane   x     v1.29.x
worker1   Ready    <none>          x     v1.29.x
worker2   Ready    <none>          x     v1.29.x
```
### Step 14: Deploy a Test App

```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx

# Get the NodePort
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
```

Visit in a browser:

```
http://[WORKER_PUBLIC_IP]:[NODE_PORT]
```

Note: to reach the NodePort from outside Azure, the workers’ NSG must also allow inbound traffic on the assigned port (NodePorts fall in the 30000–32767 range by default).
## 🎉 You Did It!

Your Kubernetes cluster is now running on Azure VMs with kubeadm, ready for scalable container deployments. (For production-grade high availability, you would add further control-plane nodes, but the foundation is in place.)