Setting Up a Kubernetes Cluster with Kubeadm on Azure Virtual Machines


Using kubeadm to set up a Kubernetes cluster on Azure is an effective way to run containerized applications on infrastructure you control. This guide covers creating the virtual machines, configuring the network, and installing Kubernetes. By the end, you will have a fully operational Kubernetes cluster running on Azure Virtual Machines.
Part 1: Creating VMs in Azure Portal
Step 1: Log in to Azure Portal
Go to https://portal.azure.com
Sign in with your Azure account credentials
Step 2: Create a Resource Group
Click on "Resource groups"
Click "+ Create" button
Enter the following details:
Subscription: Choose your subscription
Resource group name: Give your Resource Group a name.
Region: Choose a region close to you (e.g., East US)
Click "Review + create" and then "Create"
Step 3: Create a Virtual Network (VNet)
In the search bar, type "Virtual networks" and click it.
Click "Create" > "Virtual Network"
Fill in the basics:
Subscription: Choose your subscription
Resource Group: Select the one you created in Step 2
Name: Give your VNet a name
Region: Choose where to deploy (e.g., East US)
IP Addresses tab:
Define an IPv4 address space (e.g., 10.0.0.0/16)
Add a subnet (e.g., name it default, use 10.0.0.0/24)
Leave other settings as default unless you have specific needs (DNS, security, tags).
Click Review + create, then Create
Done! Your VNet will be provisioned in a minute or two.
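If you prefer the command line, the same resource group and VNet can be created with the Azure CLI (a sketch, assuming the az CLI is installed and you are logged in; the names k8s-rg and k8s-vnet are placeholders - use your own):

```shell
# Create the resource group (name and region are placeholders)
az group create --name k8s-rg --location eastus

# Create the VNet with a 10.0.0.0/16 address space and a "default" subnet
az network vnet create \
  --resource-group k8s-rg \
  --name k8s-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name default \
  --subnet-prefixes 10.0.0.0/24
```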
Step 4: Create the Master Node VM
From the Azure homepage, click "+ Create a resource"
Search for "Ubuntu Server" and select "Ubuntu Server 22.04 LTS"
Click "Create"
Fill in the basic details:
Subscription: Your subscription
Resource group: Select the one you created.
Virtual machine name: Give the VM a name.
Region: Same as your resource group
Availability options: "No infrastructure redundancy required"
Image: Ubuntu Server 22.04 LTS
Size: Click "See all sizes" and select "Standard_D2s_v3" (2 vcpus, 8 GiB memory)
For Authentication:
Authentication type: SSH public key
Username: azureuser
SSH public key source: "Generate new key pair"
Key pair name: Give a name
Click "Next: Disks"
Accept the defaults and click "Next: Networking"
For Networking:
Virtual network: Choose existing VNet
Public IP: Create new
NIC network security group: "Basic"
Public inbound ports: "Allow selected ports"
Select inbound ports: SSH (22)
Click "Next: Management" and accept defaults
Click "Next: Advanced" and accept defaults
Click "Next: Tags"
Add a tag: Key=Role, Value=master
Click "Review + create" and then "Create"
When prompted, click "Download private key and create resource"
Save the .pem file to a secure location
Step 5: Create Worker Node VMs (repeat twice)
Follow the same steps as for master node, but with these differences:
Virtual machine name: worker1 (and worker2 for the second worker)
For SSH key, select "Use existing key" and use the key you created for the master
Add tag: Key=Role, Value=worker
Make sure to use the same virtual network you created earlier
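Steps 4 and 5 can also be done from the Azure CLI (a sketch under the same assumptions as before; the resource group and VNet names are placeholders, and --generate-ssh-keys reuses ~/.ssh/id_rsa if it already exists):

```shell
# Create the master and both workers with identical settings
for node in master worker1 worker2; do
  az vm create \
    --resource-group k8s-rg \
    --name "$node" \
    --image Ubuntu2204 \
    --size Standard_D2s_v3 \
    --vnet-name k8s-vnet \
    --subnet default \
    --admin-username azureuser \
    --generate-ssh-keys
done
```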
Step 6: Allow Kubernetes Ports in Network Security Group
After creating all VMs, go to "Resource groups" and select the resource group
Find the Network Security Group (NSG) resources (usually named after your VMs)
Click on the master node's NSG
Select "Inbound security rules" from the left menu
Click "+ Add" to add a new rule:
Source: Any
Source port ranges: *
Destination: Any
Service: Custom
Destination port ranges: 6443
Protocol: TCP
Action: Allow
Priority: 1001
Name: Kubernetes-API
Click "Add"
Add another rule for internal cluster communication:
Source: IP Addresses
Source IP addresses: 10.0.0.0/16
Source port ranges: *
Destination: Any
Destination port ranges: *
Protocol: Any
Action: Allow
Priority: 1002
Name: Internal-Cluster-Communication
Click "Add"
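The same two rules can be added with the Azure CLI (a sketch; masterNSG is a placeholder - the Portal usually names the NSG after the VM, so check your resource group for the actual name):

```shell
# Allow the Kubernetes API server port from anywhere
az network nsg rule create \
  --resource-group k8s-rg \
  --nsg-name masterNSG \
  --name Kubernetes-API \
  --priority 1001 \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 6443

# Allow all traffic between nodes inside the VNet
az network nsg rule create \
  --resource-group k8s-rg \
  --nsg-name masterNSG \
  --name Internal-Cluster-Communication \
  --priority 1002 \
  --access Allow \
  --protocol "*" \
  --source-address-prefixes 10.0.0.0/16 \
  --destination-port-ranges "*"
```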
Step 7: Note Down VM IP Addresses
In your resource group, click on each VM
Note the public IP address for each VM (needed for SSH access)
Also note the private IP address for each VM (needed for cluster configuration)
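Instead of clicking through each VM, the Azure CLI can list every VM's public and private IP in one table (k8s-rg is a placeholder resource group name):

```shell
# Show public and private IPs for all VMs in the resource group
az vm list-ip-addresses --resource-group k8s-rg --output table
```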
Part 2: Setting Up Kubernetes on All Nodes
Step 8: SSH to Your VMs
For Windows Using PuTTY:
Convert the .pem file to .ppk:
Open PuTTYgen
Click "Load" and select your downloaded .pem file
Click "Save private key"
Save the .ppk file
Open PuTTY:
Enter the master node's public IP in the "Host Name" field
Go to Connection > SSH > Auth > Credentials
Browse and select your .ppk file
Click "Open"
When prompted, enter username: azureuser
For Mac/Linux:
Open Terminal
Change permissions for the .pem file:
chmod 400 path/to/your-key.pem
Connect to the master node:
ssh -i path/to/your-key.pem azureuser@[MASTER_PUBLIC_IP]
Step 9: Run These Commands on ALL Nodes (Master and Workers)
SSH into each node and run these commands:
# Update the system
sudo apt-get update
sudo apt-get upgrade -y
# Disable swap (required for kubeadm)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Install container runtime (containerd)
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y containerd.io
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
# Load required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set kernel parameters for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
# Install kubeadm, kubelet, kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
# Add Kubernetes GPG key (create the keyring directory first, in case it doesn't exist)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add Kubernetes repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Step 10: Configure /etc/hosts on All Nodes
On each node, edit the hosts file to add information about all cluster nodes:
# Get the private IP of the current machine
PRIVATE_IP=$(hostname -I | awk '{print $1}')
echo "My Private IP: $PRIVATE_IP"
# Edit the hosts file
sudo nano /etc/hosts
Add these lines (replace with your actual private IPs):
10.0.0.4 master
10.0.0.5 worker1
10.0.0.6 worker2
Press Ctrl+X, then Y, then Enter to save.
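If you'd rather not edit the file by hand on every node, you can build the entries in a variable and append them in one shot (a sketch; the IPs below are the example values above - replace them with your nodes' actual private IPs):

```shell
# Example private IPs from this guide - substitute your nodes' actual private IPs
MASTER_IP=10.0.0.4
WORKER1_IP=10.0.0.5
WORKER2_IP=10.0.0.6

# Build the three host entries in one variable
HOSTS_BLOCK="$MASTER_IP master
$WORKER1_IP worker1
$WORKER2_IP worker2"

# Preview the entries
echo "$HOSTS_BLOCK"

# To apply them on a node: echo "$HOSTS_BLOCK" | sudo tee -a /etc/hosts
```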
Part 3: Initialize the Master Node
Step 11: Run These Commands on the Master Node ONLY
# Initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=$(hostname -I | awk '{print $1}')
# Setup kubeconfig for the user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install Calico network plugin for pod networking
# (the old docs.projectcalico.org manifest URL is deprecated; use the GitHub release manifest,
#  and check the Calico releases page for the latest version)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# Generate join command for worker nodes
kubeadm token create --print-join-command
IMPORTANT: Copy the entire kubeadm join command output. It will look something like:
kubeadm join 10.0.0.4:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
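If you lose the join command later, kubeadm token create --print-join-command regenerates it in one step. The discovery hash on its own can also be recomputed from the cluster CA, using the command from the kubeadm documentation (it must run on the master, where /etc/kubernetes/pki/ca.crt exists):

```shell
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```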
Part 4: Join Worker Nodes to the Cluster
Step 12: On Each Worker Node
Paste and run the kubeadm join command you copied from the master:
sudo kubeadm join 10.0.0.4:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
Part 5: Verify Your Cluster
Step 13: On the Master Node
# Check nodes status
kubectl get nodes
# Wait until all nodes show STATUS as "Ready"
# This might take a few minutes
You should see output similar to:
NAME      STATUS   ROLES           AGE   VERSION
master    Ready    control-plane   10m   v1.29.x
worker1   Ready    <none>          5m    v1.29.x
worker2   Ready    <none>          5m    v1.29.x
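The <none> under ROLES is normal: kubeadm only labels the control plane. If you want the workers labeled too, you can add the role label yourself (optional; the node names must match your actual hostnames):

```shell
# Optional: label the workers so "kubectl get nodes" shows a role for them
kubectl label node worker1 node-role.kubernetes.io/worker=
kubectl label node worker2 node-role.kubernetes.io/worker=
```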
Step 14: Test Your Cluster
# Deploy a sample NGINX application
kubectl create deployment nginx --image=nginx
# Expose the deployment
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the service
kubectl get svc nginx
# Get the NodePort
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo "NGINX is exposed on port $NODE_PORT on any worker node"
You can now access your NGINX service by visiting http://[WORKER_NODE_PUBLIC_IP]:[NODE_PORT]
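Note that the worker NSGs created above only allow SSH, so to reach the NodePort from the internet you first need to open the NodePort range (a sketch; the resource group and NSG names are placeholders - check your resource group for the actual NSG names):

```shell
# Open the default Kubernetes NodePort range (30000-32767) on a worker's NSG
az network nsg rule create \
  --resource-group k8s-rg \
  --nsg-name worker1NSG \
  --name NodePort-Range \
  --priority 1003 \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 30000-32767
```

Then test from your local machine with curl http://[WORKER_NODE_PUBLIC_IP]:[NODE_PORT].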
Troubleshooting Tips
If kubeadm init fails: check logs with journalctl -u kubelet
If nodes don't join: make sure the firewall/NSG allows port 6443
If the token expires: generate a new one on the master with kubeadm token create --print-join-command
Network issues: check whether the Calico pods are running with kubectl get pods -n kube-system
If nodes show "NotReady": check kubelet status with systemctl status kubelet
Congratulations! You now have a functioning Kubernetes cluster with one master node and two worker nodes on Azure!
Thank you for stopping by. I hope I have been able to help you through the process.