Part 3: Set Up a Kubernetes Cluster with Kubeadm

1. Pre-Cluster Setup Steps
1.1. Provisioning Virtual Machines
1.2. Fixing Terminal Compatibility (Kitty)
1.3. Creating a Non-Root User for SSH and Cluster Management
1.4. Setting Up SSH Access for the New User
1.5. Configure Unique Hostnames for Each VM
2. Cluster Initialization with Kubeadm
2.1. Install and Configure the Container Runtime (containerd)
2.2. Install Kubernetes Components (kubelet, kubeadm, kubectl)
2.3. System Configuration for Kubernetes Compatibility
2.4. Initialize the Master Node
2.5. Install a Pod Network (Calico)
2.6. Join Worker Nodes to the Cluster
3. Summary & Next Step

1. Pre-Cluster Setup Steps

1.1. Provisioning Virtual Machines

To begin setting up the Kubernetes cluster using kubeadm, I created 3 virtual machines (VMs) in Microsoft Azure: one for the master node and two for worker (slave) nodes. I chose the West US 2 region and used Ubuntu 24.04 as the operating system for all machines.

Each VM is of size Standard D2as v5 (2 vCPUs, 8 GiB RAM), which meets the minimum system requirements for kubeadm, particularly the requirement for at least 2 vCPUs.

All three VMs were created under the same resource group, named ResourceGR-1, and they are connected to the same Azure Virtual Network (VNet) using the default configuration. This ensures they can communicate internally, which is essential for the Kubernetes cluster setup.
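
For reference, the same setup can also be scripted. Below is a minimal sketch using the Azure CLI, assuming it is installed and you are signed in; the VNet name and the Ubuntu 24.04 image URN are assumptions and may differ in your subscription:

az group create --name ResourceGR-1 --location westus2

# Create the three VMs on the same virtual network (names and image URN are illustrative)
for NAME in master node1 node2; do
  az vm create \
    --resource-group ResourceGR-1 \
    --name "$NAME" \
    --image Canonical:ubuntu-24_04-lts:server:latest \
    --size Standard_D2as_v5 \
    --admin-username azureuser \
    --vnet-name k8s-vnet \
    --subnet default \
    --generate-ssh-keys
done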

1.2. Fixing Terminal Compatibility (Kitty)

Once the virtual machines were ready, I encountered an issue when connecting via my terminal emulator, Kitty. The error message was:

'xterm-kitty': unknown terminal type.

This happens because the terminal sets the $TERM variable to xterm-kitty, which is not recognized by Ubuntu by default. To fix this, I installed the appropriate terminal definitions on each VM:

sudo apt update
sudo apt install kitty-terminfo

This ensures the system can interpret the terminal type correctly.
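
As an alternative to installing the package on every VM, Kitty's ssh kitten copies the terminfo files to the remote host automatically, and a generic terminal type works as a quick one-off workaround (the user and IP below are placeholders):

# Option 1: let Kitty transfer its terminfo when connecting
kitty +kitten ssh <user>@<vm-public-ip>

# Option 2: fall back to a generic terminal type for the current session
export TERM=xterm-256color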

1.3. Creating a Non-Root User for SSH and Cluster Management

By default, the VMs only allowed access through the root user, which is not recommended for security reasons—especially when managing a Kubernetes cluster. To follow best practices, I created a new user named master:

adduser master

Then, I granted the new user sudo privileges:

usermod -aG sudo master
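
A quick check (still as root) confirms the account exists and that sudo actually works for it:

id master                      # should list the sudo group
su - master -c 'sudo whoami'   # should print: root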

1.4. Setting Up SSH Access for the New User

Since SSH keys were initially set up only for the root user, I needed to copy the .ssh directory (which contains the authorized_keys file) from /root to the new user's home directory:

cp -r /root/.ssh /home/master/

Then I updated the ownership to match the new user:

chown -R master:master /home/master/.ssh

Finally, I applied the correct permissions:

chmod 700 /home/master/.ssh
chmod 600 /home/master/.ssh/authorized_keys

Now I can successfully connect to the server via SSH using the master user instead of root, which improves security and prepares the environment for initializing the Kubernetes cluster.
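
As an optional hardening step, sketched here under the assumption that the master login has already been verified, root logins over SSH can be disabled entirely (on some images the directive may instead live in a drop-in file under /etc/ssh/sshd_config.d/):

sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh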

1.5. Configure Unique Hostnames for Each VM

To avoid conflicts when initializing the Kubernetes cluster and joining nodes, it is necessary to assign a unique hostname to each VM. By default, all machines are named localhost, which causes issues with kubeadm as nodes cannot share the same hostname.

Steps performed:

  1. The hostname was changed on each machine using:
sudo hostnamectl set-hostname master   # node1 and node2 on the other machines
  2. The /etc/hosts file was updated to reflect the new hostname. The line:
127.0.1.1    localhost

was replaced with:

127.0.1.1    master   # or node1, node2 depending on the VM

The line 127.0.0.1 localhost was kept unchanged, as it is required by the system for proper local networking.

  3. The system was rebooted to apply the changes:
sudo reboot

After these steps, all VMs had distinct hostnames, which is required for successful cluster setup and node registration.
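
A quick way to confirm the change on each VM after the reboot:

hostnamectl status          # "Static hostname" should show master, node1, or node2
grep 127.0.1.1 /etc/hosts   # should show the new hostname, not localhost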

2. Cluster Initialization with Kubeadm

Now I will create the Kubernetes cluster. The commands in sections 2.1 through 2.3 are run on every node (master and workers), the control plane initialization and CNI installation run only on the master, and the join command runs only on the workers. These steps ensure that all nodes are properly configured before forming the cluster.

#MASTER & WORKER NODE

2.1. Install and Configure the Container Runtime (containerd)

This section installs and configures containerd, the container runtime used by Kubernetes to manage containers. It enables SystemdCgroup, which is required for compatibility with kubeadm.

set -euo pipefail

sudo apt-get update
sudo apt-get install -y containerd apt-transport-https ca-certificates curl gpg

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
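
To confirm the runtime is healthy and the cgroup driver change was actually applied:

sudo systemctl status containerd --no-pager
grep SystemdCgroup /etc/containerd/config.toml   # should print: SystemdCgroup = true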

2.2. Install Kubernetes Components (kubelet, kubeadm, kubectl)

This part adds the official Kubernetes repository, installs the core components (kubelet, kubeadm, kubectl), and holds them to prevent automatic upgrades that could break the cluster.

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list >/dev/null

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
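
A quick sanity check that the expected versions were installed and that the hold is in place:

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold   # should list kubelet, kubeadm, kubectl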

2.3. System Configuration for Kubernetes Compatibility

This section configures kernel parameters for networking, disables swap (required by kubeadm), and ensures everything is applied system-wide.

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF
sudo sysctl --system >/dev/null

sudo swapoff -a
sudo sed -i.bak -r '/\s+swap\s+/s/^/#/' /etc/fstab
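
One caveat: on some images the net.bridge.* keys above are only available once the br_netfilter kernel module is loaded, so sysctl --system may warn about unknown keys. The standard kubeadm prerequisite is to load br_netfilter (and overlay, used by containerd) explicitly and persist them across reboots; a sketch:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system >/dev/null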

#MASTER NODE

2.4. Initialize the Master Node

This command initializes the Kubernetes control plane (master node) using kubeadm. The --pod-network-cidr flag sets the IP range for the pod network; it must not overlap with the node network, and some CNI plugins expect a specific value (Flannel, for example, defaults to 10.244.0.0/16).

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
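
When initialization succeeds, kubeadm prints two things worth saving: the kubeadm join command used in section 2.6, and the commands to make kubectl usable for the current (non-root) user, which boil down to copying the admin kubeconfig into the home directory:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config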

2.5. Install a Pod Network (Calico)

After initializing the cluster, I needed to install a Pod network so that the pods could communicate with each other. Without a network plugin, even core services like DNS (CoreDNS) wouldn’t start properly.

I chose to use Calico as the CNI plugin and applied it with the following command:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
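
It can take a minute or two for the Calico pods and CoreDNS to become ready; watching the kube-system namespace shows when the network is up:

kubectl get pods -n kube-system -w   # wait until all pods reach Running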

2.6. Join Worker Nodes to the Cluster

After setting up the master node, I joined the worker nodes (node1 and node2) to the cluster using the kubeadm join command that was generated during initialization.

I ran the following command on each worker node:

sudo kubeadm join 172.234.160.123:6443 --token ztiu5b.ot2h3zw3ecyk6gp6 \
    --discovery-token-ca-cert-hash sha256:26dda2daa2d8e5779608246ed9501dac21f02822068c77e6e78ee8525cabc060

After running the command, both nodes were successfully added to the cluster.
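
If the token from kubeadm init has expired (bootstrap tokens last 24 hours by default) or the join command was misplaced, a fresh one can be generated on the master node:

kubeadm token create --print-join-command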

[Screenshots: kubeadm join output on node1 and node2]

To verify that all nodes were connected and in a Ready state, I ran the following on the master node:

kubectl get nodes

This showed a list of all cluster nodes (master, node1, and node2) along with their status.

[Screenshot: kubectl get nodes output on the master node]

3. Summary & Next Step

In this part, we prepared the servers and set up a self-managed Kubernetes cluster with kubeadm, so that we won't run into environment issues later on. The next part will be the final stage of our project and blog series, where I'll set up a full CI/CD pipeline for all three microservices: Java, Python, and Go. See you soon in the next part!
