Module 2: Installing Kubernetes on Ubuntu with `kubeadm`

DevOpsLaunchpad
4 min read

Quick notes before starting:

• Use at least 2 machines (1 control-plane + 1 worker).

• Minimum RAM: 2GB (4GB+ recommended).

• Run commands as a user with sudo.

• Pick a CNI early; this guide uses Calico.

Control plane ports (inbound):
6443         Kubernetes API server
2379-2380    etcd server client API
10250        Kubelet API
10259        kube-scheduler
10257        kube-controller-manager

Worker node ports (inbound):
10250        Kubelet API
10256        kube-proxy
30000-32767  NodePort Services
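If a host firewall is active, these ports must be opened between the nodes. A sketch assuming ufw (adjust for iptables/nftables or your cloud provider's security groups):

```shell
# Control plane: allow the inbound Kubernetes ports listed above
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # Kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler

# Workers: kubelet, kube-proxy, and the NodePort range
# sudo ufw allow 10250/tcp
# sudo ufw allow 10256/tcp
# sudo ufw allow 30000:32767/tcp
```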

Step 1: Prerequisites (run on all nodes)

Update the system and install helper packages:

sudo apt-get update 
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release

By default, the kubelet fails to start if swap is detected on a node. You must disable swap unless the kubelet is explicitly configured to tolerate it.

sudo swapoff -a

To make the change persist across reboots, comment out the swap entry in /etc/fstab:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
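Before editing the real file, the sed expression can be previewed on a sample fstab (the two entries below are hypothetical) to confirm it comments out only swap lines:

```shell
# Dry-run the swap-disabling edit on a sample fstab, not the real one
printf '/dev/sda1 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > /tmp/fstab.sample
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample
cat /tmp/fstab.sample   # the swap entry is commented out; the root entry is untouched
```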

Disable SELinux (if present):

Ubuntu ships with AppArmor rather than SELinux, so this step usually does nothing; it only matters if SELinux has been installed and set to enforcing.

sudo apt install -y selinux-utils
sudo setenforce 0
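kubeadm's preflight checks also expect the br_netfilter module to be loaded and IPv4 forwarding to be enabled. The standard extra prerequisite from the kubeadm documentation (run on all nodes) is:

```shell
# Load the kernel modules needed by container networking, now and on boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IPv4 forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```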

Step 2: Install Docker & containerd runtime (on all nodes)

You need to install a container runtime into each node in the cluster so that Pods can run there.

Install Docker Engine:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

To install the latest version, run:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Check the Docker version, enable the service, and add your user to the docker group:

docker --version
sudo usermod -aG docker ubuntu   # log out and back in for the group change to apply
sudo systemctl enable docker
sudo systemctl status docker

Kubernetes 1.30 requires a runtime that conforms to the Container Runtime Interface (CRI). Docker Engine does not implement the CRI natively, so install the cri-dockerd shim:

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.17/cri-dockerd_0.3.17.3-0.ubuntu-bionic_amd64.deb
sudo apt install -y ./cri-dockerd_0.3.17.3-0.ubuntu-bionic_amd64.deb
sudo systemctl enable --now cri-docker
sudo systemctl status cri-docker
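Before moving on, it is worth confirming the shim is up and its CRI socket exists, since kubeadm will be pointed at it later:

```shell
# cri-dockerd exposes the Docker Engine through a CRI socket; verify it is live
systemctl is-active cri-docker.socket cri-docker.service
ls -l /var/run/cri-dockerd.sock   # this is the socket passed to kubeadm below
```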

Step 3: Install kubeadm, kubelet, kubectl (on all nodes)

kubeadm: the command to bootstrap the cluster.

kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.

kubectl: the command line util to talk to your cluster.

These instructions are for Kubernetes v1.30.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
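A quick check that the tools installed correctly and the packages are pinned against unintended upgrades:

```shell
kubeadm version -o short
kubectl version --client
apt-mark showhold   # should list kubeadm, kubectl, kubelet
```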

Step 4: Initialize control-plane (only on master/control-plane)

Decide on a CNI first. If you will use Calico, initialize with its default pod CIDR, --pod-network-cidr=192.168.0.0/16; this must match the CIDR in Calico's custom resources.


sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock

When kubeadm init finishes, it prints the next steps and a kubeadm join ... command; copy that join command for the worker nodes.

Set up kubectl for your regular user (run as the user who will manage cluster):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
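At this point the API server answers, but the node reports NotReady and CoreDNS stays Pending until a CNI is installed in Step 5; that is expected:

```shell
kubectl get nodes                 # control-plane shows NotReady for now
kubectl get pods -n kube-system   # coredns pods remain Pending without a network
```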

Step 5: Install a CNI (networking plugin)

You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed:

Calico is a networking and network policy provider:

#Install the Tigera operator and custom resource definitions.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/tigera-operator.yaml

#Install Calico by creating the necessary custom resource
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/custom-resources.yaml

Confirm that all of the pods are running with the following command.

watch kubectl get pods -n calico-system
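Instead of watching interactively, the wait can be scripted; a one-liner that blocks until every Calico pod is Ready (or times out):

```shell
# Returns non-zero if any pod fails to become Ready within 5 minutes
kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s
```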

Step 6: Join worker nodes (on each worker)

On each worker, run the kubeadm join ... command that kubeadm init printed. It looks like:

sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
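Because nodes set up as above have both a containerd socket (pulled in by containerd.io) and the cri-dockerd socket, kubeadm join should be given the same --cri-socket flag used at init, or it will complain about multiple runtimes. The placeholders below are the same ones shown above:

```shell
sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```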

If the token has expired, create a new one on the control plane:

kubeadm token create --print-join-command

Run the printed join command on each worker.

Step 7: Verify cluster

On control-plane:

kubectl get nodes
kubectl get pods -A
kubectl cluster-info

Expect Ready status for all nodes. If a node is not Ready, check kubectl describe node <node> and kubectl get pods -n kube-system.
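As a final smoke test, a throwaway nginx Deployment exposed on a NodePort (from the 30000-32767 range listed earlier) confirms scheduling and networking end to end:

```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx     # note the assigned 3xxxx port, then:
# curl http://<any-node-ip>:<nodeport>
kubectl delete svc,deployment nginx   # clean up afterwards
```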
