How I Set Up a Kubernetes Cluster from the Ground Up - A Step-by-Step Guide

Oshaba Samson

Kubernetes has become the backbone of modern container orchestration, powering everything from small-scale apps to enterprise-grade workloads. While managed services like Amazon EKS, Google GKE, and Azure AKS make it easy to get started, they often come with trade-offs: limited customization, higher costs, and a layer of abstraction that hides how Kubernetes truly works.

Setting up a self-managed Kubernetes cluster puts you in full control. You decide the Kubernetes version, tune every configuration, and choose the networking, storage, and security setup that fits your exact needs. You can run it anywhere: on bare metal, in any cloud, or in a hybrid setup, without worrying about vendor lock-in. Beyond flexibility, managing Kubernetes yourself gives you hands-on insight into its control plane, networking, and scaling, building expertise that managed services simply can’t offer.

In this guide, we’ll walk through setting up your own self-managed Kubernetes cluster from scratch, so you gain not just a working environment, but the deep operational understanding to run it like a pro.

Objective(s)

  • Set up a self-managed Kubernetes cluster

Prerequisite(s)

  1. Infrastructure Requirements
  • Servers/VMs – At least 2–3 nodes (1 control plane, 1–2 worker nodes).

  • OS – A Linux distribution such as Ubuntu 20.04+, CentOS 7+, or Debian 10+.

  • CPU & RAM (minimum):

    • Control plane: 2 CPUs, 4 GB RAM

    • Worker nodes: 1 CPU, 2 GB RAM

  • Disk Space – Minimum 20 GB per node.

  • Network Connectivity – All nodes should be able to communicate over the network (preferably a private subnet).
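You can quickly check a node against these minimums before going further. This is a small sketch that reads CPU, memory, and disk figures using standard Linux tools (nothing below is specific to Kubernetes):

```shell
# Preflight sketch: compare this node against the minimums listed above.
# Reads /proc and POSIX df directly, so it works on any standard Linux box.
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {printf "%d", $2/1024}' /proc/meminfo)
disk_gb=$(df -P / | awk 'NR==2 {printf "%d", $2/1024/1024}')
echo "CPUs: $cpus, RAM: ${mem_mb} MB, root disk: ${disk_gb} GB"
```

Run it on every node; a control plane node should report at least 2 CPUs and 4096 MB of RAM.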

  2. Software & Tools
  • Container Runtime – Docker, containerd, or CRI-O.

  • kubeadm – To bootstrap the Kubernetes cluster.

  • kubectl – To manage the cluster.

  • kubelet – To run Kubernetes services on nodes.

  • CNI Plugin – For networking (e.g., Calico, Flannel, Cilium).

  3. Required Ports for a Self-Managed Kubernetes Cluster

    a. Control Plane (Master Node)

  • 6443 – Kubernetes API server

  • 2379–2380 – etcd server client API

  • 10250 – kubelet API

  • 10257 – kube-controller-manager

  • 10259 – kube-scheduler

    b. Worker Nodes

  • 10250 – kubelet API

  • 30000–32767 – NodePort Services
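Before joining workers later on, you can sanity-check that the control-plane ports (6443, 2379-2380, 10250, 10257, 10259) are reachable across the security group. A sketch, assuming netcat (`nc`) is installed and `MASTER_IP` is replaced with your master's private IP:

```shell
# Check reachability of the control-plane ports from a worker node.
# MASTER_IP is a placeholder -- substitute your master's private IP.
MASTER_IP="${MASTER_IP:-172.31.42.76}"
out=$(for port in 6443 2379 2380 10250 10257 10259; do
  if nc -z -w 2 "$MASTER_IP" "$port" 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port NOT reachable"
  fi
done)
echo "$out"
```

Ports 2379-2380 and 10257/10259 only need to be open between control-plane nodes; 6443 and 10250 must be reachable from the workers.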

Provision the Servers on AWS

  • Log in to the AWS Console

  • Navigate to EC2 and click on Instances

  • Click on Launch instances

  • Type the name and choose the number of instances

  • Choose the Operating System and Version

  • Select instance type

  • Select an existing key pair or create a new one

  • Click on Edit to open up ports, or select an existing security group

  • Change the size of the disk to 30 GB

  • Click on Launch Instance

  • The instances should now appear in the running state

  • Login to the master node
ssh -i filename.pem ubuntu@ipaddress
  • switch to root
sudo -i
  • check the operating system
uname -a
  • check the memory
free -m
  • check the number of cpu
nproc
  • To turn off swap
swapoff -a
  • set the hostname
hostnamectl set-hostname master
exit
sudo -i
  • Update OS
apt update
  • Login to worker node
ssh -i filename.pem ubuntu@ipaddress
  • Change to root
sudo -i
  • To turn off swap
swapoff -a
  • Change host name
hostnamectl set-hostname worker
exit
sudo -i

If you ping the worker node from the master, it will fail at this point. To resolve this:

  • Get the ip address of the worker node by
ip addr

Add the IP addresses of both the master and the worker nodes to /etc/hosts on both machines:

vi /etc/hosts
127.0.0.1 localhost
<ip address of master node> master
<ip address of worker1 node> worker1
<ip address of worker2 node> worker2

Note: Allow ICMP in the security group, otherwise ping will still fail.

  • Ping worker node from master node
ping ip address of worker node
  • You should have output like this
ping 172.31.36.49
PING 172.31.36.49 (172.31.36.49) 56(84) bytes of data.
64 bytes from 172.31.36.49: icmp_seq=1 ttl=64 time=0.757 ms
64 bytes from 172.31.36.49: icmp_seq=2 ttl=64 time=0.671 ms
64 bytes from 172.31.36.49: icmp_seq=3 ttl=64 time=0.264 ms
64 bytes from 172.31.36.49: icmp_seq=4 ttl=64 time=0.250 ms
64 bytes from 172.31.36.49: icmp_seq=5 ttl=64 time=0.253 ms

Do the following on both the master and worker nodes.

Install Container Runtime

Enable IPv4 packet forwarding

  • To route traffic between different network interfaces.
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
  • Load the kernel modules needed for overlayFS and VXLAN pod communication
sudo modprobe overlay
sudo modprobe br_netfilter
  • Setup sysctl parameters
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
  • Reload sysctl parameters
sudo sysctl --system
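To confirm the settings are live after the reload, you can read them straight from /proc/sys, which is exactly what the sysctl command consults:

```shell
# Verify the forwarding and bridge settings applied above are live.
ip_forward=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward = $ip_forward"
# The bridge entries only exist once br_netfilter is loaded:
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter not loaded"
```

Both values should print 1 on a correctly prepared node.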
  • Install cri-o
sudo apt-get update -y
sudo apt-get install -y software-properties-common gpg curl apt-transport-https ca-certificates

curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key |
    sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" |
    sudo tee /etc/apt/sources.list.d/cri-o.list

sudo apt-get update -y
sudo apt-get install -y cri-o

sudo systemctl daemon-reload
sudo systemctl enable crio --now
sudo systemctl start crio.service
  • Install crictl, a CLI utility to interact with the containers created by the container runtime.
VERSION="v1.30.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz

Install kubeadm, kubelet, and kubectl

  • Download the GPG key for the Kubernetes APT repository on all the nodes.
KUBERNETES_VERSION=1.30

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/ /" |\
  sudo tee /etc/apt/sources.list.d/kubernetes.list
  • Update the Repo
sudo apt-get update -y
  • To see available version
apt-cache madison kubeadm | tac
  • Install kubeadm, kubelet, kubectl
sudo apt-get install -y \
  kubelet=1.30.0-1.1 \
  kubectl=1.30.0-1.1 \
  kubeadm=1.30.0-1.1
  • Add hold to the packages to prevent upgrades.
sudo apt-mark hold kubelet kubeadm kubectl
  • Add the node IP to KUBELET_EXTRA_ARGS
sudo apt-get install -y jq

su
local_ip="$(ip --json addr show eth0 | jq -r '.[0].addr_info[] | select(.family == "inet") | .local')"
cat > /etc/default/kubelet << EOF
KUBELET_EXTRA_ARGS=--node-ip=$local_ip
EOF

exit
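To see what the jq filter above actually extracts, you can run it against a captured sample of `ip --json addr show eth0` output (the sample below is illustrative, not from a real host):

```shell
# Illustrative, trimmed sample of `ip --json addr show eth0` output.
sample='[{"ifname":"eth0","addr_info":[
  {"family":"inet","local":"10.0.0.10","prefixlen":24},
  {"family":"inet6","local":"fe80::1","prefixlen":64}]}]'
# The same filter used for KUBELET_EXTRA_ARGS selects the IPv4 address:
echo "$sample" | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'
# prints 10.0.0.10
```

The `select(.family == "inet")` clause is what skips the IPv6 entry, so the kubelet always advertises the node's private IPv4 address.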
  • If you are using a Private IP for the master Node

Set the following environment variables. Replace 10.0.0.10 with the private IP of your master node

IPADDR="10.0.0.10"
NODENAME=$(hostname -s)
POD_CIDR="192.168.0.0/16"
  • To test
lsmod | grep -i br_netfilter
dpkg -l | grep -i kube*
  • Initialize the cluster with kubeadm on the master node only. The flags below use the variables set above:
sudo kubeadm init \
  --apiserver-advertise-address=$IPADDR \
  --apiserver-cert-extra-sans=$IPADDR \
  --pod-network-cidr=$POD_CIDR \
  --node-name "$NODENAME"

To set up the kubeconfig file

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install CNI plugins

 kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
  • To test
kubectl get pods -A
  • Print out the join command on master node
kubeadm token create --print-join-command
  • Copy the join command and run on the worker node.
sudo kubeadm join 172.31.42.76:6443 --token p6bwbp.ksphsqzvpm9ty8w0 \
--discovery-token-ca-cert-hash sha256:07b84f2df9199d956c3288682757004e2870fb12fad9b3854ae8ce66748d72c5
  • Create a new Deployment
kubectl create deployment web-app --image=oshabz/website:latest --dry-run=client -o yaml > web_deployment.yaml
kubectl apply -f web_deployment.yaml
  • Create a new Service
kubectl expose deployment web-app --type=NodePort --port=80 --target-port=80
  • Get all svc
kubectl get svc -o wide
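In the PORT(S) column, an entry like 80:31410/TCP means port 80 inside the cluster is mapped to NodePort 31410 on every node. On a live cluster you could read it directly with `kubectl get svc web-app -o jsonpath='{.spec.ports[0].nodePort}'`; the lines below show the same extraction on an illustrative sample line (the field values are placeholders, only 31410 comes from this walkthrough):

```shell
# Illustrative line from `kubectl get svc -o wide` output.
sample='web-app   NodePort   10.96.120.5   <none>   80:31410/TCP   2m   app=web-app'
# Pull the NodePort: the number between the colon and the slash in column 5.
nodeport=$(echo "$sample" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$nodeport"
# prints 31410
```

NodePorts are allocated randomly from the 30000-32767 range, so yours will likely differ.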

  • Open port 31410 (the NodePort assigned to the service) on the security group

  • Copy the public IP of the worker node and open it in your browser:

http://<public-ip-of-worker-node>:31410
