Setting up k8s using kubeadm


Chapter 1: The Blueprint - Understanding Our Setup
Our Architecture
We will build a cluster using your three VMs with public IPs:
1 Control-Plane Node (Master): <master-IP> (hostname: k8s-master)
2 Worker Nodes:
<worker1-IP> (hostname: k8s-worker1)
<worker2-IP> (hostname: k8s-worker2)
The Tools We'll Use
kubeadm: The official Kubernetes tool for bootstrapping a cluster.
kubelet: The agent that runs on every node to manage containers.
kubectl: The command-line tool for interacting with your cluster from the master node.
containerd: The industry-standard container runtime that executes the containers.
Calico: Our chosen Container Network Interface (CNI) plugin for pod networking.
Networking: security rules we must configure so the nodes can communicate (allowing inbound traffic).
NOTE: Replace the IP tags with the respective IP addresses of your VMs.
Chapter 2: Prerequisites & Networking
This section is the most critical for an error-free setup in a cloud environment.
VM Specifications
Ensure your three VMs meet these minimums:
OS: Ubuntu 24.04 LTS
Size: 2 vCPUs, 4GB RAM or larger is recommended.
Networking
Your VMs cannot communicate with each other until you create inbound security rules in their respective Security Groups.
For the k8s-master VM (<master-IP>):
Go to your k8s-master VM in the respective cloud provider -> Networking -> Add inbound port rule. Create the following rules:
Priority | Name | Port | Protocol | Source | Destination | Action
--- | --- | --- | --- | --- | --- | ---
100 | SSH | 22 | TCP | Your IP | Any | Allow
110 | Kube-API | 6443 | TCP | <worker1-IP>, <worker2-IP>, <master-IP> | Any | Allow
120 | Calico-BGP | 179 | TCP | <worker1-IP>, <worker2-IP> | Any | Allow
130 | Calico-IPIP | All | IPIP | <worker1-IP>, <worker2-IP> | Any | Allow
Why these rules?
SSH (Port 22): Allows you to connect to your VM for management. Restricting the source to "Your IP" is more secure.
Kube-API (Port 6443): This is the most important rule. It allows the worker nodes to contact the Kubernetes API server on the master. The source is locked down to your workers' specific IPs.
Calico Rules (Port 179/IPIP): Calico uses BGP and IPIP encapsulation for pod networking. This allows the nodes to route traffic for pods between each other. We must allow this communication from the worker IPs. Note: In the networking settings, for the IPIP rule, you might need to select "Any" for protocol and manually add a description.
For BOTH Worker VMs (k8s-worker1 & k8s-worker2):
For each worker VM, add the following inbound rules to their NSGs.
Priority | Name | Port | Protocol | Source | Destination | Action
--- | --- | --- | --- | --- | --- | ---
100 | SSH | 22 | TCP | Your IP | Any | Allow
110 | NodePort-Range | 30000-32767 | TCP | Any | Any | Allow
120 | Calico-BGP | 179 | TCP | <master-IP>, Other Worker IP | Any | Allow
130 | Calico-IPIP | All | IPIP | <master-IP>, Other Worker IP | Any | Allow
Why these rules?
NodePort-Range (Ports 30000-32767): This allows you to access applications running in the cluster from the internet when using a NodePort service.
Calico Rules: Similar to the master, each worker must be able to communicate with the master and the other worker node for pod networking. When configuring the rule for k8s-worker1, the source IPs should be <master-IP> and <worker2-IP>. For k8s-worker2, the source IPs should be <master-IP> and <worker1-IP>.
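If your VMs are on Azure (the NSG terminology above), you can script these rules instead of clicking through the portal. A minimal sketch for the master's Kube-API rule, assuming a hypothetical resource group named <resource-group> and an NSG named k8s-master-nsg; adapt the names and repeat for the other rules and nodes:
Bash
# Hypothetical names: replace <resource-group> and the IP tags with your values
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name k8s-master-nsg \
  --name Kube-API \
  --priority 110 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 6443 \
  --source-address-prefixes <worker1-IP> <worker2-IP> <master-IP>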
Chapter 3: Node Preparation (Common Steps)
The following steps must be performed on ALL THREE nodes: k8s-master, k8s-worker1, and k8s-worker2.
Step 1: Set Hostnames & DNS Resolution
Action: On ALL Three Nodes
Set Hostnames:
On <master-IP>: sudo hostnamectl set-hostname k8s-master
On <worker1-IP>: sudo hostnamectl set-hostname k8s-worker1
On <worker2-IP>: sudo hostnamectl set-hostname k8s-worker2
Why? Unique hostnames make identifying nodes in your cluster much easier.
Update the /etc/hosts file:
Bash
sudo tee -a /etc/hosts > /dev/null <<EOF
<master-IP> k8s-master
<worker1-IP> k8s-worker1
<worker2-IP> k8s-worker2
EOF
Why? This provides local DNS, allowing nodes to find each other by hostname (e.g., k8s-master), which is required by kubeadm.
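A quick sanity check that the /etc/hosts entries took effect (optional, but it catches typos early):
Bash
# Each lookup should print the IP you mapped to that hostname
getent hosts k8s-master k8s-worker1 k8s-worker2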
Step 2: Disable Swap
Bash
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
- What & Why: This disables swap memory, both now and on future reboots. The Kubernetes kubelet manages memory directly and requires swap to be off to function correctly.
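You can verify swap is fully off before continuing; kubeadm's preflight checks will fail later if it isn't:
Bash
# Prints nothing when no swap is active
sudo swapon --show
# The Swap row should show 0B
free -h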
Step 3: Enable Kernel Modules & Configure Sysctl
Bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
- What & Why: This loads two kernel modules, overlay (for container storage) and br_netfilter (for network bridging). Both are required for containerd and Kubernetes networking.
Bash
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
- What & Why: This configures kernel parameters so that Linux iptables can correctly see bridged traffic, and enables IP forwarding. This is a mandatory prerequisite for the pod network to function.
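To confirm the modules are loaded and the sysctl values applied:
Bash
# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'
# All three parameters should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward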
Step 4: Install the containerd Runtime
Bash
sudo apt-get update
sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
- What & Why: This installs containerd, creates a default configuration, and then modifies it to use the SystemdCgroup driver. This is critical because kubelet also uses the systemd cgroup driver, and the two must match. Finally, it restarts and enables the containerd service.
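Before moving on, confirm the service is healthy and the cgroup change took effect:
Bash
# Should print: active
systemctl is-active containerd
# Should show: SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml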
Step 5: Install Kubernetes Packages (kubeadm, kubelet, kubectl)
Bash
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
- What & Why: This sequence securely adds the official Kubernetes package repository to your system, installs the three key tools (kubeadm, kubelet, kubectl), and then "holds" their versions with apt-mark hold. Holding prevents accidental upgrades that could break the cluster, as Kubernetes versions must be managed carefully.
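Verify the versions and the hold before moving on:
Bash
# All three should report a v1.30.x version
kubeadm version -o short
kubectl version --client
kubelet --version
# Should list kubeadm, kubectl and kubelet
apt-mark showhold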
Chapter 4: The Control Plane
These steps are performed ONLY on your master node (k8s-master, <master-IP>).
Step 1: Initialize the Cluster with kubeadm
Action: On the k8s-master Node ONLY
Bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=k8s-master
What it does: This is the main command to bootstrap the control plane. It runs preflight checks, generates security certificates, and starts the core Kubernetes components.
Flags Explained:
--pod-network-cidr=10.244.0.0/16: This sets the internal IP range for your pods. Calico's manifest defaults to 192.168.0.0/16, but on a kubeadm cluster it detects and uses the CIDR you set here; the main requirement is that the range must not overlap with your VM network.
--control-plane-endpoint=k8s-master: This provides a stable endpoint (our hostname) for the workers to find the API server.
Important! After this command finishes, it will print two things. Copy them to a safe place.
A block of commands to configure kubectl.
A kubeadm join command with a token. This is how your workers will join the cluster.
Step 2: Configure kubectl Access
Action: On the k8s-master Node ONLY
Bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- What & Why: This copies the cluster's admin configuration file into your user's home directory. The kubectl command automatically looks for this file to get the credentials and address needed to communicate with the cluster.
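kubectl should respond immediately, although the master will report NotReady until the CNI is installed in the next chapter:
Bash
# Expect k8s-master with STATUS NotReady at this stage
kubectl get nodes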
Chapter 5: Weaving the Network
The control plane is up, but pod networking isn't active yet. We need to install our CNI.
Action: On the k8s-master Node ONLY
Bash
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
- What & Why: This command uses kubectl to download and apply the Calico manifest. This YAML file defines all the necessary components (Pods, Services, etc.) for Calico. Kubernetes then creates these components, which will manage your pod network, allowing pods on different nodes to communicate.
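You can watch the Calico components come up; once the calico-node and calico-kube-controllers pods are Running, the master node should switch to Ready:
Bash
# Press Ctrl+C to stop watching once everything is Running
kubectl get pods -n kube-system -w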
Chapter 6: Joining the Workers
Now, let's connect the worker nodes to our master.
Action: On BOTH k8s-worker1 and k8s-worker2 Nodes
Use the kubeadm join command that you saved earlier. It will look similar to this, but with your unique token and hash:
Bash
sudo kubeadm join k8s-master:6443 --token 4tw5tj.2dyk9jf4ohn70mnc \
    --discovery-token-ca-cert-hash sha256:d7023a2d4467d026449c2cc702cf2670e42ac6d60fc8372a1fbdd15a5640b72c
- What & Why: This single command (run with sudo, since it configures the node) tells the worker node to contact the master's API server (k8s-master:6443), authenticate using the secure token, and validate the master's identity with the certificate hash. It then configures the local kubelet to officially join the cluster.
Lost the join command? Don't worry. Just run this on the k8s-master node to generate a new one:
Bash
kubeadm token create --print-join-command
Chapter 7: Verification and Testing
Let's confirm the cluster is fully operational from the master node.
Action: On the k8s-master Node ONLY
Step 1: Check Node Status
Bash
kubectl get nodes -o wide
It may take a minute for the STATUS to become Ready. You should see an output like this:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master    Ready   control-plane   15m   v1.30.x   10.0.0.4   <master-IP>    Ubuntu 24.04 LTS   6.8.0-*   containerd://1.7.x
k8s-worker1   Ready   <none>          5m    v1.30.x   10.0.0.5   <worker1-IP>   Ubuntu 24.04 LTS   6.8.0-*   containerd://1.7.x
k8s-worker2   Ready   <none>          5m    v1.30.x   10.0.0.6   <worker2-IP>   Ubuntu 24.04 LTS   6.8.0-*   containerd://1.7.x
Step 2: Deploy and Expose a Test Application
Create an NGINX Deployment:
kubectl create deployment nginx-test --image=nginx
Expose the Deployment with a NodePort:
kubectl expose deployment nginx-test --type=NodePort --port=80
Find the NodePort:
kubectl get service nginx-test
The output will show a port mapping like 80:3XXXX/TCP. Note the 3XXXX port number.
Access the NGINX Server:
Open a web browser or use curl from any machine to access your NGINX server. Use the public IP of either worker node and the NodePort.
Bash
# Example using worker1's public IP and an example port of 31234
curl http://<worker1-IP>:31234
If you get the "Welcome to nginx!" response, you have successfully built and configured a Kubernetes cluster using kubeadm!
Accessing k8s locally
To access the cluster from your local machine, kubectl uses a kubeconfig file. By default, it's named config and lives in a directory called .kube in your user's home directory (~/.kube/config).
This YAML file acts like a phonebook for kubectl
. It contains all the information needed to connect to and authenticate with one or more Kubernetes clusters, including:
Cluster Address: The API server's URL (for our cluster, https://k8s-master:6443).
User Credentials: The security certificate and key to prove who you are.
Context: A nickname that ties a user to a cluster, allowing you to easily switch between them.
The goal is to securely copy this file from your master VM to your local computer.
Step-by-Step Guide to Local Access
Follow these four steps on your local computer (not in the SSH session).
Step 1: Install kubectl on Your Local Computer
If you don't already have it, you need to install the kubectl tool.
Windows: Open PowerShell and run winget install -e --id Kubernetes.kubectl.
macOS: Open Terminal and run brew install kubectl.
Linux (Ubuntu/Debian): kubectl is not in the stock Ubuntu repositories; add the Kubernetes apt repository exactly as in Chapter 3, Step 5, then run sudo apt-get update && sudo apt-get install -y kubectl.
Step 2: Securely Copy the kubeconfig File
Now you'll copy the admin configuration file from your master VM to your local machine using scp (Secure Copy). This takes two steps: first make a readable copy of the file on the master, then pull it down from a terminal or PowerShell window on your local computer.
Make the File Readable (on the Master VM):
SSH into your master VM and run this command. It uses sudo to copy the file to your home directory and changes its ownership to your current user.
# Run this command on your k8s-master VM via SSH
sudo cp /etc/kubernetes/admin.conf ~/admin.conf && sudo chown $(id -u):$(id -g) ~/admin.conf
Copy the File (from your Local PC):
Now, from your local machine, run the scp command, pointing it at the accessible copy in the home directory (~/admin.conf).
# Run these commands on your local PC (replace <vm-username> with your VM's login user)
mkdir -p ~/.kube
scp -i k8s_vm_key.pem <vm-username>@<master-IP>:~/admin.conf ~/.kube/config
This will successfully copy the file. For security, you can log back into the master VM and delete the temporary copy with rm ~/admin.conf.
Note: This will overwrite any existing ~/.kube/config file. If you manage other clusters (like Docker Desktop), see the "Managing Multiple Clusters" section below.
Step 3: Update the Master Node's Firewall (Crucial)
Your local computer now knows how to talk to the cluster, but the cloud firewall will block it. You need to tell the master node's networking rules to allow connections from your local computer's IP address.
Find Your Public IP: On your local computer, run the following command:
curl ipinfo.io/ip
Go to the Cloud Networking Portal: Navigate to the Networking tab for your master VM (k8s-master-nsg).
Edit the Kube-API rule: Click on the inbound rule for port 6443 (the one with Priority 110).
Add Your IP to the Source: In the "Source IP addresses/CIDR ranges" field, add your local public IP to the list. The field should now contain the IPs for your workers, the master itself, and your new local IP.
Save the rule.
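On Azure, the same edit can be made from the CLI. A sketch, reusing the hypothetical <resource-group> name from Chapter 2 (note that --source-address-prefixes replaces the whole list, so include the worker and master IPs again):
Bash
az network nsg rule update \
  --resource-group <resource-group> \
  --nsg-name k8s-master-nsg \
  --name Kube-API \
  --source-address-prefixes <worker1-IP> <worker2-IP> <master-IP> <your-public-IP>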
Why the IPs from the ip addr command won't work
The IP addresses it shows (192.168.x.x, 172.17.x.x, etc.) are all private IPs. They are only used for communication inside your local network (your home or office).
When you connect to your VM over the internet, your request goes through your internet router, which swaps your private IP for your network's single public IP address. The firewall only sees this public IP.
Alternatively, you can open a web browser and search for "what is my IP".
Step 4: Adding VM's public IP to /etc/hosts
Your local machine doesn't know the IP address for the hostname k8s-master.
Your kubeconfig file tells kubectl to connect to the server at the address https://k8s-master:6443. While you configured the hostname k8s-master in the /etc/hosts files on your VMs, you haven't done the same on your local computer. As a result, your local machine's DNS resolver fails to find where k8s-master is located.
Edit Your Local hosts File
You need to manually tell your local machine what IP address k8s-master corresponds to.
Open the /etc/hosts file on your local Ubuntu machine with root privileges.
Bash
sudo nano /etc/hosts
Add the following line to the bottom of the file. This maps the master VM's public IP to its hostname.
<master-IP> k8s-master
Save the file and exit by pressing Ctrl+X, then Y, and then Enter.
No restart is needed. The change takes effect immediately.
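If you prefer a one-liner over an editor, this appends the same entry:
Bash
echo "<master-IP> k8s-master" | sudo tee -a /etc/hosts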
Step 5: Verify the Connection
You're all set! To confirm it's working, run this command on your local computer's terminal:
Bash
kubectl get nodes -o wide
If it successfully returns the list of your three cluster nodes, you are now managing your K8s cluster directly from your local machine.
Managing Multiple Clusters
If you work with more than one cluster (e.g., one on AWS, one on Azure, one from Docker Desktop), simply overwriting the ~/.kube/config file is not ideal. kubectl can manage multiple connection profiles, called "contexts."
You can see all your configured contexts by running:
kubectl config get-contexts
The one with an asterisk (*) is the one you are currently connected to.
To switch between your different clusters, use the use-context command:
Bash
# View All Contexts
kubectl config get-contexts
# Get current context
kubectl config current-context
# Switch to your new cluster
kubectl config use-context kubernetes-admin@kubernetes
# Switch back to a Docker Desktop cluster (example name)
kubectl config use-context docker-desktop
This allows you to seamlessly manage any number of clusters from a single, convenient location: your own computer.
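If you want to keep your existing config and add this cluster instead of overwriting it, you can merge kubeconfig files. A sketch, assuming you saved the file copied from the master as ~/admin.conf rather than directly over ~/.kube/config:
Bash
# Merge both configs into one file, then replace the original
KUBECONFIG=~/.kube/config:~/admin.conf kubectl config view --flatten > ~/.kube/merged-config
mv ~/.kube/merged-config ~/.kube/config
# Both contexts should now be listed
kubectl config get-contexts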