Building a Kubernetes Cluster from Scratch: A Detailed Guide
Tuanh.net
1. Prerequisites
Before we dive into the process, ensure you have the following prerequisites:
+ Linux servers: At least three servers (1 master node, 2 worker nodes) running Ubuntu 20.04 LTS.
+ Access to the internet: The nodes will need internet access to download and install packages.
2. Environment Setup
Make sure these three virtual or physical machines can reach each other over the network. If you don't have static IPs for them, you can put them on the same LAN, for example (see the /etc/hosts note after this list):
+ Master Node: master-node (IP: 192.168.1.100)
+ Worker Node 1: worker-node-1 (IP: 192.168.1.101)
+ Worker Node 2: worker-node-2 (IP: 192.168.1.102)
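If the machines can't resolve each other by name, a minimal option (using the example IPs above; adjust them to your own) is to add the entries to /etc/hosts on every node and set each machine's hostname:
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.1.100 master-node
192.168.1.101 worker-node-1
192.168.1.102 worker-node-2
EOF
sudo hostnamectl set-hostname master-node   # use the matching name on each node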
2.1 Update and Upgrade the Servers
Start by updating and upgrading all your servers:
sudo apt-get update
sudo apt-get upgrade -y
2.2 Install Docker
Kubernetes needs a container runtime. On Ubuntu the simplest option is Docker, which also pulls in containerd (the runtime Kubernetes actually talks to). Install it on all three nodes:
sudo apt-get install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
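Before moving on, it's worth a quick sanity check that the runtime is actually up (optional):
sudo systemctl status docker --no-pager   # should report "active (running)"
docker --version
sudo docker info | grep -i cgroup         # shows the cgroup driver in use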
2.3 Install Kubernetes Components
We need to install kubeadm, kubelet, and kubectl on all nodes:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
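A quick check that everything is installed and that the packages are held at their current version:
kubeadm version
kubelet --version
kubectl version --client
apt-mark showhold   # should list kubeadm, kubectl and kubelet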
2.4 Initialize the Kubernetes Master Node
On the master node, we'll initialize the cluster. This process sets up the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Once the initialization is complete, you'll see output like the following, including a command to join the worker nodes to the cluster. Save that command; we'll need it later.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/networking/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
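If you lose this output, you don't need to re-initialize anything; the master can print a fresh join command at any time:
sudo kubeadm token create --print-join-command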
When I set up the master node, I ran into several errors. The first one was:
container runtime is not running: output: level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
To fix it, I reinstalled containerd and reset its configuration:
sudo apt remove containerd
sudo apt update
sudo apt install containerd.io
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
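If you'd rather keep a config file than delete it, another common fix (a sketch, not the only way) is to regenerate containerd's default configuration, which enables the CRI plugin, and switch it to the systemd cgroup driver:
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd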
The next error came from the kubelet:
E0616 21:27:18.529602 14900 run.go:74] "command failed" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
The kubelet was running with swap enabled, which Kubernetes does not support by default.
So I disabled swap, removed it from /etc/fstab, and restarted the kubelet:
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
sudo systemctl restart kubelet
sudo systemctl status kubelet
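To confirm swap is really gone before re-running kubeadm init:
free -h          # the Swap line should show 0B
swapon --show    # should print nothing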
If kubeadm's preflight checks complain that the required ports are already in use (usually leftovers from a previous failed attempt), free the ports and remove the stale static pod manifests and etcd data:
sudo fuser -k 6443/tcp
sudo fuser -k 10259/tcp
sudo fuser -k 10257/tcp
sudo fuser -k 10250/tcp
sudo fuser -k 2379/tcp
sudo fuser -k 2380/tcp
sudo rm /etc/kubernetes/manifests/kube-apiserver.yaml
sudo rm /etc/kubernetes/manifests/kube-controller-manager.yaml
sudo rm /etc/kubernetes/manifests/kube-scheduler.yaml
sudo rm /etc/kubernetes/manifests/etcd.yaml
sudo rm -r /var/lib/etcd
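These leftovers usually come from an earlier failed kubeadm init. A cleaner way to wipe that state is kubeadm's own reset command, then initialize again:
sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr=10.244.0.0/16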
3. Configure the Cluster
3.1 Configure kubectl
To start using your cluster, configure kubectl for the current user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
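Verify that kubectl can reach the API server. The master will typically report NotReady until a Pod network is deployed in the next step:
kubectl get nodes
kubectl cluster-info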
3.2 Deploy a Pod Network
Kubernetes requires a Pod network add-on so that pods can communicate across nodes. We'll use Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
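Give the Flannel pods a minute to start, then check that they are Running and that the master switches to Ready:
kubectl get pods --all-namespaces
kubectl get nodes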
4. Join Worker Nodes to the Cluster
On each worker node, run the command provided by the master node after the initialization. It should look something like this:
sudo kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
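Back on the master, confirm that both workers have registered and eventually report Ready:
kubectl get nodes -o wide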