kOps - Kubernetes for a Production-Ready Setup
What is kOps?
kOps, also known as Kubernetes Operations, is an open-source project that helps you create, destroy, upgrade, and maintain a highly available, production-grade Kubernetes cluster. Depending on your requirements, kOps can also provision the cloud infrastructure.
It is one of the easiest ways to get a production-grade Kubernetes cluster up and running, and it is used to create and manage production-grade Kubernetes clusters in the cloud. It is integrated with AWS and Google Cloud (GCE), and supports some other platforms as well.
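For context, with kOps itself a cluster is created with a single command against a state store. Below is a minimal sketch on AWS; the S3 bucket, cluster name, and zone are placeholder values, not part of this guide's setup:
# Illustrative sketch only: bucket, cluster name and zone are placeholders
export KOPS_STATE_STORE=s3://example-kops-state-store
kops create cluster \
  --name=demo.k8s.local \
  --zones=us-east-1a \
  --node-count=2 \
  --yes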
Warning: Please refer to the official documentation for more advanced information before proceeding to a production setup.
Let's dive into the setup...
Server Configuration
Run 3 VMs: 1 master, 2 workers
Operating system version: Ubuntu Server 22.04
| Node | IP | Hostname |
| --- | --- | --- |
| Master | 192.168.197.181 | k8s-master.demo.local |
| Worker1 | 192.168.197.182 | k8s-worker1.demo.local |
| Worker2 | 192.168.197.183 | k8s-worker2.demo.local |
Note: here we are using three Ubuntu 22.04 Server VMs. You can also choose another distribution such as CentOS; just make sure you adjust the commands accordingly.
Prepare the servers
After installing the operating system, update the software on each machine to the latest version (performed on all nodes):
sudo apt update
sudo apt upgrade -y
After the update is complete, restart the server:
sudo reboot
Set the hostname on each node, then restart the server.
Master node
sudo hostnamectl set-hostname "k8s-master.demo.local"
sudo init 6 # Restart the server
Worker 1
sudo hostnamectl set-hostname "k8s-worker1.demp.local"
sudo init 6 # Restart the server
Worker 2
sudo hostnamectl set-hostname "k8s-worker2.demo.local"
sudo init 6 # Restart the server
Next, update the /etc/hosts file on all nodes by adding the entries below:
sudo vim /etc/hosts
192.168.197.181 k8s-master.demo.local k8s-master
192.168.197.182 k8s-worker1.demo.local k8s-worker1
192.168.197.183 k8s-worker2.demo.local k8s-worker2
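To confirm the entries resolve correctly, you can look them up and ping a node by hostname from any machine, for example:
# Verify name resolution via /etc/hosts
getent hosts k8s-master.demo.local
ping -c 2 k8s-worker1.demo.local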
Disable swap and configure kernel modules
Note:
The commands below are executed on all nodes
sudo swapoff -a
Check whether swap is disabled with the command free -h. If successful, the result will look like this:
$ free -h
total used free shared buff/cache available
Mem: 7.7Gi 167Mi 7.1Gi 1.0Mi 437Mi 7.3Gi
Swap: 0B 0B 0B
Next, disable swap permanently in /etc/fstab:
sudo vim /etc/fstab
Find the line: /swap.img none swap sw 0 0
and comment it out with #:
#/swap.img none swap sw 0 0
Then run the commands:
sudo mount -a
free -h
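If you prefer a non-interactive edit, the same change can be made with a sed one-liner; this sketch assumes the fstab entry starts with /swap.img as shown above:
# Back up fstab, then comment out the swap entry
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/^\/swap.img/ s/^/#/' /etc/fstab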
Load the following kernel modules on all the nodes:
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
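You can confirm that both modules are loaded:
# Both overlay and br_netfilter should be listed
lsmod | grep -E 'overlay|br_netfilter'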
Set the following kernel parameters for Kubernetes:
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Then reload sysctl:
sudo sysctl --system
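Verify that the parameters took effect; each should print = 1:
# Check the three Kubernetes networking parameters
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward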
Install the containerd runtime
Note:
Run the commands below on all nodes
# Install dependency packages
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
# Add the Docker repository (it provides containerd)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Update system packages
sudo apt update
# Install containerd
sudo apt install -y containerd.io
After the installation is complete, generate the default containerd configuration.
# Generate the default containerd config
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
# Switch containerd to the systemd cgroup driver (used by kubelet on Ubuntu)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Start and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
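Before moving on, it is worth checking that containerd is running and that the cgroup change was applied:
# Should print "active"
systemctl is-active containerd
# Should print the line with SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml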
Install Kubernetes
Note:
Run the commands below on all nodes
# Add the Kubernetes repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
# Update system packages
sudo apt update
# Install kubelet, kubeadm and kubectl
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
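Confirm the tools are installed and the packages are held at their current version:
# Check installed versions and hold status
kubeadm version
kubectl version --client
apt-mark showhold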
Initialize the cluster with kubeadm
Note:
Run the command below only on the master node.
Here 10.20.0.0/16 is the pod network CIDR; replace it to suit your environment.
sudo kubeadm init \
--pod-network-cidr=10.20.0.0/16 \
--control-plane-endpoint=k8s-master.demo.local
If the run is successful, the output will be as follows:
#Output
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master.demo.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.197.181]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master.demo.local localhost] and IPs [192.168.197.181 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master.demo.local localhost] and IPs [192.168.197.181 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.503422 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master.demo.local as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master.demo.local as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: dfii4z.h3qu0s6ircq3pa0
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap,RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listedat:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-master.demo.local:6443 --token dfii4z.h3qu0s6ircq3pa0 \
--discovery-token-ca-cert-hash sha256:58b9cc96ed57a5797fddea653756dbda830efbff55b720a10cffb3948d489148 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master.demo.local:6443 --token dfii4z.h3qu0s6ircq3pa0 \
--discovery-token-ca-cert-hash sha256:58b9cc96ed57a5797fddea653756dbda830efbff55b720a10cffb3948d489148
Run the following commands to configure kubectl access to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now try running a few commands to check the status of the cluster:
#Get cluster Info
karthick@k8s-master:~$ kubectl cluster-info
#Output
Kubernetes control plane is running at https://k8s-master.demo.local:6443
CoreDNS is running at https://k8s-master.demo.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
#Get Node info
karthick@k8s-master:~$ kubectl get nodes
#Output
NAME STATUS ROLES AGE VERSION
k8s-master.demo.local NotReady control-plane 40m v1.26.1
We can see that the control plane is running and that there is currently only one master node. Next, we will add the worker nodes to this cluster.
Add worker nodes to the cluster
Note: Run the commands below only on the worker nodes.
Review the output of the sudo kubeadm init command above and copy the join command printed under this line:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master.demo.local:6443 --token dfii4z.h3qu0s6ircq3pa0 \
--discovery-token-ca-cert-hash sha256:58b9cc96ed57a5797fddea653756dbda830efbff55b720a10cffb3948d489148
After running it, the terminal output will look like the following:
karthick@k8s-worker1:~$ sudo kubeadm join k8s-master.demo.local:6443 --token dfii4z.h3qu0s6ircq3pa0 \
--discovery-token-ca-cert-hash sha256:58b9cc96ed57a5797fddea653756dbda830efbff55b720a10cffb3948d489148
#Output
[sudo] password for karthick:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
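Note that bootstrap tokens expire after 24 hours by default. If you join a worker later and the original token no longer works, you can print a fresh join command on the master:
# Generate a new token and print the matching join command
sudo kubeadm token create --print-join-command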
Check the node status with kubectl get nodes:
karthick@k8s-master:~$ kubectl get nodes
#Output
NAME STATUS ROLES AGE VERSION
k8s-master.demo.local NotReady control-plane 57m v1.26.1
k8s-worker1.demo.local NotReady <none> 90s v1.26.1
k8s-worker2.demo.local NotReady <none> 45s v1.26.1
This shows that the nodes have been successfully added to the cluster.
Install the Calico pod network on the Kubernetes cluster
Note: Run the commands below only on the master node.
First, download the manifest (a YAML file) that installs Calico on the Kubernetes cluster:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
Open the newly downloaded file and find the CALICO_IPV4POOL_CIDR section, which is commented out by default. Uncomment the lines below:
---
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
Revise the IP range to match the pod network CIDR passed to the sudo kubeadm init command. In this example it is 10.20.0.0/16, so after modification the file looks like this:
---
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: '10.20.0.0/16'
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: 'true'
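If you prefer to script this change, a sed sketch like the one below works, assuming the stock v3.25.0 manifest still contains the commented lines exactly as shown:
# Uncomment CALICO_IPV4POOL_CIDR and point it at the kubeadm pod CIDR
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.20.0.0/16"|' calico.yaml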
Now apply Calico to the Kubernetes cluster:
karthick@k8s-master:~$ kubectl apply -f calico.yaml
#Output
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
Check whether Calico was deployed successfully by looking at the pods in the kube-system namespace:
karthick@k8s-master:~$ kubectl get pods -n kube-system
#Output
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-34b86c56s-hfdac 1/1 Running 0 4m48s
calico-node-6rfjl 1/1 Running 0 4m48s
calico-node-ddsjz 1/1 Running 0 4m48s
calico-node-ar45d 1/1 Running 0 4m48s
coredns-890c2560in-c2w3a 1/1 Running 0 70m
coredns-890c2560in-pq3cs 1/1 Running 0 70m
If the status is Running, the deployment was successful. If you now check the node status, the nodes will be Ready.
karthick@k8s-master:~$ kubectl get nodes
#Output
NAME STATUS ROLES AGE VERSION
k8s-master.demo.local Ready control-plane 72m v1.26.1
k8s-worker1.demo.local Ready <none> 5m48s v1.26.1
k8s-worker2.demo.local Ready <none> 5m0s v1.26.1
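As a final smoke test, you can deploy a small workload and check that its pod gets an IP from the 10.20.0.0/16 range; the deployment name here is just an example:
# Deploy a test pod and inspect its IP
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide
# Clean up when done
kubectl delete deployment nginx-test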
Follow for more: Karthick D
LinkedIn: https://www.linkedin.com/in/karthick-dkk/
Medium: https://karthidkk123.medium.com/
Github: https://github.com/karthick-dkk/
Hashnode: https://karthick-dk.hashnode.dev/