Import Kubeadm Cluster to Rancher Manager

In this article I will show you how to import an existing kubeadm cluster into Rancher Manager. Rancher Manager provides a single web interface to monitor, manage, and operate all of your clusters: centralized dashboards, unified RBAC, application catalogs, and consistent monitoring across your entire Kubernetes infrastructure, all without changing how your clusters operate underneath. The environment below is the prerequisite setup for importing a cluster into Rancher Manager.
So, let's get started…
Environment
- RKE2 Cluster and Rancher Manager
Hostname | at-rke2-1 (master node)
Operating System | Ubuntu 22.04 (Jammy)
vCPU | 8 (more than needed for a lab)
Memory | 12 GB (more than needed for a lab)
Disk | 60 GB
IP Address | 172.20.20.65

Hostname | at-rke2-2 (worker node)
Operating System | Ubuntu 22.04 (Jammy)
vCPU | 4
Memory | 8 GB
Disk | 40 GB
IP Address | 172.20.20.66

Hostname | at-rke2-3 (ingress node)
Operating System | Ubuntu 22.04 (Jammy)
vCPU | 2
Memory | 4 GB
Disk | 30 GB
IP Address | 172.20.20.67
- Kubeadm Cluster
Hostname | at-kubeadm (master node)
Operating System | Ubuntu 22.04 (Jammy)
vCPU | 4
Memory | 8 GB
Disk | 60 GB
IP Address | 172.20.20.75

Hostname | at-kubeadm-2 (worker node)
Operating System | Ubuntu 22.04 (Jammy)
vCPU | 2
Memory | 4 GB
Disk | 40 GB
IP Address | 172.20.20.76
Import Kubeadm Cluster to Rancher Manager
- Mapping hosts
# exec on rke2 cluster nodes
nano /etc/hosts
---
172.20.20.65 at-rke2-1 at-rke2-1.at.lab rancher.at.lab
172.20.20.66 at-rke2-2 at-rke2-2.at.lab
172.20.20.67 at-rke2-3 at-rke2-3.at.lab
172.20.20.75 at-kubeadm kubeadm.at.lab
172.20.20.76 at-kubeadm-2
# exec on kubeadm cluster nodes
nano /etc/hosts
---
172.20.20.65 at-rke2-1 at-rke2-1.at.lab
172.20.20.66 at-rke2-2 at-rke2-2.at.lab
172.20.20.67 at-rke2-3 at-rke2-3.at.lab rancher.at.lab
172.20.20.75 at-kubeadm kubeadm.at.lab
172.20.20.76 at-kubeadm-2
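Instead of editing /etc/hosts by hand on every node, the entries above can be added idempotently with a small helper, so re-running the setup never duplicates lines. A minimal sketch, assuming the IPs and hostnames from this article; the `add_host` helper is my own, and it writes to a temp copy so the sketch is safe to run anywhere — set `HOSTS=/etc/hosts` on the actual cluster nodes:

```shell
# Append a hosts entry only if the IP is not already mapped.
# HOSTS points at a temp copy here for safety; use HOSTS=/etc/hosts on real nodes.
HOSTS="$(mktemp)"

add_host() {
    # $1 = IP, $2 = hostname(s); simple prefix match keeps re-runs idempotent
    grep -q "^$1 " "$HOSTS" 2>/dev/null || printf '%s %s\n' "$1" "$2" >> "$HOSTS"
}

add_host 172.20.20.65 "at-rke2-1 at-rke2-1.at.lab rancher.at.lab"
add_host 172.20.20.66 "at-rke2-2 at-rke2-2.at.lab"
add_host 172.20.20.67 "at-rke2-3 at-rke2-3.at.lab"
add_host 172.20.20.75 "at-kubeadm kubeadm.at.lab"
add_host 172.20.20.76 "at-kubeadm-2"
add_host 172.20.20.65 "at-rke2-1 at-rke2-1.at.lab rancher.at.lab"  # duplicate call is a no-op

cat "$HOSTS"
```

On the RKE2 nodes and the kubeadm nodes you would call `add_host` with the respective `rancher.at.lab` mapping for each side, as shown in the two /etc/hosts listings above.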
- Import existing cluster
- Set a name and create the generic import cluster
- Copy the registration command
- Run the registration command on the kubeadm cluster
# exec on kubeadm master node (at-kubeadm)
curl --insecure -sfL https://rancher.at.lab/v3/import/dsxdbngbwfvj75k6nnlh2zf6jqtp7wl66d9gvv5w7qrhtq7tv4jc5s_c-fjlfv.yaml | kubectl apply -f -
# if the agent cannot resolve the rancher domain, use the node's host network and DNS
kubectl edit deployment cattle-cluster-agent -n cattle-system
---
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
By default, the cattle-cluster-agent deployment runs 2 replicas. If you only have one control-plane/master node, its affinity rules push both pods onto that node, so one pod ends up in CrashLoopBackOff (CLBO) due to a port conflict. You can delete the CLBO pod so it gets rescheduled onto another (worker) node.
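Picking out the stuck pod is a matter of filtering the pod list by its STATUS column. The sketch below uses sample `kubectl get pods` output (the pod names are made up) so it runs anywhere; on the cluster, pipe the real `kubectl -n cattle-system get pods` output through the same `awk` filter and delete the selected pod:

```shell
# Sample output standing in for: kubectl -n cattle-system get pods
# (pod names below are hypothetical)
sample='NAME                                READY   STATUS             RESTARTS   AGE
cattle-cluster-agent-6d8f9c-abcde   1/1     Running            0          5m
cattle-cluster-agent-6d8f9c-fghij   0/1     CrashLoopBackOff   4          5m'

# Select the pod whose STATUS column is CrashLoopBackOff
clbo=$(printf '%s\n' "$sample" | awk '$3 == "CrashLoopBackOff" { print $1 }')
echo "$clbo"

# On the real cluster, delete it so the scheduler places it elsewhere:
#   kubectl -n cattle-system delete pod "$clbo"
```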
- Verification
- Install and configure kubectx for centralized cluster management
# exec on rke2 master node (at-rke2-1)
git clone https://github.com/ahmetb/kubectx /opt/kubectx
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
Open https://rancher.at.lab/dashboard/home, go to Manage, click the options (triple-dot) menu on each cluster, then choose Copy KubeConfig to Clipboard.
# exec on rke2 master node (at-rke2-1)
nano /root/.kube/config
---
apiVersion: v1
kind: Config
clusters:
- name: "local"
  cluster:
    server: "https://rancher.at.lab/k8s/clusters/local"
    certificate-authority-data: "[REDACTED]"
- name: "kubeadm-cluster"
  cluster:
    server: "https://rancher.at.lab/k8s/clusters/c-fjlfv"
    certificate-authority-data: "[REDACTED]"
users:
- name: "local"
  user:
    token: "[REDACTED]"
- name: "kubeadm-cluster"
  user:
    token: "[REDACTED]"
contexts:
- name: "rke2-cluster"
  context:
    user: "local"
    cluster: "local"
- name: "kubeadm-cluster"
  context:
    user: "kubeadm-cluster"
    cluster: "kubeadm-cluster"
current-context: "rke2-cluster"
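After merging both clusters into one kubeconfig, a quick sanity check is to list the context names — `kubectl config get-contexts` does this on the node. The plain-`awk` sketch below (the filter is my own) extracts the same names without kubectl, which is handy for spotting indentation mistakes in the file; it runs against a redacted sample of the config so it is safe to try anywhere:

```shell
# Write a redacted sample of the merged kubeconfig to a temp file,
# then extract the context names with plain awk.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
apiVersion: v1
kind: Config
contexts:
- name: "rke2-cluster"
  context:
    user: "local"
    cluster: "local"
- name: "kubeadm-cluster"
  context:
    user: "kubeadm-cluster"
    cluster: "kubeadm-cluster"
current-context: "rke2-cluster"
EOF

contexts=$(awk '
  /^contexts:/ { in_ctx = 1; next }
  /^[^ -]/     { in_ctx = 0 }                          # next top-level key ends the section
  in_ctx && /- name:/ { gsub(/"/, "", $3); print $3 }  # strip quotes, print the name
' "$CFG")
echo "$contexts"
```

Point the `awk` command at `/root/.kube/config` on at-rke2-1 to check the real file.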
# exec on rke2 master node (at-rke2-1)
nano ~/.bashrc
---
#export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export KUBECONFIG=/root/.kube/config
source ~/.bashrc
# map the rancher domain to the ingress node IP, since kubectx reaches the clusters via the external/ingress endpoint
nano /etc/hosts
---
#172.20.20.65 rancher.at.lab
172.20.20.67 rancher.at.lab
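The same remap can be scripted with `sed`: comment out the master-node mapping, then append the ingress-node one. A sketch operating on a temp copy so it is safe to run anywhere; set `HOSTS=/etc/hosts` on at-rke2-1 itself:

```shell
# Repoint rancher.at.lab from the master node to the ingress node.
# HOSTS is a temp copy here for safety; use HOSTS=/etc/hosts on the real node.
HOSTS="$(mktemp)"
echo '172.20.20.65 rancher.at.lab' > "$HOSTS"

# Comment out the old mapping ('&' re-inserts the matched line after '#')
sed -i 's/^172\.20\.20\.65 rancher\.at\.lab/#&/' "$HOSTS"
# Append the ingress-node mapping
echo '172.20.20.67 rancher.at.lab' >> "$HOSTS"

cat "$HOSTS"
```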
- Switch cluster with kubectx
# exec on rke2 master node (at-rke2-1)
kubectx
kubectx kubeadm-cluster
kubectl get nodes -o wide
- Operational test
# exec on rke2 master node (at-rke2-1)
kubectl create deployment nginx-import --image=nginx --replicas=1
kubectl expose deployment nginx-import --port=80 --target-port=80
kubectl create ingress nginx-import-ingress --class=nginx --rule="nginx-import.at.lab/*=nginx-import:80"
nano /etc/hosts
---
172.20.20.75 nginx-import.at.lab
Thank You.
Written by Muhammad Alfian Tirta Kusuma.