Import Kubeadm Cluster to Rancher Manager

In this article I will show you how to import an existing Kubeadm cluster into Rancher Manager. Rancher Manager provides a single web interface to monitor, manage, and operate all of your clusters: centralized dashboards, unified RBAC, application catalogs, and consistent monitoring across your entire Kubernetes infrastructure, all without changing how the clusters operate underneath. Below is the environment used in this article: the RKE2 cluster that runs Rancher Manager, and the Kubeadm cluster we will import.

So, let's get started…

Environment

  • RKE2 Cluster and Rancher Manager
    at-rke2-1 (master node): Ubuntu 22.04 (Jammy), 8 vCPU, 12 GB memory (oversized for a test setup), 60 GB disk, 172.20.20.65
    at-rke2-2 (worker node): Ubuntu 22.04 (Jammy), 4 vCPU, 8 GB memory, 40 GB disk, 172.20.20.66
    at-rke2-3 (ingress node): Ubuntu 22.04 (Jammy), 2 vCPU, 4 GB memory, 30 GB disk, 172.20.20.67
  • Kubeadm Cluster
    at-kubeadm (master node): Ubuntu 22.04 (Jammy), 4 vCPU, 8 GB memory, 60 GB disk, 172.20.20.75
    at-kubeadm-2 (worker node): Ubuntu 22.04 (Jammy), 2 vCPU, 4 GB memory, 40 GB disk, 172.20.20.76

Import Kubeadm Cluster to Rancher Manager

  1. Mapping hosts
# exec on rke2 cluster nodes
nano /etc/hosts
---
172.20.20.65 at-rke2-1 at-rke2-1.at.lab rancher.at.lab
172.20.20.66 at-rke2-2 at-rke2-2.at.lab
172.20.20.67 at-rke2-3 at-rke2-3.at.lab

172.20.20.75 at-kubeadm kubeadm.at.lab
172.20.20.76 at-kubeadm-2

# exec on kubeadm cluster nodes
nano /etc/hosts
---
172.20.20.65 at-rke2-1 at-rke2-1.at.lab
172.20.20.66 at-rke2-2 at-rke2-2.at.lab
172.20.20.67 at-rke2-3 at-rke2-3.at.lab rancher.at.lab

172.20.20.75 at-kubeadm kubeadm.at.lab
172.20.20.76 at-kubeadm-2
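
Optionally, quick-check that the names resolve before moving on (a small sanity check; it assumes getent and ping are available on the nodes):
# exec on any node
getent hosts rancher.at.lab
ping -c 1 rancher.at.lab
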
  2. Import existing cluster from the Rancher Manager UI

  3. Set a name and create the imported Generic cluster

  4. Copy the registration command

  5. Run the registration command on the Kubeadm cluster
# exec on kubeadm master node (at-kubeadm)
curl --insecure -sfL https://rancher.at.lab/v3/import/dsxdbngbwfvj75k6nnlh2zf6jqtp7wl66d9gvv5w7qrhtq7tv4jc5s_c-fjlfv.yaml | kubectl apply -f -

# if the cattle-cluster-agent pod fails because it cannot resolve the rancher.at.lab domain, switch the deployment to host networking
kubectl edit deployment cattle-cluster-agent -n cattle-system
---
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
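
Instead of editing the deployment interactively, the same change can be applied with a one-line patch (a minimal sketch using kubectl patch; adjust the namespace/deployment if your setup differs):
# exec on kubeadm master node (at-kubeadm)
kubectl -n cattle-system patch deployment cattle-cluster-agent --type merge \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true,"dnsPolicy":"ClusterFirstWithHostNet"}}}}'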

By default the cattle-cluster-agent deployment runs 2 replicas. If you only have 1 control-plane/master node, its affinity rules make both pods land on that node, so one pod ends up in CrashLoopBackOff (CLBO) because of a port conflict. You can delete the CLBO pod so it gets rescheduled onto another node (the worker), as shown in the sketch below.
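
A sketch of how that looks in practice; the pod name is a placeholder you need to replace with the actual CLBO pod:
# exec on kubeadm master node (at-kubeadm)
kubectl -n cattle-system get pods -o wide                  # find the CrashLoopBackOff cattle-cluster-agent pod
kubectl -n cattle-system delete pod <crashloop-pod-name>   # replace with the real pod name; it should reschedule onto the worker
# alternatively, with a single control-plane node you can run a single replica
# (keep in mind Rancher manages this deployment and may revert the change)
kubectl -n cattle-system scale deployment cattle-cluster-agent --replicas=1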

  6. Verification
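
On the CLI side, a quick way to verify the import (a minimal check; the cluster should also show as Active in the Rancher Manager UI):
# exec on kubeadm master node (at-kubeadm)
kubectl -n cattle-system get pods                                        # cattle-cluster-agent pods should be Running
kubectl -n cattle-system logs deployment/cattle-cluster-agent --tail=20  # look for a successful connection to rancher.at.lab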

  7. Install and configure kubectx for centralized cluster management
# exec on rke2 master node (at-rke2-1)
git clone https://github.com/ahmetb/kubectx /opt/kubectx
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx

In the Rancher UI, open https://rancher.at.lab/dashboard/home -> Manage -> click the options (triple-dot) menu for each cluster -> Copy KubeConfig to Clipboard, then merge both entries into a single kubeconfig as shown below.

# exec on rke2 master node (at-rke2-1)
nano /root/.kube/config
---
apiVersion: v1
kind: Config
clusters:
- name: "local"
  cluster:
    server: "https://rancher.at.lab/k8s/clusters/local"
    certificate-authority-data: "[REDACTED]"
- name: "kubeadm-cluster"
  cluster:
    server: "https://rancher.at.lab/k8s/clusters/c-fjlfv"
    certificate-authority-data: "[REDACTED]"

users:
- name: "local"
  user:
    token: "[REDACTED]"
- name: "kubeadm-cluster"
  user:
    token: "[REDACTED]"

contexts:
- name: "rke2-cluster"
  context:
    user: "local"
    cluster: "local"
- name: "kubeadm-cluster"
  context:
    user: "kubeadm-cluster"
    cluster: "kubeadm-cluster"

current-context: "rke2-cluster"
# exec on rke2 master node (at-rke2-1)
nano ~/.bashrc
---
#export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export KUBECONFIG=/root/.kube/config

source ~/.bashrc

# map rancher.at.lab to the ingress node IP, because kubectl/kubectx now reaches both clusters through Rancher's external/ingress endpoint
nano /etc/hosts
---
172.20.20.65 #rancher.at.lab
172.20.20.67 rancher.at.lab
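
The same kubectx repository also ships kubens; linking it as well is optional but handy for switching namespaces inside whichever cluster is currently active:
# exec on rke2 master node (at-rke2-1), optional
ln -s /opt/kubectx/kubens /usr/local/bin/kubens
kubens                    # list namespaces in the current cluster
kubens cattle-system      # switch the default namespace
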
  8. Switch clusters with kubectx
# exec on rke2 master node (at-rke2-1)
kubectx
kubectx kubeadm-cluster
kubectl get nodes -o wide

  9. Operational test
# exec on rke2 master node (at-rke2-1)
kubectl create deployment nginx-import --image=nginx --replicas=1
kubectl expose deployment nginx-import --port=80 --target-port=80 
kubectl create ingress nginx-import-ingress --class=nginx --rule="nginx-import.at.lab/*=nginx-import:80"

nano /etc/hosts
---
172.20.20.75 nginx-import.at.lab
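
To confirm the chain works end to end, a simple HTTP check (this assumes an NGINX ingress controller is already running in the Kubeadm cluster and reachable on 172.20.20.75:80):
# exec on rke2 master node (at-rke2-1)
curl -s http://nginx-import.at.lab | grep -i "welcome to nginx"   # the default nginx page should come back through the ingress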

Thank You.
