Cloud native storage with Rook


Introduction
This post is part of the Distributed Storage series. We assume you have a Kubernetes cluster running as described in Kubernetes With MicroK8s. On the same virtual machines, we have deployed a Ceph cluster and enabled the different storage types: file, object, and block (the default). With that, the prerequisites are covered.
Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for Ceph storage to integrate natively with cloud-native environments. Its storage architecture is well documented, so here we focus on getting it up and running.
Enable Rook
First, we enable the rook-ceph addon on the MicroK8s cluster:
manas@manas-s01:~$ sudo microk8s enable rook-ceph
Infer repository core for addon rook-ceph
Add Rook Helm repository <https://charts.rook.io/release>
"rook-release" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "rook-release" chart repository
Update Complete. ⎈Happy Helming!⎈
Install Rook version v1.11.9
NAME: rook-ceph
LAST DEPLOYED: Fri Jun 13 16:01:21 2025
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Rook Operator has been installed. Check its status by running:
kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"
...
Check Rook Status
manas@manas-s01:~$ microk8s kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-684bbd569f-82dv9   1/1     Running   0          30m
Connect the Ceph and k8s clusters
manas@manas-s01:~$ sudo microk8s connect-external-ceph
[sudo] password for manas:
Looking for MicroCeph on the host
Detected existing MicroCeph installation
Attempting to connect to Ceph cluster
Successfully connected to e43d58a8-deb0-43d9-b7ef-d6159f114c02 (192.168.148.134:0/1580887393)
Creating pool microk8s-rbd0 in Ceph cluster
Configuring pool microk8s-rbd0 for RBD
Successfully configured pool microk8s-rbd0 for RBD
Creating namespace rook-ceph-external
namespace/rook-ceph-external created
Configuring Ceph CSI secrets
Successfully configured Ceph CSI secrets
Importing Ceph CSI secrets into MicroK8s
secret/rook-ceph-mon created
configmap/rook-ceph-mon-endpoints created
secret/rook-csi-rbd-node created
secret/rook-csi-rbd-provisioner created
storageclass.storage.k8s.io/ceph-rbd created
Importing external Ceph cluster
W0613 16:32:46.334927 129114 warnings.go:70] unknown field "spec.upgradeOSDRequiresHealthyPGs"
NAME: rook-ceph-external
LAST DEPLOYED: Fri Jun 13 16:32:45 2025
NAMESPACE: rook-ceph-external
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
kubectl --namespace rook-ceph-external get cephcluster
Visit <https://rook.io/docs/rook/latest/CRDs/Cluster/ceph-cluster-crd/> for more information about the Ceph CRD.
Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`
=================================================
Successfully imported external Ceph cluster. You can now use the following storageclass
to provision PersistentVolumes using Ceph CSI:
NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   2s
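With the ceph-rbd StorageClass in place, PersistentVolumes can be provisioned directly on the Ceph cluster. A minimal PersistentVolumeClaim sketch (the claim name and size here are illustrative, not from the setup above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # RBD block volumes are typically mounted by a single node
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd  # the StorageClass created by connect-external-ceph
```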
Check Ceph cluster status
# Initial
manas@manas-s01:~$ microk8s kubectl --namespace rook-ceph-external get cephcluster
NAME                 DATADIRHOSTPATH   MONCOUNT   AGE   PHASE   MESSAGE   HEALTH   EXTERNAL   FSID
rook-ceph-external   /var/lib/rook     3          67s                              true
# Final
manas@manas-s01:~$ microk8s kubectl --namespace rook-ceph-external get cephcluster
NAME                 DATADIRHOSTPATH   MONCOUNT   AGE    PHASE       MESSAGE                          HEALTH      EXTERNAL   FSID
rook-ceph-external   /var/lib/rook     3          6m1s   Connected   Cluster connected successfully   HEALTH_OK   true       e43d58a8-deb0-43d9-b7ef-d6159f114c02
Wait until all k8s resources are in Running status:
manas@manas-s01:~$ microk8s kubectl get all --namespace rook-ceph
NAME                                                READY   STATUS    RESTARTS   AGE
pod/csi-cephfsplugin-chz2w                          2/2     Running   0          5m7s
pod/csi-cephfsplugin-n2wxd                          2/2     Running   0          5m7s
pod/csi-cephfsplugin-provisioner-7bd8fb7c64-fbmw6   5/5     Running   0          5m7s
pod/csi-cephfsplugin-provisioner-7bd8fb7c64-tbqlp   5/5     Running   0          5m7s
pod/csi-cephfsplugin-zpltk                          2/2     Running   0          5m7s
pod/csi-rbdplugin-7dgxz                             2/2     Running   0          5m7s
pod/csi-rbdplugin-8x5x7                             2/2     Running   0          5m7s
pod/csi-rbdplugin-provisioner-5f7d95b6fb-s4znf      5/5     Running   0          5m7s
pod/csi-rbdplugin-provisioner-5f7d95b6fb-wjk7q      5/5     Running   0          5m7s
pod/csi-rbdplugin-zccc7                             2/2     Running   0          5m7s
pod/rook-ceph-operator-684bbd569f-82dv9             1/1     Running   0          39m

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/csi-cephfsplugin   3         3         3       3            3           <none>          5m7s
daemonset.apps/csi-rbdplugin      3         3         3       3            3           <none>          5m8s

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/csi-cephfsplugin-provisioner   2/2     2            2           5m7s
deployment.apps/csi-rbdplugin-provisioner      2/2     2            2           5m7s
deployment.apps/rook-ceph-operator             1/1     1            1           39m

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/csi-cephfsplugin-provisioner-7bd8fb7c64   2         2         2       5m7s
replicaset.apps/csi-rbdplugin-provisioner-5f7d95b6fb      2         2         2       5m7s
replicaset.apps/rook-ceph-operator-684bbd569f             1         1         1       39m
manas@manas-s01:~$
Meanwhile, if you want to explore the available API resources, use the following command:
manas@manas-s01:~$ microk8s kubectl api-resources --namespace rook-ceph-external | grep ceph
NAME                            SHORTNAMES        APIVERSION        NAMESPACED   KIND
cephblockpoolradosnamespaces                      ceph.rook.io/v1   true         CephBlockPoolRadosNamespace
cephblockpools                                    ceph.rook.io/v1   true         CephBlockPool
cephbucketnotifications                           ceph.rook.io/v1   true         CephBucketNotification
cephbuckettopics                                  ceph.rook.io/v1   true         CephBucketTopic
cephclients                                       ceph.rook.io/v1   true         CephClient
cephclusters                                      ceph.rook.io/v1   true         CephCluster
cephfilesystemmirrors                             ceph.rook.io/v1   true         CephFilesystemMirror
cephfilesystems                                   ceph.rook.io/v1   true         CephFilesystem
cephfilesystemsubvolumegroups                     ceph.rook.io/v1   true         CephFilesystemSubVolumeGroup
cephnfses                       nfs               ceph.rook.io/v1   true         CephNFS
cephobjectrealms                                  ceph.rook.io/v1   true         CephObjectRealm
cephobjectstores                                  ceph.rook.io/v1   true         CephObjectStore
cephobjectstoreusers            rcou,objectuser   ceph.rook.io/v1   true         CephObjectStoreUser
cephobjectzonegroups                              ceph.rook.io/v1   true         CephObjectZoneGroup
cephobjectzones                                   ceph.rook.io/v1   true         CephObjectZone
cephrbdmirrors                                    ceph.rook.io/v1   true         CephRBDMirror
To see block storage status, use cephblockpools. Similarly, cephobjectstores is for object storage and cephnfses is for NFS.
Refer to the examples at https://github.com/rook/rook/tree/master/deploy/examples
Create Block Storage
$ microk8s kubectl create -f storageClass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
$ microk8s kubectl -n rook-ceph get cephblockpools
NAME          PHASE
replicapool   Progressing
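For reference, the storageClass.yaml in the Rook examples repo defines both the replicated pool and the StorageClass that uses it. A trimmed sketch (field values follow the upstream example and may differ in your copy; the CSI secret parameters are omitted for brevity):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host   # spread replicas across hosts
  replicated:
    size: 3             # keep three copies of each object
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph            # namespace of the Rook operator/cluster
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```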
$ microk8s kubectl create -f mysql.yaml
service/wordpress-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/wordpress-mysql created
Create Object Stores
This requires the rgw service, which we have already enabled.
$ microk8s kubectl create -f object.yaml
$ microk8s kubectl -n rook-ceph get cephobjectstores
NAME       PHASE
my-store   Progressing
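The object.yaml manifest (again from the Rook examples) declares a CephObjectStore served by RGW gateways. A trimmed sketch, following the upstream example's pool layout:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3             # replicate bucket metadata
  dataPool:
    failureDomain: host
    erasureCoded:         # upstream example erasure-codes object data
      dataChunks: 2
      codingChunks: 1
  gateway:
    port: 80              # S3-compatible RGW endpoint
    instances: 1
```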
Create Shared File Systems
$ microk8s kubectl create -f file.yaml
$ microk8s kubectl get cephfilesystems -n rook-ceph
NAME   ACTIVEMDS   AGE     PHASE
myfs   1           5m38s   Progressing
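The file.yaml manifest defines a CephFilesystem with a metadata pool, data pool, and MDS configuration. A trimmed sketch following the Rook example (your copy may set additional pool options):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1        # one active MDS (the ACTIVEMDS column)
    activeStandby: true   # plus a hot standby
```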
That’s it! We have deployed a complete cloud-native storage solution, and entirely locally at that.
Written by

Manas Singh
14+ Years in Enterprise Storage & Virtualization | Python Test Automation | Leading Quality Engineering at Scale