MicroCeph is the easy way to Ceph

Introduction
This is part of the Distributed Storage series. We assume you have a Kubernetes cluster running as described in Kubernetes With Microk8s. Using the same virtual machines, we will deploy a Ceph cluster and enable the different types of storage: file, object, and block (the default).
Prerequisites
It is worth repeating that each VM has a spare disk that can be consumed by Ceph:
$ lsblk | grep -v loop
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0                        11:0    1  2.7G  0 rom
nvme0n1                   259:0    0   80G  0 disk
├─nvme0n1p1               259:1    0    1G  0 part /boot/efi
├─nvme0n1p2               259:2    0    2G  0 part /boot
└─nvme0n1p3               259:3    0 76.9G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0 38.5G  0 lvm  /
nvme0n2                   259:4    0   20G  0 disk
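The extra 20G disk (nvme0n2 here) is unpartitioned and carries no filesystem; MicroCeph will wipe and claim it when we add it later. If you want to double-check a specific device first:
# Confirm the spare disk has no partitions or filesystem signatures
$ lsblk -f /dev/nvme0n2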
Bootstrap Ceph Cluster
Let us install microceph.
You can purge any existing installation and start over if something goes wrong later.
# This will remove any existing installation
sudo snap remove microceph --purge
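If you want to see which channels the snap offers before installing (this guide ends up on the Squid-based stable channel), snap info will list them:
# Optional: list the available channels for the microceph snap
$ snap info microceph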
The following installs the snap and bootstraps a new cluster:
manas@manas-s01:~$ sudo snap install microceph --channel=stable
microceph (squid/stable) 19.2.0+snap3b53da1c21 from Canonical✓ installed
manas@manas-s01:~$ sudo microceph cluster bootstrap
manas@manas-s01:~$ sudo microceph status
MicroCeph deployment summary:
- manas-s01 (192.168.148.134)
  Services: mds, mgr, mon
  Disks: 0
We now have a single-node cluster. Adding other nodes works a bit differently than in microk8s: the cluster add command generates a unique token tied to the hostname of the node being added. If the cluster join fails, ensure you have used the correct hostname. For example, the VMs in this demo are named manas-s01, manas-s02 and manas-s03. Naming things is hard.
# Generate token to add nodes
$ sudo microceph cluster add <hostname>
<token>
# For each node
$ sudo microceph cluster join <token>
# Master Node
manas@manas-s01:~$ sudo microceph status
MicroCeph deployment summary:
- manas-s01 (192.168.148.135)
  Services: mds, mgr, mon
  Disks: 0
- manas-s02 (192.168.148.136)
  Services: mds, mgr, mon
  Disks: 0
- manas-s03 (192.168.148.137)
  Services: mds, mgr, mon
  Disks: 0
Next, we will claim storage to be consumed by the cluster. Remember, we kept an extra disk for this purpose. Repeat the following for each node:
# Repeat for each node
manas@manas-s01:~$ sudo microceph disk add /dev/nvme0n2 --wipe
+--------------+---------+
| PATH | STATUS |
+--------------+---------+
| /dev/nvme0n2 | Success |
+--------------+---------+
# Status shows that we have 3 OSDs
manas@manas-s01:~$ sudo microceph status
MicroCeph deployment summary:
- manas-s01 (192.168.148.134)
  Services: mds, mgr, mon, osd
  Disks: 1
- manas-s02 (192.168.148.136)
  Services: mds, mgr, mon, osd
  Disks: 1
- manas-s03 (192.168.148.135)
  Services: mds, mgr, mon, osd
  Disks: 1
Check the cluster and disk status:
manas@manas-s01:~$ sudo microceph.ceph status
  cluster:
    id:     e43d58a8-deb0-43d9-b7ef-d6159f114c02
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum manas-s01,manas-s02,manas-s03 (age 5m)
    mgr: manas-s01(active, since 52m), standbys: manas-s02, manas-s03
    osd: 3 osds: 3 up (since 101s), 3 in (since 108s)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   81 MiB used, 60 GiB / 60 GiB avail
    pgs:     1 active+clean
manas@manas-s01:~$ sudo microceph disk list
Disks configured in MicroCeph:
+-----+-----------+-----------------------------------------------------------+
| OSD | LOCATION  | PATH                                                      |
+-----+-----------+-----------------------------------------------------------+
| 1   | manas-s01 | /dev/disk/by-id/nvme-eui.3e9ca0d1c76f942a000c296b819ff947 |
+-----+-----------+-----------------------------------------------------------+
| 2   | manas-s02 | /dev/disk/by-id/nvme-eui.3e9ca0d1c76f942a000c296b819ff947 |
+-----+-----------+-----------------------------------------------------------+
| 3   | manas-s03 | /dev/disk/by-id/nvme-eui.3e9ca0d1c76f942a000c296b819ff947 |
+-----+-----------+-----------------------------------------------------------+
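Beyond the MicroCeph summary, the wrapped ceph CLI can also show how the OSDs map onto hosts and how full each one is; the exact output depends on your environment, so it is omitted here.
# OSD-to-host mapping in the CRUSH tree
$ sudo microceph.ceph osd tree

# Per-OSD capacity and utilisation
$ sudo microceph.ceph osd df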
Now, to consume file and object storage, we need to enable the relevant services. For example, we enable rgw for object storage on all the nodes (run the command below on each node):
$ sudo microceph enable rgw
$ sudo microceph status
MicroCeph deployment summary:
- manas-s01 (192.168.148.134)
  Services: mds, mgr, mon, rgw, osd
  Disks: 1
- manas-s02 (192.168.148.136)
  Services: mds, mgr, mon, rgw, osd
  Disks: 1
- manas-s03 (192.168.148.135)
  Services: mds, mgr, mon, rgw, osd
  Disks: 1
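With rgw enabled everywhere, a quick sanity check is to hit the gateway's default HTTP port on any node; an anonymous request should come back with an S3-style XML response (this assumes rgw is still listening on the default port 80):
# The object gateway should answer with an S3-style XML response
$ curl -s http://localhost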
Congratulations, your multi-node Ceph cluster is ready!
Note that microceph commands are different from the native ceph CLI; the Ceph tools are wrapped by the snap and invoked as microceph.ceph, as we did above.
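If you would rather type the familiar command name, a snap alias can map the wrapper to it; this is purely a convenience and entirely optional.
# Optional: alias the snap-wrapped Ceph CLI to its usual name
$ sudo snap alias microceph.ceph ceph

# Now the standard name works
$ ceph status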
Reference docs for the Squid release of Ceph are available in the official Ceph documentation.