A Raspberry Pi Apache Druid® cluster? WELL WHY NOT!
The objective? Hilarity. But also a pure, unadulterated hypercool springboard for learning everything there is to know about Druid without spending tonnes of cash!
3 x Data Nodes, using Raspberry Pi 4s with 8GB RAM and a fast 256 GB memory card
1 x Query Node, using a Raspberry Pi 4 with 4GB RAM and 64GB memory card
1 x Master Node, using a Raspberry Pi 4 with 4GB RAM and 64GB memory card
1 x Object Store Service, an Ubuntu Virtual Machine running Min.IO
1 x Relational Database Service, running MySql on a very old, very hot Windows 10 desktop
A previous incarnation, which ran perfectly well, had 2 x data nodes on Raspberry Pi 4s with 4GB RAM, with the Master processes running in the same VM as the Object Store. It functioned perfectly, and ingested the entire Chicago Taxi Dataset in ~3 hours without me particularly trying to optimise anything… The version we’ll build in this “how to” is a bit posher because I saved my Beer Tokens and bought new Pis instead!
Everything will use default ports, and we won’t lock this down with encryption or authentication. So don’t put this into production. Well, I mean… would you?!
These are beautiful machines, built for learning. This cluster has turned out to be just that: a super helpful place to get into the real intricacies of Apache Druid without breaking the bank!
Bringing up Dependencies
First stop, we have to get some services running that Druid is going to use.
Metadata Database
After installing MySQL on your chosen node, see MySQL Metadata Store · Apache Druid to create the user and the Apache Druid schema.
I used an old Windows dust collector. I suspect MySql could run on a Raspberry Pi as well - the only reason I didn’t try is that I was running out of space on my desk!
Creating the MySql user the way the docs suggest limits connections to “localhost” via the user’s host name - and the authentication type needs to be “Standard” when using username / password (as we do below). If you created the user in a different mode, here’s a post on how to correct it: java - How to resolve Unable to load authentication plugin 'caching_sha2_password' issue - Stack Overflow
You might also need to use a slightly different GRANT command:
GRANT ALL PRIVILEGES ON druid.* TO 'druid';
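For completeness, the matching database and user creation is only a couple of statements. A minimal sketch (druid is Druid’s default database name, the password is a placeholder, and mysql_native_password gives the “Standard” username / password authentication mentioned above):

CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4;
-- no @host means 'druid'@'%', so the Druid nodes can connect remotely
CREATE USER 'druid' IDENTIFIED WITH mysql_native_password BY 'SuperSecretPassword';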
Deep Storage
Option 1: Minio on Raspberry Pi
You can follow this guide to set up Min.io on a Raspberry Pi, complete with external disk.
I didn’t set up SSL, but apart from that the only thing I did differently was to chmod the two downloaded binaries, as that’s not in the post:
chmod +x minio
chmod +x mc
FYI at one point I had to resize the disk - I found these great guides.
Suggestion: don’t knock the external drive off your desk and break it. But if you do…
Option 2: Minio on VirtualBox
Create an Ubuntu VM and, adapting the process above, install Min.io. I.e.:
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo ln -s /home/pi/minio /usr/bin/minio
Create a start.sh script like you would for the Pi install:
#!/bin/bash
export MINIO_ACCESS_KEY=SuperSecretAccessKey
export MINIO_SECRET_KEY=SuperSecretSecretKey
export MINIO_DOMAIN=[FQDN]
export MINIO_DISK_USAGE_CRAWL=off
minio server ~/data
And finally:
chmod +x start.sh
You may then opt to register it with systemd as per the article.
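If you do, a minimal unit file sketch might look like this (the paths and the ubuntu user are assumptions based on the install above - adjust to wherever you put minio and start.sh):

[Unit]
Description=MinIO object storage
After=network.target

[Service]
# runs the start.sh created above, which exports the keys and launches minio
User=ubuntu
WorkingDirectory=/home/ubuntu
ExecStart=/home/ubuntu/start.sh
Restart=always

[Install]
WantedBy=multi-user.target

Drop it into /etc/systemd/system/minio.service, then sudo systemctl enable --now minio.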
Confirm that Druid will be able to access Min.io by logging in to the Minio web page (port 9000) with the user you’re going to assign for Druid.
Check that Druid can connect to the database by attempting a connection to MySql from an appropriate machine using the username and password you will use for your Druid connection.
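Something like this from one of the Pis (or anywhere with the mysql client) proves the route is open - the hostname is a placeholder for your database machine, and the password is whatever you chose above:

# a trivial query against the druid schema is enough to prove connectivity
mysql -h [mysql-host] -u druid -p druid -e "SELECT 1;"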
Zookeeper
For this simple deployment, we will run Zookeeper on the Master Pi, started alongside the Druid master processes.
Create a Local Repo
To make pushing configurations to the machines simpler, let’s download the Apache Druid distribution locally. Then we can set our configuration files there and propagate them to the Pis using rsync.
From Druid | Download, click download and copy a mirror URL to the clipboard.
Now in a terminal window on your machine let’s create that local “repository”:
mkdir apps
cd apps
curl -O [tar.gz url]
tar -xzf [tar.gz filename]
You don’t have to create and use an apps directory, of course. That’s just me. And you can use wget if you’re not on a Mac, for example.
Configure Connections to Dependencies
Now let’s follow Clustered deployment · Apache Druid to set everything up in the configuration files that we have locally.
I used Sublime Text because I could open the folder (instead of individual files) and then just navigate the tree. I also dabbled with Atom which does the same job.
Get MySql Database Jars
First of all, we have to add the necessary connectors for MySql. As noted in the docs, the MySql Connection Jar files are not included with the Apache Druid distribution.
Go to https://mvnrepository.com/artifact/mysql/mysql-connector-java to find the Buildr tag for the “5.1” Java connector.
Copy the text between the apostrophes.
Open a terminal window and go into the Druid directory inside your local “repo”.
Run the command below, pasting the text you just copied into the -c switch:
sudo java -cp "lib/*" -Ddruid.extensions.directory="extensions" -Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" org.apache.druid.cli.Main tools pull-deps --no-default-hadoop -c [paste here]
JDK needs to be installed to run this command. See https://www.oracle.com/java/technologies/javase-jdk15-downloads.html to find out how to install it for your platform.
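For example, at the time of writing the 5.1-series coordinate was mysql:mysql-connector-java:5.1.49 - check the repository page for the version that’s current for you - so the filled-in command would look something like:

# example only - substitute the coordinate you copied
sudo java -cp "lib/*" -Ddruid.extensions.directory="extensions" -Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" org.apache.druid.cli.Main tools pull-deps --no-default-hadoop -c "mysql:mysql-connector-java:5.1.49"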
As noted in the docs, don’t forget to copy the newly downloaded files into the mysql-metadata-storage extensions folder. E.g.
cd extensions
cp mysql-connector-java/* mysql-metadata-storage/.
Set MySql Metadata Database Configuration
Now I went to ‘conf/druid/cluster/_common’ and opened the ‘common.runtime.properties’ file.
Following the docs, I provided my connection details.
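Mine ended up along these lines (the hostname and password are placeholders for your own, and the property names are straight from the MySQL metadata store docs - make sure mysql-metadata-storage is also in your druid.extensions.loadList):

druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://[mysql-host]:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=SuperSecretPassword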
Set Min.IO Deep Storage Configuration
I used Min.IO for both the Deep Storage “segments” and the ingestion task logs.
Connect to Min.IO & Set Segments Bucket
Thanks to the awesome Druid community, you can follow the guidance from Hagen Rother on how to “cheat” the S3 extension to work with Min.IO.
Now I didn’t test whether this was strictly necessary, but this great article says that we need to put a jets3t properties file on the class path. So I created mine as per the article, as follows:
s3service.s3-endpoint=druid-master-0.lan
s3service.s3-endpoint-http-port=9000
s3service.disable-dns-buckets=true
s3service.https-only=false
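With that in place, the deep storage section of common.runtime.properties just points the S3 extension at Min.IO. A sketch, assuming a bucket called druid-segments, druid-s3-extensions in the loadList, and the same keys as the Min.IO start.sh (bucket, keys and endpoint are all yours to substitute):

druid.storage.type=s3
druid.storage.bucket=druid-segments
druid.storage.baseKey=segments
druid.s3.accessKey=SuperSecretAccessKey
druid.s3.secretKey=SuperSecretSecretKey
# point the S3 client at Min.IO rather than AWS
druid.s3.endpoint.url=http://[minio-host]:9000
druid.s3.enablePathStyleAccess=true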
Min.IO for Ingestion Logs
Let’s set up indexing logs to go to Min.IO.
People often forget or ignore this step. Don’t! With this set up, I can see the logs in one place - much easier than hunting around in shell.
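The relevant lines in common.runtime.properties look something like this (the bucket and prefix names are just examples - use your own):

druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=druid-logs
druid.indexer.logs.s3Prefix=indexing-logs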
Zookeeper
Finally, set the zookeeper connection as per the docs. Since Zookeeper will run on the Master Pi, that’s just the master’s hostname:
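druid.zk.service.host=druid-master-0.lan
druid.zk.paths.base=/druid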
Druid Configuration
Having set up the common.runtime.properties, now tune the configuration for each process.
Pis are tiny machines. The configuration here is not optimised - rather it’s about getting the service up and running. Tuning will be for another day!
First, ensure the druid.host line has been commented out so that each node uses the local machine name when it advertises itself as part of the cluster.
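In other words, in each node’s common.runtime.properties the line just gets a # in front of it, something like:

# druid.host=localhost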
Data
Historical
Browse to the conf/druid/cluster/data/historical configuration files.
Open jvm.config and reduce the Java Heap and direct memory:
-Xms2g
-Xmx2g
-XX:MaxDirectMemorySize=500m
This configuration leaves 5GB for the Middle Manager and its tasks. My goal for this - as with everything Raspberry Pi (!) - is to create a learning environment. We know that a Middle Manager Peon, which will also run on this node, requires 1GB of RAM at the outset. If we look purely at the memory, we’d think that this allows us to run six workers. But this exceeds the number of cores! So to be safe, I turned this down to 4 workers, and will come back to providing more memory on the peons later… after coffee…
Now open the runtime.properties file.
Reduce the numThreads, the processing buffer size, and the merge buffers and threads:
druid.server.http.numThreads=15
druid.processing.buffer.sizeBytes=100MiB
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
Next, set the local segment cache size to the amount of space we want to use on the memory cards:
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":"200g"}]
Finally, I turned off results caching. That’s purely because I want to play around with all that later.
druid.cache.sizeInBytes=0
Middle Manager and Middle Manager Peons (Tasks)
Browse to conf/druid/cluster/data/middleManager.
The Middle Manager’s JVM properties can be left at the default 128MB heap.
Open the runtime.properties file.
Locate druid.indexer.runner.javaOpts. This config line sets the JVM heap and direct memory size for each Middle Manager Peon (aka Task) - the more memory each of these consumes, the fewer workers we can run inside the memory of a Pi. Conversely, the less memory each one has, the more workers we can have but - critically - the more likely they are to OOM in the face of big data.
Amend the configuration to give each task 500m of heap and 500m of direct memory:
druid.indexer.runner.javaOpts=-server -Xms500m -Xmx500m -XX:MaxDirectMemorySize=500m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
Finally, set the number of workers available on this node:
druid.worker.capacity=4
I skipped the specific section in the tuning guide on connection pooling. Again - that’ll be for play time.
Master
Now navigate to conf/druid/cluster/master/coordinator-overlord.
The docs call for a heap size that is kinda close to the Broker which will run on our query node.
“4G to 8G is a good starting point for a small or medium cluster (~15 servers or less)”
Because this is Pi land, I went into jvm.config and reduced the heap to 2GB.
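So the heap lines in that jvm.config end up as below (everything else in the file I left at the defaults):

-Xms2g
-Xmx2g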
Query
Broker
Now browse to conf/druid/cluster/query/broker.
Open the jvm.config file.
Reduce the heap to 2GB (down from 12 in the defaults) and Direct Memory to 1GB (down from 6).
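Which, in jvm.config terms, means something like:

-Xms2g
-Xmx2g
-XX:MaxDirectMemorySize=1g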
Router
We will leave the router with the default heap of 1GB and direct memory of 128MB.
Operating System Setup
We start by flashing our memory cards for all the Pis. Follow Ubuntu’s own published process to flash the memory cards with Ubuntu. For us, that means doing this five times, once for each Pi.
So long as it runs Java, you can use any Linux incarnation you like. I went with the latest Ubuntu Server LTS because it sounded nice.
Follow the Ubuntu process to set up networking.
Take care when setting up wifi… I got it wrong so many times…
If you forget, you could use nmcli to set up wifi if you have a handy ethernet connection:
sudo apt install network-manager
sudo nmcli d wifi connect <yourWirelessNetwork> password <yourNetworkKey>
Once you have a working network-config file, I suggest copying it out to your desktop so you can copy it back to all the other cards. #efficiency
Next, you’ll need to boot your machine and log in, as per the guide. At this point, change the hostname.
sudo nano /etc/hostname
sudo reboot 0
It’s a good idea to use standard naming, like below, for your hostnames:
Node Type | Name Pattern
Data Server | druid-data-n
Query Server | druid-query-0
Master (Coordinator / Overlord) | druid-master-n
Object Store | minio-0
Sometimes, I wasn’t able to connect over SSH as given in the Ubuntu post - even if I waited for cloud-init to complete. But no matter: I plugged in a keyboard and connected an HDMI monitor. Nice.
Use something like Termius to do your OS work. You can create groups, cache credentials, and copy and paste commands easily.
Keep an HDMI cable and monitor handy. When things happened that I didn’t expect, I could quickly get onto the Pi and see what the hell was going on directly.
At this point my desk got kinda messy…!!!
Now, with every Pi connected to the LAN and SSH working great, let’s update as you always should!
sudo apt update && sudo apt upgrade
sudo reboot 0
My VM is running on an old Windows 10 box with a cute quad core processor and 4GB RAM. It used to be my pride and joy. Now, really only a few years later, these credit card-sized machines are noticeably faster…
Operating system dependencies
We must install Java and Python:
sudo apt install openjdk-8-jre-headless && sudo apt install python
Check that the correct versions are being used by your user:
java -version
python --version
You will see the installed Java 8 and Python versions reported.
Log out, but leave the Pi running.
Distribute binaries and configs
We can use rsync to push the distribution to the various home folders on each Pi. This makes pushing out changes super-easy.
Create a shell script in your apps directory that you can execute - adapt the following to the appropriate hostnames:
rsync -azq apache-druid-0.20.0 ubuntu@druid-master-0.lan:/home/ubuntu
rsync -azq apache-druid-0.20.0 ubuntu@druid-data-0.lan:/home/ubuntu
rsync -azq apache-druid-0.20.0 ubuntu@druid-data-1.lan:/home/ubuntu
rsync -azq apache-druid-0.20.0 ubuntu@druid-data-2.lan:/home/ubuntu
rsync -azq apache-druid-0.20.0 ubuntu@druid-query-0.lan:/home/ubuntu
Use chmod to make your .sh executable.
With all your Pis up and running, execute your script.
Check that this is working by monitoring the rsync progress on each node: you will see the directory created in the target location.
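A quick check from your machine works too - for instance (the hostname is one of yours, the path as per the script above):

ssh ubuntu@druid-data-0.lan ls /home/ubuntu
# you should see apache-druid-0.20.0 listed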
Start Druid!
Master first
On the Master, let’s start Druid together with Zookeeper. Ssh in, change to your distro folder, then run the combined master-with-Zookeeper script from the clustered deployment guide:
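bin/start-cluster-master-with-zk-server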
Check that this is working by monitoring the druid schema in MySql - once the Coordinator and Overlord are up, Druid creates its metadata tables there.
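For example (from any machine with the mysql client - the table names are per the metadata storage docs):

mysql -h [mysql-host] -u druid -p druid -e "SHOW TABLES;"
# expect tables such as druid_segments, druid_rules and druid_tasks to appear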
Now the query node...
Dead simple again: ssh in, change to your distro folder, then run bin/start-cluster-query-server.
Check that this is working by navigating to port 8888 on the query node. You should see no errors that might indicate an issue with Zookeeper communication.
Then switch to the Services view and check that all four services are now registered.
And now the data...
Finally, let’s bring up the Data nodes. Dead simple again - this time go into the distro folder on all our target data Pis, and run bin/start-cluster-data-server.
Check that the ZK communication is working well by monitoring the Services tab, watching for both the historical process and the middlemanager processes with those all-important worker slots.
Load something!
Now you can follow the quickstart guide and do an ingestion from Wikipedia!
Conclusion
You may have opened this thinking, well, that sounds insane. And you would kind of be right! But there are serious reasons for having done this insane thing in the first place!
Raspberry Pi are cheap. What better way to learn and play around with the various moving parts of an Apache Druid cluster than to use commodity hardware!
Raspberry Pi are cheap. Yes, we said that before! But it does also mean that, if you want to start simple (nano deployment) and then step up to a cluster (one of each node) and then later scale out and see what happens - you can! And without breaking the bank.
Raspberry Pi are physical things you can pick up. They are, unlike a cloud service, in front of you. There are flashing lights. They generate heat. They look cool.
Sometimes big data is too fast: it’s been wonderful having this set up to see exactly what is going on - to allow my tiny brain to consume every little moving part of Apache Druid. If you do have a go, PLEASE write!