Cluster your service with the ConfigurationAdmin and Apache Karaf Cellar, using the camunda BPM engine as an example
Introduction
Initially, this was supposed to be a short introduction to the topic in the title and an opportunity for me to get to know Apache Karaf Cellar. Unfortunately, I ran into some unexpected problems and couldn't finish until today, so this is mostly going to be a post about those problems. At the end you'll find a TL;DR if you just want to get started.
Short introduction to the Configuration Admin Service
From the OSGi wiki: "Configuration Admin is a service which allows configuration information to be passed into components in order to initialise them, without having a dependency on where or how that configuration information is stored."(http://wiki.osgi.org/wiki/Configuration_Admin)
Basically, you write key-value properties and a service which can use them. All the "magic" is done by the Configuration Admin service, which is part of the OSGi Compendium Specification; it will also persist the configuration somewhere for you. A good introduction can be found here.
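To illustrate the mechanism, here is a minimal sketch (not taken from the camunda project; the PID and property names are made up) of a ManagedService that receives its configuration through the Configuration Admin:
import java.util.Dictionary;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

// Register this class as a ManagedService with the service property
// "service.pid" set to e.g. "com.example.greeter"; the Configuration Admin
// will then call updated() with the stored properties for that PID.
public class GreeterConfiguration implements ManagedService {

  private volatile String greeting = "Hello";

  public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
    if (properties == null) {
      // no configuration stored yet, keep the default
      return;
    }
    Object value = properties.get("greeting");
    if (value == null) {
      throw new ConfigurationException("greeting", "property is missing");
    }
    greeting = value.toString();
  }
}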
Short introduction to Apache Karaf Cellar
Taken from the Cellar website: "Cellar is a clustering solution for Apache Karaf powered by Hazelcast. Cellar allows you to manage a cluster of Karaf instances, providing synchronisation between instances."(http://karaf.apache.org/index/subprojects/cellar.html)
I liked the idea of providing a service on one Karaf instance and seeing it appear on every instance in the cluster. Especially the combination with a ManagedServiceFactory seemed like a great idea.
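In case the interface is new to you, here is a hedged sketch of what a ManagedServiceFactory looks like (the class and createEngine() are placeholders, not the actual camunda implementation): it receives one updated() call per factory configuration and can create one service instance for each.
import java.util.Dictionary;
import java.util.HashMap;
import java.util.Map;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedServiceFactory;

public class ProcessEngineFactorySketch implements ManagedServiceFactory {

  // one engine per factory configuration, keyed by the generated PID
  private final Map<String, Object> engines = new HashMap<String, Object>();

  public String getName() {
    return "Example process engine factory";
  }

  public void updated(String pid, Dictionary<String, ?> properties) throws ConfigurationException {
    deleted(pid); // replace a possibly existing instance for this PID
    engines.put(pid, createEngine(properties));
  }

  public void deleted(String pid) {
    engines.remove(pid);
  }

  private Object createEngine(Dictionary<String, ?> properties) {
    // placeholder for building the actual service from the properties
    return new Object();
  }
}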
To read more about Cellar see here.
Set up your Apache Karaf
For my example I want to use the ManagedProcessEngineFactory from the 1.1.0-SNAPSHOT version of camunda BPM OSGi. You can just clone the repository on GitHub and build it with mvn install.
Because I am quite lazy, I started two Karaf instances on my laptop. If you want to do that, too, you'll have to change some port numbers for the second Karaf instance. First, the ports in etc/org.apache.karaf.management.cfg:
rmiRegistryPort
rmiServerPort
Second, the SSH port in etc/org.apache.karaf.shell.cfg (forgetting this caused me some trouble).
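For example, the second instance's files could contain something like this (the defaults are 1099, 44444 and 8101; the new port numbers are arbitrary):
# etc/org.apache.karaf.management.cfg
rmiRegistryPort = 1100
rmiServerPort = 44445
# etc/org.apache.karaf.shell.cfg
sshPort = 8102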
Next we have to install Cellar on each Karaf instance. Because we want to use the current version, we'll use version 3.0.1 of Cellar. You can find the general installation guide here. Basically, you just have to run the following from the Karaf console:
feature:repo-add mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.1/xml/features
feature:install cellar
If you somehow plan to build Cellar yourself, I recommend commenting out the "samples" module in the root POM. All your Karaf instances should discover each other automatically. Now we have to install and share the camunda feature (or whichever you want to use) in the cluster.
Install and share a feature
To do this task we have two choices. One would be to activate the listeners in every Karaf instance and use the "basic" commands. For that, you'll have to set the bundle listener value in org.apache.karaf.cellar.node.cfg to true (we won't need the other ones in this example):
bundle.listener = true
config.listener = false
feature.listener = false
The other choice would be to use the cluster:* commands. Both will (should) produce the same result, so choose whichever you prefer.
As I mentioned, if you prefer the first option (listeners), you can just install everything as usual because the cluster synchronizes every change:
feature:repo-add mvn:org.camunda.bpm.extension.osgi/camunda-bpm-karaf-feature/1.1.0-SNAPSHOT/xml/features
feature:install camunda-bpm-karaf-feature-minimal
(Please note that you'll need my example project installed locally to use it)
(Also please note that there is currently a bug in the camunda feature.xml. You'll have to change the version of camunda-connect-core to 1.0.0-alpha3 to make it work)
If you want to use the "cluster-versions" of those commands, you have to type:
cluster:feature-repo-add default mvn:org.camunda.bpm.extension.osgi/camunda-bpm-karaf-feature/1.1.0-SNAPSHOT/xml/features
cluster:feature-install default camunda-bpm-karaf-feature-minimal
Those commands work like the basic ones but you always have to provide a group.
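The group used above ("default") is the one every node joins initially; if you are unsure which groups exist in your cluster, you can list them:
cluster:group-list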
You should see that the feature got installed on both Karaf instances (check e.g. with feature:list | grep -i camunda). Now we need a database.
Setting up the database
I gotta admit, this is where my first problems occurred, starting out funny and ending with me being a little annoyed. My first problem was that I tried to use the in-memory version of H2. This won't work because, logically, every Karaf instance runs in its own JVM. So, to let multiple applications access the database, I started H2 in server mode (see here for more information):
java -cp h2*.jar org.h2.tools.Server -tcp
The Karaf instances can then connect via the URL jdbc:h2:tcp://localhost/~/test.
The next problem was that, because of some exceptions, the ProcessEngines started and stopped in a seemingly random order. Having the databaseSchemaUpdate property set to create-drop caused problems with tables not being present because of the random dropping/creating. I recommend creating the tables yourself (here are the SQLs).
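Assuming you downloaded the create scripts, you can run them against the H2 server with H2's RunScript tool (the script file name below is just a placeholder for whatever the download is called):
java -cp h2*.jar org.h2.tools.RunScript -url jdbc:h2:tcp://localhost/~/test -user sa -script create-engine.sql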
This didn't solve all of my database problems. I suspected H2 of not being able to handle the same user logging in twice (which it can, as I know by now). After that I switched to MySQL.
Setting up MySQL in Karaf
MySQL is a little bit more complicated to set up than H2 because we have to create a proper datasource. First, we need to install Apache Karaf DataSources:
feature:install jdbc
Next, create the datasource:
jdbc:create -u sa -p sa -url jdbc:mysql://localhost:3306/test -t MySQL test
The datasource create command has to be executed on both Karafs because the datasource-*.xml that'll be created in the deploy directory won't be copied. For the ProcessEngine to be able to find the MySQL datasource it needs a JNDI name. To give a datasource a JNDI name we need Apache Karaf Naming.
feature:install jndi
Now the datasource will automatically get a JNDI name (check with jndi:names). If you don't see the jndi:* commands, you'll have to install the feature manually on the second Karaf.
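Assuming you named the datasource test like above, the entry to look for in the jndi:names output is the same name we'll later put into the configuration:
osgi:service/jdbc/test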
Finally we need the MySQL connector jar. We can find it here. Simply drop the jar into the deploy directory.
The MySQL database works fine for me so far. Let's take a look at the configuration file.
The configuration file
When I started this "experiment" I thought that making use of the etc/ directory in Karaf would be a good idea, but now I gotta say: please, don't try to do this file-based. I tried a lot of combinations and it didn't work out. The closest I got was the configuration arriving on both Karafs but only one engine being created. Jean-Baptiste and Achim really tried to help me on the mailing list. Nevertheless, I couldn't get it running. You are free to try.
Karaf watches the etc/ directory for configuration files. To deploy one for the ManagedProcessEngineFactory you'd have to name it org.camunda.bpm.extension.osgi.configadmin.ManagedProcessEngineFactory-1.cfg.
I switched to a bundle which contains the configuration.
The configuration bundle
As mentioned before, for a ManagedServiceFactory to create a service it needs one or more configurations. We'll use a simple version of the configuration:
databaseSchemaUpdate=false
jobExecutorActivate=true
processEngineName=TestEngine
databaseType=mysql
dataSourceJndiName=osgi:service/jdbc/test
If you want to try H2, the configuration would look like this:
databaseSchemaUpdate=false
jdbcUrl=jdbc:h2:tcp://localhost/~/test
jobExecutorActivate=true
processEngineName=TestEngine
jdbcUsername=sa
jdbcPassword=sa
To keep it simple, the bundle just uses a BundleActivator, gets hold of the Configuration Admin and provides the properties, like this:
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class Activator implements BundleActivator {

  public void start(BundleContext context) throws Exception {
    ServiceReference ref = context.getServiceReference(ConfigurationAdmin.class.getName());
    ConfigurationAdmin admin = (ConfigurationAdmin) context.getService(ref);
    String pid = "org.camunda.bpm.extension.osgi.configadmin.ManagedProcessEngineFactory";
    // null location -> the configuration is not bound to this bundle
    Configuration configuration = admin.createFactoryConfiguration(pid, null);
    Hashtable<String, Object> properties = new Hashtable<String, Object>();
    properties.put("databaseSchemaUpdate", "false");
    properties.put("jobExecutorActivate", "true");
    properties.put("processEngineName", "TestEngine");
    properties.put("databaseType", "mysql");
    properties.put("dataSourceJndiName", "osgi:service/jdbc/test");
    // hand the properties to the Configuration Admin, which passes them
    // on to the ManagedServiceFactory
    configuration.update(properties);
  }

  public void stop(BundleContext context) throws Exception {
    // nothing to clean up
  }
}
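A note on the second argument of createFactoryConfiguration: passing null as the location leaves the configuration unbound, so the Configuration Admin is allowed to deliver it to the ManagedProcessEngineFactory even though that factory lives in a different bundle than the Activator.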
Just drop the bundle into the deploy directory; thanks to the activated bundle listener it should then be distributed to all Karafs.
You should see that the configuration got shared, too. To check just run this command: config:list "(service.pid=org.camunda.bpm.extension.osgi.configadmin.ManagedProcessEngineFactory*)"
TL;DR
- change port numbers in etc/org.apache.karaf.management.cfg and etc/org.apache.karaf.shell.cfg if you run two instances on one machine
feature:repo-add mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.1/xml/features
feature:install cellar
- decide if you want to activate the listeners or use the cluster: commands for the following steps
git clone https://github.com/camunda/camunda-bpm-platform-osgi.git
- mvn install the project
feature:repo-add mvn:org.camunda.bpm.extension.osgi/camunda-bpm-karaf-feature/1.1.0-SNAPSHOT/xml/features
feature:install camunda-bpm-karaf-feature-minimal
- set up MySQL database
feature:install jdbc
- drop MySQL connector jar into deploy directory
jdbc:create -u sa -p sa -url jdbc:mysql://localhost:3306/test -t MySQL test
feature:install jndi
- create configuration bundle and drop it into the deploy directory. Configuration:
databaseSchemaUpdate=false
jobExecutorActivate=true
processEngineName=TestEngine
databaseType=mysql
dataSourceJndiName=osgi:service/jdbc/test
And you're good to go.
So, this was my trip into the Karaf Cellar world. I hope I could prove the feasibility to you. I'll leave the practical consequences as an exercise to the reader ;-)