Apache Camel and JBoss Data Grid Integration

Apache Camel is a framework for developing integration solutions. JBoss Data Grid is a distributed key/value cache. Both products are part of the Red Hat middleware portfolio: JBoss Data Grid as a standalone product and Apache Camel as part of the JBoss Fuse integration solution.

The integration between the two products is done through a supported Apache Camel component named camel-jbossdatagrid.

Here is an example of how to use this component to connect to one or more remote JBoss Data Grid instances. First, import the component dependency in your pom.xml.


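The Maven coordinates are the same as the ones used in the osgi:install command later in this post; adjust the version to match your JBoss Fuse release:

```xml
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jbossdatagrid</artifactId>
    <version>2.17.0.Final-redhat-2</version>
</dependency>
```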
This dependency isn’t in the default Maven repositories, so you must configure the Red Hat Maven repositories. This page explains how to do it: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html-single/development_guide/#use_the_maven_repository

Now you must develop a factory that instantiates a RemoteCacheManager. This class is part of the Hot Rod API; Hot Rod is the default protocol used to connect to remote JBoss Data Grid instances.

package com.angelogalvao.datagrid.example;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class RemoteCacheManagerFactory {

   private RemoteCacheManager cacheManager;
   private String[] hosts;

   public RemoteCacheManagerFactory(String hosts) {
      if (hosts == null)
         throw new IllegalArgumentException("Hosts is null");

      // Hosts are passed as a single string, e.g. "host1:11222;host2:11222"
      this.hosts = hosts.split(";");
   }

   public RemoteCacheManager getRemoteCacheManager() {
      if (cacheManager != null)
         return cacheManager;

      // Create the RemoteCacheManager, registering each host:port pair as a server
      ConfigurationBuilder configurationBuilder = new ConfigurationBuilder();

      for (String host : hosts) {
         String[] hostConfig = host.split(":");
         configurationBuilder.addServer()
                             .host(hostConfig[0])
                             .port(Integer.parseInt(hostConfig[1]));
      }

      cacheManager = new RemoteCacheManager(configurationBuilder.build());
      return cacheManager;
   }
}

Next, configure the factory component in the Spring/Blueprint context.

<bean class="com.angelogalvao.datagrid.example.RemoteCacheManagerFactory" id="remoteCacheManagerFactory">
    <argument value="localhost:11222"/>
</bean>

<bean id="cacheManager" factory-ref="remoteCacheManagerFactory" factory-method="getRemoteCacheManager"/>

Finally, create the Apache Camel routes that access JBoss Data Grid. The example below has two routes: one to GET the value and another to PUT it.

<camelContext id="example-camel-context" xmlns="http://camel.apache.org/schema/blueprint">
   <route id="rota-datagrid-get" autoStartup="true">
      <from uri="timer://foo?fixedRate=true&amp;period=5000"/>
      <setHeader headerName="CamelInfinispanKey">
         <constant>TEST</constant>
      </setHeader>
      <to uri="infinispan://localhost?cacheContainer=#cacheManager&amp;cacheName=redhat-test&amp;command=GET"/>
      <log message="TEST VALUE: ${body}"/>
   </route>
   <route id="rota-datagrid-put" autoStartup="true">
      <from uri="timer://foo?fixedRate=true&amp;period=5000"/>
      <setHeader headerName="CamelInfinispanKey">
         <constant>TEST</constant>
      </setHeader>
      <setHeader headerName="CamelInfinispanValue">
         <constant>REDHAT</constant>
      </setHeader>
      <to uri="infinispan://localhost?cacheContainer=#cacheManager&amp;cacheName=redhat-test&amp;command=PUT"/>
      <log message="PUT - TEST is REDHAT"/>
   </route>
</camelContext>

If you are using JBoss Fuse, don’t forget to enable the camel-jbossdatagrid component by running this command in the JBoss Fuse console:

JBoss Fuse...> osgi:install -s mvn:org.apache.camel:camel-jbossdatagrid:2.17.0.Final-redhat-2

Tuning JBoss Data Grid / Infinispan TCP connection

If you are struggling to extract performance from JBoss Data Grid / Infinispan, there is a simple task you can do to boost the TCP connection: increase the TCP buffer sizes.

On the JBoss Data Grid host, edit the /etc/sysctl.conf file:

$ vim /etc/sysctl.conf

and add the following content at the end of the file:
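For example (the exact values depend on your network and workload; the ones below are just illustrative):

```
# Increase the maximum socket receive and send buffer sizes (illustrative values)
net.core.rmem_max = 26214400
net.core.wmem_max = 26214400
```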

Now apply the changes:

$ sudo sysctl -p

and confirm them by running the commands:

$ sysctl -a | grep net.core.rmem_max
$ sysctl -a | grep net.core.wmem_max
Now, edit the jgroups subsystem on the JBoss Data Grid server by running the following commands. First, open the command line interface:

$ $JDG_HOME/bin/cli.sh --connect=localhost:9999

Then edit the recv_buf_size and send_buf_size properties of the tcp stack:
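A sketch of what those CLI commands might look like, assuming a jgroups subsystem where transport properties are child resources (the resource path and the values vary between JDG versions, so check the management model of your version; the buffer values below are illustrative):

```
/subsystem=jgroups/stack=tcp/transport=TRANSPORT/property=recv_buf_size:add(value=25165824)
/subsystem=jgroups/stack=tcp/transport=TRANSPORT/property=send_buf_size:add(value=25165824)
```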
Restart the server. That’s it! 😉

Getting started with Container Development Kit (CDK)

In simple words, Container Development Kit, or just CDK, is a platform that enables you to run OpenShift locally. OpenShift is an open source container platform created by Red Hat that relies on base container technologies like Docker and Kubernetes.

The CDK installation procedure is not simple and varies according to the operating system used. In this post I’ll show how to install CDK in a Linux environment, more specifically on the Fedora 25 operating system.

All you need to install is:

  1. Docker;
  2. Virtualization packages;
  3. Vagrant;
  4. Container Development Kit;
  5. OCP client tools.

Installation Instructions


Docker

Docker is available in Fedora’s software package repositories, so all you need to do to install it is run the following command:

$ sudo dnf install -y docker

After the command completes, you need to enable and start Docker as a service:

$ sudo systemctl enable docker.service
$ sudo systemctl start docker.service

To check whether the Docker daemon is running:

$ sudo docker info

The Docker installation varies from system to system, so if you are not using a Fedora distribution, check the Docker documentation to find out how to install it on your own system.

Virtualization Packages

You need a virtualization tool to run Vagrant, so you can run CDK. You can choose VMware, VirtualBox, or libvirt as your virtualization technology. In this post I chose libvirt, because the packages are already available in the Fedora repositories. To install libvirt on your host, just run the following command:

$ sudo dnf install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm

You may have problems with some Ruby gems, like nokogiri and ruby-libvirt, so to prevent that from happening, install the following packages too:

$ sudo dnf install libxslt-devel libxml2-devel libvirt-devel libguestfs-tools-c ruby-devel gcc

Start and enable libvirt as a service:

$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd


Vagrant

Vagrant is a tool for building and managing virtual machine environments in a single workflow. It is the right tool to simulate a production environment on the developer machine.

To install it just run the command:

$ sudo dnf install -y vagrant

You also need to install the required Vagrant plugins:

$ sudo vagrant plugin install vagrant-libvirt vagrant-registration vagrant-service-manager vagrant-sshfs

To check that everything is okay with Vagrant:

$ vagrant global-status

Container Development Kit

All the previous steps were preparation for this moment. Before running the installation commands, you need to download the Red Hat Container Tools and the RHEL 7.3 box for libvirt from the Red Hat Developer Site CDK download page. At the time of this writing, the current version is 2.4.

The first thing to do is install the RHEL 7.3 Vagrant box on your machine:

vagrant box add --name cdkv2 rhel-cdk-kubernetes-7.*.x86_64.vagrant-libvirt.box

During the installation process, you’ll be prompted for your credentials to register RHEL. Just enter the credentials that you created to access the Red Hat Developer Site.

To see if the box was installed:

vagrant box list

Now unzip the CDK file that you just downloaded into a directory that I’ll call $CDK_HOME, and change your working directory to:

$ cd $CDK_HOME/cdk/components/rhel/rhel-ose/

Now, the only thing you need to do is start the CDK box:

$ vagrant up

Now you have an OpenShift environment working on your local machine. 🙂

Go to this address to open the web console:

The preconfigured users are:

  • User: openshift-dev, Password: devel – to log in as a developer;
  • User: admin, Password: admin – to log in as an administrator.

OCP Client Tools

Finally, the last tool you need to install is the oc client tool, so you can interact with OpenShift from a terminal window. All you need to do is use Vagrant for that. Just run:

$ vagrant service-manager install-cli openshift

Now you can log in to OpenShift:

$ oc login


Have fun 😉

Getting started with JBoss A-MQ 7

JBoss A-MQ 7 is the new version of the messaging family of middleware products from Red Hat, based on the Apache ActiveMQ Artemis upstream project. This new version is the result of Red Hat donating its own messaging product, HornetQ, to the Apache ActiveMQ community in a joint effort to create the new version of the already acclaimed ActiveMQ broker.

To get started with the JBoss A-MQ 7 messaging broker, first create an account on the Red Hat Developers program web site, download the latest release of the version 7 branch of the product, and unzip it into any folder you want. I’ll call this folder $ARTEMIS_HOME.

There are 3 steps to start using the JBoss A-MQ 7 broker:

  1. Create the broker;
  2. Start the broker;
  3. Use the broker.

Create the broker

When you unzip the JBoss A-MQ 7 package, there is no broker installed or configured in the installation directory, not even a default one. The package contains only scripts, libraries, examples, and other support files used by the product; it does not contain a broker at all. So the first thing you need to do is create a broker. To do that, follow these steps:

Open a terminal window, go to the $ARTEMIS_HOME folder, and run this command (I’m using Linux bash style here; if you are using a Windows terminal, just convert the command to Windows syntax):

$ bin/artemis create --user admin --password pass --role admin --allow-anonymous /opt/artemis/broker01

This command creates a broker in the /opt/artemis/broker01 folder, with a default admin user that has the admin role and pass as its password. This broker also accepts anonymous connections from localhost. I’ll call the folder where the broker is installed $ARTEMIS_INSTANCE.

The options I passed to create the broker are only the required ones. There are a lot of other options you can use to better control the broker creation process. To find out which options exist, run this command from the $ARTEMIS_HOME folder:

$ bin/artemis help create

The $ARTEMIS_INSTANCE folder contains these folders:

  • bin: contains the broker scripts;
  • data: contains the runtime persisted data, like sent messages, queues, etc.;
  • etc: contains the broker configuration files;
  • log: contains the log files generated by the broker;
  • tmp: contains temporary files.

So, now you have a broker installed.

Start the broker

To start the broker, you only need to run this command from the $ARTEMIS_INSTANCE folder:

$ bin/artemis run

Be aware that you must run this command from the $ARTEMIS_INSTANCE folder, not the $ARTEMIS_HOME folder. If you run it from $ARTEMIS_HOME, it will not work.

After you run this command, a console log will appear and the broker will be running.

Use the broker

Now it’s time for you to start using your broker. There are a lot of client libraries and protocols you can use to start sending and receiving messages from the broker, like the default Java JMS clients, AMQP clients, STOMP, etc.

For now, I’ll only use the default test-only scripts to send and receive simple text messages from the broker instance.

To send 1000 messages to the broker, I only need to run this command from the $ARTEMIS_INSTANCE folder:

$ bin/artemis producer

The output of this command will be something like this:

Producer ActiveMQQueue[TEST], thread=0 Started to calculate elapsed time ...

Producer ActiveMQQueue[TEST], thread=0 Produced: 1000 messages
Producer ActiveMQQueue[TEST], thread=0 Elapsed time in second : 6 s
Producer ActiveMQQueue[TEST], thread=0 Elapsed time in milli second : 6356 milli seconds

To consume the 1000 messages that I sent, run this command:

$ bin/artemis consumer

The output of this command will be something like this:

Consumer:: filter = null
Consumer ActiveMQQueue[TEST], thread=0 wait until 1000 messages are consumed
Consumer ActiveMQQueue[TEST], thread=0 Received test message: 0
Consumer ActiveMQQueue[TEST], thread=0 Received test message: 1
...
Consumer ActiveMQQueue[TEST], thread=0 Received test message: 999
Consumer ActiveMQQueue[TEST], thread=0 Consumed: 1000 messages
Consumer ActiveMQQueue[TEST], thread=0 Consumer thread finished
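The same send/receive flow can also be driven from your own Java JMS client. Here is a minimal sketch, assuming the artemis-jms-client library is on the classpath and the broker is listening on its default tcp://localhost:61616 acceptor (the queue name TEST matches the one used by the test scripts above):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class JmsExample {

   public static void main(String[] args) throws Exception {
      ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = factory.createConnection()) {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         Queue queue = session.createQueue("TEST");

         // Send a text message to the TEST queue...
         session.createProducer(queue).send(session.createTextMessage("hello"));

         // ...and receive it back (wait up to 5 seconds).
         TextMessage received = (TextMessage) session.createConsumer(queue).receive(5000);
         System.out.println("Received: " + received.getText());
      }
   }
}
```

Note that this example needs a running broker to work, so start the broker before running it.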

Management Console

One last thing that I want to show you: JBoss A-MQ 7 has a management console with a lot of visual tools to help you manage the broker from a web interface, without needing the broker administration client libraries. The console is based on the Hawt.io project and is only available in the JBoss A-MQ 7 product, not in the upstream Artemis version. By default it can be accessed only from localhost, at http://localhost:8161/hawtio/.

If you want to access the console from outside localhost, you need to edit the $ARTEMIS_INSTANCE/etc/bootstrap.xml file and change the web bind address to the IP address on which you want to expose the console.
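For example, changing the bind attribute of the web element to 0.0.0.0 exposes the console on all interfaces (a sketch; the surrounding elements may differ between versions, so check the file shipped with your release):

```xml
<!-- $ARTEMIS_INSTANCE/etc/bootstrap.xml (fragment) -->
<web bind="http://0.0.0.0:8161" path="web">
   ...
</web>
```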

Now you have a broker that is created, running, and sending and receiving messages. Have fun 🙂