Tag Archives: mqtt

IoT reference architecture for the edge domain

During my talk at Thingmonk last week, I showed a snippet of an IoT reference architecture that I developed covering the edge gateway domain.

Since I’ve yet to find amongst the many published IoT reference architectures any that decompose the edge and because several people asked for a copy, I thought I’d share it here for people to reuse. It’s simple but hopefully useful, and provides a starting point for proper design of solutions that exploit the edge.

Architecture overview diagram


Explanation of capabilities

The reference architecture shown above describes the key set of technology and organisational capabilities required in the deployment of edge applications.

Technology

  • Physical security mitigates the risk of tampering when devices are deployed in the field.
  • Device platform security protects the software platform, application code and data from unauthorised access.
  • The Device provides the hardware platform for application code and logic deployed at the edge.
  • Analytics models describe deployed analytics logic consumed by analytics runtimes in the edge software platform.
  • A Local Area Network provides the mechanism for the device to communicate with locally deployed applications, sensors and actuators, e.g. Bluetooth Low Energy, Zigbee etc.
  • Local Monitoring and Management tools enable administration and break/fix by local technicians servicing the installed hardware and software.
  • A Remote Monitoring and Management agent on the device enables diagnostics, monitoring and control of edge devices from the centre. This would be the preferred mode of operation since it does not require physical access to the deployed system.
  • Sensors and Actuators provide the link between the assets themselves and the device.
  • Assets are the entities monitored and controlled by the edge installation — i.e. the “things” themselves.
  • Local applications support business operations conducted in the field.
  • An Application Runtime provides the programming environment for application logic deployed at the edge, e.g. NodeRED.
  • Control code is application logic deployed in the Application Runtime to orchestrate interactions with sensors and centre.
  • Sensor SDKs (Software Development Kit) facilitate development of sensor-driven applications to run at the edge by providing developer-friendly programmatic access to the sensor hardware.
  • Back-end SDKs facilitate communication with the centre by providing developer-friendly programmatic access to services at the centre and/or provided by third-parties.
  • A Wide Area Network connects the device to the data centre, for example via a cellular network or via wifi to the corporate network.
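The control-code, sensor SDK and back-end SDK capabilities above can be sketched in plain Java. Everything here is hypothetical and invented purely for illustration: the Sensor and CentreClient interfaces, the topic names and the alert threshold stand in for whatever a real sensor SDK and back-end SDK would actually provide.

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeControlSketch {

    // Hypothetical sensor abstraction, standing in for a real sensor SDK.
    interface Sensor {
        double read();
    }

    // Hypothetical back-end SDK abstraction for sending data to the centre.
    interface CentreClient {
        void publish(String topic, String payload);
    }

    // Control code: poll the sensor, forward the reading to the centre,
    // and raise an alert if it exceeds a threshold.
    static void pollOnce(Sensor sensor, CentreClient centre, double alertThreshold) {
        double value = sensor.read();
        centre.publish("edge/data", Double.toString(value));
        if (value > alertThreshold) {
            centre.publish("edge/alerts", "threshold exceeded: " + value);
        }
    }

    public static void main(String[] args) {
        List<String> sent = new ArrayList<>();
        Sensor fake = () -> 42.5;  // stand-in for real hardware
        CentreClient recorder = (topic, payload) -> sent.add(topic + "=" + payload);
        pollOnce(fake, recorder, 40.0);
        System.out.println(sent);  // [edge/data=42.5, edge/alerts=threshold exceeded: 42.5]
    }
}
```

The point of the sketch is the separation of concerns: application control code orchestrates interactions, while the SDK interfaces hide the sensor hardware and the back-end connectivity from it.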

Organisational

  • Device and asset management is the central service management capability that deals with the ongoing monitoring and support for the hardware and software installation in the field.
  • Device installation and maintenance is the field-based service that installs and maintains the physical device, sensors and assets deployed in the field.

Visualising data using MQTT and IBM Mashup Center

I’ve spoken before about how MQTT and Web 2.0 technologies can be used to extend the reach of the enterprise. When you put those pieces together, powerful possibilities emerge. We can reliably gather data from the remotest parts of the network (e.g. power stations, oil rigs, underground pipelines) and now thanks to mashup technologies rapidly put that data into the hands of users to manage their business. I’ve shown how this is possible by demonstrating how data from MQTT can be received by a messaging server and rapidly consumed into a functioning business application. I’ve uploaded a video to YouTube for posterity (hence the slightly sketchy video quality), but those inside the IBM firewall can obtain a higher quality version if they would like to see it.

Quick guide to using SSL/TLS support in MQTT and the micro broker

I have recently been using the new TLS/SSL support for the micro broker and MQTT v5 client introduced in Expeditor 6.2. I have recorded the following instructions as a quick-start guide to create a simple, server-authentication SSL connection using a self-signed certificate. Further information on configuring SSL for micro broker can be found on the Redbooks Wiki.

1. Create key pair and keystore for the micro broker

Find the binaries directory for the JRE you are using. For the Expeditor DRE (J2SE) runtime, for example, it can be found in a folder named similarly to the following (the exact name may vary with build level): C:\Program Files\IBM\Lotus\Expeditor\rcp\eclipse\plugins\com.ibm.rcp.j2se.win32.x86_1.6.0.20081029a-200811140851\jre\bin.

When in the JRE binaries directory, issue the following command to create a certificate:

keytool -genkey -alias sampleCert -keystore c:\sample\sampleStore.jks -keypass default -storepass default -dname "CN=Sample, OU=PVC, O=IBM, C=UK" -keyalg RSA

Make sure you use the same JRE to create the keystore as you use to run the micro broker, as the keystores aren’t compatible across different JREs.

2. Write the logic in a plugin to create an SSL listener on the broker

This is achieved using the micro broker administrative API. The following will add an SSL listener alongside the default non-encrypted listener.

public static final String BROKER_INSTANCE_NAME = "Sample_Broker";
private static final int BROKER_SECURE_PORT = 8883;
private static final String BROKER_KEYSTORE = "C:/sample/sampleStore.jks";
// Password as per the keytool sample above
private static final char[] BROKER_KEYSTORE_PASSWORD = {'d','e','f','a','u','l','t'};

BrokerDefinition def = brokerFactory.createBrokerDefinition(BROKER_INSTANCE_NAME);
def.setDataDirectory(Platform.getInstanceLocation().getURL().getFile() + "/microbroker");
def.setAutoStartEnabled(false);
broker = brokerFactory.create(def);

// Now start up
broker.start();
// Now add SSL support — we want both SSL and non-SSL support
// otherwise we could have specified an SSL-only approach with
// def.setDefaultListenerSecure(true);
// and then set the SSL definition as per the following snippet.
// Note that the broker must be started to add the SSL listener
// if we want both secured and non-secured listeners. The broker
// can only be configured with a single listener initially.
Communications brokerComms = broker.getCommunications();

MQTTTCPListenerDefinition mqttld =
    brokerComms.createMQTTTCPListenerDefinition(BROKER_SECURE_PORT);
mqttld.setSecure(true);
ServerSSLDefinition ssld = mqttld.createSSLDefinition();
ssld.setKeyStore(BROKER_KEYSTORE);
ssld.setKeyStorePassword(BROKER_KEYSTORE_PASSWORD);
mqttld.setSSLDefinition(ssld);
brokerComms.addTCPListener(mqttld);

Once your application has started (and hopefully your broker is configured and running successfully), you can check that the listener is active using a command such as the following (on Windows):

netstat -an | find "LISTEN"

You should see something listening on port 8883 (and on 1883 as well, if you used the snippet above).

3. Set the connection URI of the MQTT v5 client to use SSL

In the simple (server authentication) case, an MQTT v5 client can be connected using SSL simply by modifying the MQTT URI used at connection time from something like

tcp://localhost:1883

to

ssl://localhost:8883

The MQTT client will then know to attempt the connection using SSL.
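The URI convention can be captured in a small helper. This is just a sketch for illustration: MqttUriHelper is not part of the client API, and the ports simply follow the convention used above (1883 plain, 8883 SSL).

```java
public class MqttUriHelper {
    // Build an MQTT connection URI; scheme and default port follow the
    // convention above: tcp://host:1883 for plain, ssl://host:8883 for SSL.
    static String brokerUri(String host, boolean secure) {
        return secure ? "ssl://" + host + ":8883" : "tcp://" + host + ":1883";
    }

    public static void main(String[] args) {
        System.out.println(brokerUri("localhost", false)); // tcp://localhost:1883
        System.out.println(brokerUri("localhost", true));  // ssl://localhost:8883
    }
}
```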

A note on using self-signed certificates

By default, Expeditor running in GUI mode will present a dialogue prompting the user to accept or reject the server certificate, since the issuer is not a recognised certification authority (i.e. it’s a certificate we’ve created ourselves for testing).


There are a couple of things to watch out for here. If the GUI is not yet available (i.e. the MQTT client application has loaded and attempted to connect before the GUI has loaded) or is not available at all (i.e. Expeditor is running “headless”), the Expeditor SSL support will by default automatically reject certificates from untrusted certification authorities.

To prevent this, we must modify the plugin_customization.ini file, which on Windows is installed by default in

C:\Program Files\IBM\Lotus\Expeditor\rcp

Add the following line to always allow certificates from unknown certification authorities:

com.ibm.rcp.security.jceproxy/ssl.unknowncert.action=ALLOW

Note that an additional command-line parameter is required in any Run Configuration profiles using the Expeditor test environment, to point at the plugin_customization.ini file:

-plugincustomization “C:/Program Files/IBM/Lotus/Expeditor/rcp/plugin_customization.ini”


WARNING: this will cause Expeditor to always accept untrusted certificates — this tip is intended for development purposes and should NOT be used in production deployments.

Full details of Expeditor’s SSL configuration options can be found on the Expeditor Info Center.

Using the new MQTT v5 client in Expeditor 6.2

With the launch of Expeditor 6.2 comes the next generation of the MQTT client. MQTT is IBM’s specialist messaging protocol designed for use in fragile or expensive networks (e.g. mobile or satellite links) and in constrained devices such as sensors and mobile devices. This latest version of the client (version 5) adds a number of features, including the following:

  • Support for the point-to-point (i.e. queues) messaging paradigm as well as traditional pub/sub.
  • Different payload types such as a textual string in addition to the basic byte array of the previous client.
  • The ability to specify whether an individual subscription is durable (i.e. survives a disconnection and continues to collect messages) or not on a per-subscription basis. In the previous incarnation, the durability of a client’s subscriptions was specified for the whole client. This could mean resources within the broker were consumed by unnecessarily durable subscriptions.
  • The ability to connect to a messaging server over SSL and using authentication credentials.
  • A variety of additional header information for messages including expiry and priority as in JMS.
  • The ability to start and stop message delivery without having to disconnect and reconnect again. This allows the application to control the flow of messages without the overhead of a full connection handshake each time.

Full details can be found in the Javadoc but I’ve included a simple sample here to get you started.

Install the MQTT v5 client into the Expeditor client runtime

In Expeditor 6.2, the MQTT v5 client is not installed by default with the base client installer. You will need to install it as an additional feature from the update site contained in the desktop/updates/platform folder of the install media, even for use in the toolkit. A screenshot showing which feature is required is shown below.


Sample MQTT application

The following examples show a class that connects to a broker and subscribes durably for messages and a class that publishes a message. The subscriber makes use of the message delivery start/stop feature to enable message delivery only when both subscriptions are in place. You will see that I’ve included some tests that determine the type of payload of a given message as well.

Notice the different package/plug-in name for the v5 MQTT client.

import com.ibm.micro.client.MqttCallback;
import com.ibm.micro.client.MqttClient;
import com.ibm.micro.client.MqttConnectOptions;
import com.ibm.micro.client.MqttDeliveryToken;
import com.ibm.micro.client.MqttDestination;
import com.ibm.micro.client.MqttException;
import com.ibm.micro.client.MqttMessage;
import com.ibm.micro.client.MqttSubscriptionOptions;

public class MqttSubscriber implements MqttCallback {

    private MqttClient mqttClient = null;
    private String TOPIC_SUFFIX_ALERTS = "alerts";
    private String TOPIC_SUFFIX_DATA = "data";

    public MqttSubscriber() {
        super();
    }

    /* MQTT Client API */
    public void connectionLost(Throwable arg0) {
        System.out.println("Connection to the micro broker lost");
    }

    public void deliveryComplete(MqttDeliveryToken arg0) {
        // No action required in this sample.
    }

    public void deliveryFailed(MqttDeliveryToken arg0, MqttException arg1) {
        // No action required in this sample.
    }

    public void messageArrived(MqttDestination destination, MqttMessage message) throws Exception {
        System.out.println("Message has arrived over MQTT.");
        String sourceTopic = destination.getName();
        // In MQTT v5 we have different payload types.
        if (message.getPayloadType() == MqttMessage.PAYLOAD_TEXT) {
            String payload = message.getStringPayload();
            System.out.println("Message payload: " + payload);
        } else {
            // Enforce a "text only" policy.
            System.err.println("Message is not a text string.");
        }
    }

    public void start(String mqttUri, String name) throws MqttException {
        mqttClient = new MqttClient(mqttUri, "Sub_Client");
        mqttClient.setCallback(this);
        MqttConnectOptions options = new MqttConnectOptions();
        // Wait until we're ready to receive messages (overrides default)
        options.setAutoStart(false);
        // We don't want our state cleaned up; we need to keep
        // durable subscriptions.
        options.setPurge(false);
        // Connect
        mqttClient.connect(options);

        MqttSubscriptionOptions subOpts = new MqttSubscriptionOptions();
        subOpts.setDurable(true); // We want durable subscriptions
        subOpts.setQos(2); // Once and once only delivery (same as V3 QoS).
        mqttClient.subscribe("acme/sample/" + name + "/" + TOPIC_SUFFIX_ALERTS + "/+", subOpts);
        mqttClient.subscribe("acme/sample/" + name + "/" + TOPIC_SUFFIX_DATA + "/+", subOpts);
        // Ready to receive.
        mqttClient.startListening();

        System.out.println("Connected to " + mqttUri);
    }

    public void stop() throws MqttException {
        mqttClient.disconnect();
    }

}

The following is the complementary part of the sample, showing how to publish a message using the v5 client. This class is intended to be run as a simple command-line utility.

import com.ibm.micro.client.MqttClient;
import com.ibm.micro.client.MqttException;
import com.ibm.micro.client.MqttMessage;
import com.ibm.micro.client.MqttTopic;

public class MqttPublisher {

    public static void main(String[] args) {
        try {
            // Parameters: <uri> <clientid> <topic> <qos: 0, 1 or 2> <string data>
            MqttPublisher publisher = new MqttPublisher(args[0], args[1]);
            publisher.publish(args[2], Integer.parseInt(args[3]), args[4]);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    private String uri = null;
    private String clientId = null;

    public MqttPublisher(String u, String c) {
        uri = u;
        clientId = c;
    }

    public void publish(String topic, int qos, String payload) throws MqttException {
        // Use the URI and client ID passed in on the command line.
        MqttClient client = new MqttClient(uri, clientId);
        client.connect();
        // Notice the new object model for destinations and messages
        MqttTopic t = client.getTopic(topic);
        // Create a string payload — v5 discriminates between payload types unlike v3.
        MqttMessage message = new MqttMessage(payload);
        message.setQos(qos);
        t.publish(message);
        client.disconnect();
    }
}

Enjoy!

Making technology choices for client applications

When developing the end-user aspect of a client/server SOA solution, there are a variety of possibilities in terms of the technology choice for the client application. These represent a spectrum of choices, from browser-based applications at one end to natively installed “fat” client applications at the other. At the browser end, the evolution of Web 2.0 technologies and patterns means that rich and aesthetically pleasing end-user applications can be delivered through the ubiquitous web channel. At the other end, there are cases where the browser environment and web delivery model do not fully meet the application requirements and a richer client environment is required. As always, there are several “shades of grey” in between, and each project will have its own specific requirements.

I’ve been asked a few times by customers how one determines the most appropriate approach, particularly by those contemplating the transition from traditional (“Web 1.0”) applications to the RIA world. In this posting I’ve attempted to distil some of the high-level considerations that I typically give in response.

As an IBMer working in this space, you’ll see that in some cases I’ve articulated the thinking in terms of IBM’s recommended products and technologies, particularly:

  • The Dojo Toolkit for building rich, browser-based applications (RIAs).
  • Lotus Expeditor for building and managing rich client desktop and mobile applications.

Application “posture”

The nature of the application concerned provides an indicator as to the type of client technology required to construct it. “Posture” is a term coined by Alan Cooper in his book “About Face” to describe the behaviour of an application in terms of its interaction with the user. Cooper defines three postures for interactive applications: sovereign, transient and daemonic. Each posture has different requirements in terms of the fidelity of the user-interface technology. The sovereign and transient postures are the most commonly applicable to business applications.

Sovereign applications are those that monopolise the user’s focus, such as a word processor or banking teller application. Speed and power are typically of the essence, and users of sovereign applications become expert users quickly due to the sheer amount of time spent with the application. Keyboard shortcuts are a common feature. The sovereign application will typically expect to benefit from the full range of user interface services available on the machine. For this reason, rich client technologies (such as Expeditor) that are installed natively (i.e. with tighter integration with the operating system and hardware) are often the most appropriate choice for this posture. The richness now possible in the browser with AJAX frameworks (such as Dojo) and other RIA technologies is starting to blur this boundary, though the web browser itself will still ultimately restrict the application’s access to the native UI.

By contrast, transient applications are those that come and go, responding to a particular request and servicing a constrained set of goals. A good example is a consumer banking or insurance quotation application. Typically users visit less often, and as such ease of use and guidance are of higher concern than speed. From a posture perspective, the browser comes into contention here: there is less need for full control over the desktop, and it offers reach and easy-to-use presentation and controls.

Connectivity available to the application

Connectivity is one of the most important considerations for the client technology choice for the application. Will the user be permanently connected to a network or does the nature of their role mean that they will be connected only sometimes when using the application?

Whilst AJAX techniques changed the architecture and model of interaction between the client and server (i.e. more functionality is loaded for a given single page request), web applications remain inherently a “connected” technology. A typical operating environment of a desktop PC connected via a high-speed LAN such as wired Ethernet or Wifi is a sound infrastructure platform for a web-delivered application. Corporate intranets are a prime example of this type of application.

When the connectivity model for the application is “sometimes connected”, the application must function irrespective of whether a network is available. This requires a richer environment to insulate the application from network breakage. For example, we might want to reliably batch up requests to the server until such time as the network is available again. A good example of such an application is a mobile sales representative in the field collecting orders from customers, where typically they are not connected to a network or are reliant on patchy network coverage. In this scenario, the representative typically collects orders during the day and synchronises with the back office in the evening when connected again. Web browsers do not, as standard, offer sufficiently rich capabilities to support this mode of operation. For example, without additional plug-ins, network connectivity is limited to HTTP and data cannot easily be persisted whilst working offline. Rich client technologies like Expeditor can store data to disk or in a relational database and leverage more sophisticated connectivity technologies to reliably store and forward the data to the enterprise when connected.
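The store-and-forward idea described above can be sketched in plain Java. The Transport interface and StoreAndForwardQueue here are hypothetical illustrations, not Expeditor APIs; a real implementation would persist the pending queue to disk rather than hold it in memory, and would use a reliable protocol such as MQTT or JMS for delivery.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class StoreAndForwardSketch {

    // Hypothetical transport abstraction; a real client would use MQTT, JMS, etc.
    interface Transport {
        boolean isConnected();
        void send(String message);
    }

    // Buffers messages while the network is down and flushes them,
    // in order, once connectivity returns.
    static class StoreAndForwardQueue {
        private final Deque<String> pending = new ArrayDeque<>();
        private final Transport transport;

        StoreAndForwardQueue(Transport transport) {
            this.transport = transport;
        }

        void submit(String message) {
            pending.addLast(message);
            flush();
        }

        void flush() {
            while (transport.isConnected() && !pending.isEmpty()) {
                transport.send(pending.removeFirst());
            }
        }

        int pendingCount() {
            return pending.size();
        }
    }

    // Simple in-memory transport used to demonstrate the behaviour.
    static class FakeTransport implements Transport {
        boolean connected = false;
        final List<String> delivered = new ArrayList<>();

        public boolean isConnected() { return connected; }
        public void send(String message) { delivered.add(message); }
    }

    public static void main(String[] args) {
        FakeTransport net = new FakeTransport();
        StoreAndForwardQueue queue = new StoreAndForwardQueue(net);

        // Orders collected in the field while disconnected.
        queue.submit("order-1");
        queue.submit("order-2");
        System.out.println("pending while offline: " + queue.pendingCount()); // 2

        // Back at base: the network returns and the batch is forwarded.
        net.connected = true;
        queue.flush();
        System.out.println("delivered: " + net.delivered); // [order-1, order-2]
    }
}
```

The essential property is that the application keeps working while disconnected and the queue preserves ordering when it drains, which is exactly what the sales-representative scenario requires.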

Access to the operating system and hardware

Closely linked to the question of connectivity is the requirement to communicate with resources native to the installation environment. Files on the local file system are a simple example; Windows registry entries are another. A further common scenario is accessing specialised devices attached to the desktop machine, for example a chip-and-PIN reader in a banking scenario.

Where access to the native file system or devices is required, the browser becomes an increasingly problematic environment. Signed Java applets are intended to allow increased access to native resources, but they can be complex to install and configure correctly, particularly at enterprise scale, due to the variety of JVM and browser combinations. As we have already noted, the primary I/O mechanism of a web browser is HTTP, which is well suited to consuming documents and feeds but less well suited to more complex resources such as proprietary hardware device drivers. Furthermore, the browser is subject to the same-domain security restriction, which further limits its capabilities even with HTTP. A browser plug-in would be required to extend the browser with richer I/O support, which is not only a relatively complex task but also adds dependencies on particular browser brands and operating systems. Rich clients typically offer better access to the more primitive interfaces of the native platform, devices and so on through richer programming environments such as Java. In the case of Expeditor, the benefits of Java are further augmented by a standardised (OSGi-based) plug-in architecture that facilitates the development and reuse of components.

Management model

A key driver in terms of the business case for the client technology solution is the target infrastructure for the application and its associated cost to the business. The requirements for provisioning the application are a central factor in the cost/benefit analysis of the solution.

Natively installed applications generally require more time and supporting IT skills, both of which naturally increase the cost of ownership. Native applications offer the richest functionality but add a tight dependency on the operating system and hardware. Furthermore, some aspect of installation is always required which in an enterprise scenario will often require IT support and management along with the associated costs that implies.

Browser-based applications have a broader reach due to the higher level of abstraction of the application tier from the operating system. Since the mid-1990s the presence of a web browser can be taken for granted on every desktop, and as such web applications represent what is known as the “zero footprint” option, since they require no native installation process at the client. The browser simply requests the application via a URL and the latest version is downloaded on demand. There is little or no deployment cost at the client when introducing new function, as it will be provisioned the next time the application is requested.

There is, however, an approach that blends the richness of a native application with the lowered cost of ownership of a centralised management model. Expeditor provides what is known as a managed client for desktop PCs and mobile devices. The native installation of the Expeditor client installs a base application container into which the functional components of an application can then be deployed and managed from a central server. In this way new applications or updates to existing business function can be provisioned without the need for intervention by the end user or IT support.

The legacy on the desktop

When considering deployment infrastructure we also need to consider any legacy applications that a client solution needs to accommodate. In a “greenfield” environment (i.e. where we are developing a new solution from scratch) it is very often the case that a homogeneous technology platform can be adopted, such as a browser-based web portal or such like. When there are existing applications that are too costly to replace (i.e. “brownfield”) a solution is required to accommodate the legacy applications alongside new functionality and provide a transition path into a common technology platform.

If there are existing applications to accommodate that are delivered via web-based channels, then a web portal can be deployed to aggregate the applications together within a single browser application. Depending on how the original applications are constructed some functional integration can be achieved through the portal infrastructure. At the very least the applications can be functionally grouped within the user interface according to their function to help streamline the task flow. This allows some degree of integration for the web channel without a “rip and replace” of the legacy applications.

In many scenarios, however, the incumbent applications are implemented in a variety of technologies e.g. web, native applications, terminal sessions that cannot easily be aggregated with a purely browser-based portal. Consider a call centre where often a user task involves interacting with a number of individually installed desktop applications, often due to the organic growth of the desktop platform with point application choices over a period of time. In such environments, Lotus Expeditor provides support for efficient composition of a heterogeneous desktop through its Composite Applications Environment (CAE). In CAE, applications built using different technologies can be integrated at-the-glass in a similar fashion to a portal (or indeed mashup) using a simple wiring model and GUI tools – i.e. integrating data from one application with that of another without the need for code or ripping and replacing the applications. This can bring not only the benefits of application integration in terms of time and cost savings, but also provides a strategy to transition the legacy applications over time onto a common technology base whilst the enterprise can protect its investment in the existing applications.

Communication with existing business services

In an SOA the client application must be able to invoke the underlying services to fulfil the business function. The popularity of AJAX applications in the web browser has driven the growth of simple, lightweight service endpoints exposed using HTTP and REST. This style of SOA (known as Web-Oriented Architecture, or WOA) is well suited to the presentation of information in the web browser and to invocation of logic in the application server where a particularly high quality of service between client and server is not required (i.e. HTTP is good enough). I’ve previously discussed WOA and the value it can add to SOA in an earlier posting.

In some cases, however, applications have higher quality-of-service requirements that call for more complex protocols to underpin the business function. For example, a retail point-of-sale application might use reliable messaging via a JMS messaging server to ensure that details of a transaction are delivered to a payment processing service. Typically such providers require a richer programming runtime than the browser, such as Java or C. Similarly, the application may require direct transactional access to a database using JDBC. Again, Expeditor in its various flavours provides such an environment through its support for Java and enterprise standards such as JMS and JDBC, and connectivity into enterprise middleware and database servers.

Availability of skills within the enterprise

The pragmatics of delivery have significant impact not only on the technology choice but also the associated costs and benefits of the solution. For example, selection of an unfamiliar technology platform will require the development or acquisition of new skills. In some cases the requirements of the application may dictate a particular technology choice and the benefit to the business of the solution will be sufficiently significant that it outweighs this cost. By the same token, the ability to leverage skills already available within the organisation may reduce the associated cost of development and ownership to such a degree that the benefits become more compelling in a borderline business case.

HTML skills have become widely available and are common entries on the résumés of application developers. In this respect, the development of HTML-based applications for browser environments is an attractive proposition financially when planning for the development and maintenance of a solution. HTML and JavaScript are open, supported and widely accepted technologies for the presentation tier of an application. Their adoption represents less of a strategic risk to the enterprise, since they do not tightly couple the application to a native platform or proprietary programming model. Furthermore, proprietary technologies (such as Flash) can be accommodated within an HTML-based application where needed. Toolkits such as Dojo provide comprehensive programming models and rich widget libraries as accelerators to shorten the time to value in developing the application.

When a richer client platform is required, the provision of a web container inside the Lotus Expeditor client provides the capability to exploit the prevalent skills and short time to value of the browser environment in conjunction with the additional capabilities of the rich client. The example below is from a scenario where desktop client applications in a retail branch connect to a local micro broker in an edge SOA solution.


Using the micro broker JMS client

The following is a brief example of how to instantiate and use the micro broker’s JMS client in an Expeditor environment.

Setting project dependencies

You will need the following plug-in dependencies specified in your MANIFEST.MF:

  • com.ibm.mqttclient.jms
  • com.ibm.micro.utils
  • com.ibm.pvc.jms

Creating your JMS Connection Factory programmatically

The micro broker provides a factory (a “factory factory”!) for creating JMS Connection Factory instances. This factory is accessed as follows:

JmsFactoryFactory jmsFactory = JmsFactoryFactory.getInstance(MQTTConstants.PROVIDER_NAME);

// Obtain a connection factory
ConnectionFactory cf = jmsFactory.createConnectionFactory();
// We need to set the target endpoint on the connection factory
// Connecting to localhost over TCP
((JmsConnectionFactory) cf).setStringProperty(MQTTConstants.MQTT_CONNECTION_URL, MQTTConstants.MQTT_TCP_SCHEMA + "127.0.0.1:1883");

Note that we set some MQTT-specific properties, such as the connection URL. A client ID is mandatory for the micro broker JMS client too. Connection factories can also be created declaratively using an Expeditor extension point; details can be found in the Expeditor Info Center.

Creating a JMS Connection

The main point to note here is that the micro broker requires a client ID to be specified.

// Now obtain a connection from the factory
Connection conn = cf.createConnection();
// A client ID is required
conn.setClientID("Pinger");
// Connection is ready for use.

A sample JMS application

// Connection Factory
JmsFactoryFactory jmsFactory = JmsFactoryFactory.getInstance(MQTTConstants.PROVIDER_NAME);
// Obtain a connection factory
ConnectionFactory cf = jmsFactory.createConnectionFactory();
// We need to set the target endpoint on the connection factory
// Connecting to localhost over TCP
((JmsConnectionFactory) cf).setStringProperty(MQTTConstants.MQTT_CONNECTION_URL, MQTTConstants.MQTT_TCP_SCHEMA + "127.0.0.1:1883");
...
// Now obtain a connection from the factory
Connection conn = cf.createConnection();
// A client ID is required
conn.setClientID("Pinger");
// Finally start the connection up
conn.start();
...
Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(session.createTopic(MICROBROKER_TOPIC_NAME_OUTBOUND));
MessageConsumer consumer = session.createConsumer(session.createTopic(MICROBROKER_TOPIC_NAME_INBOUND));
...
TextMessage requestMsg = session.createTextMessage("{ \"telephone\" : \"04962 915000\" }");
requestMsg.setStringProperty("TargetFunctionName", "checkDSL");
producer.send(requestMsg);
...
Message responseMsg = (Message) consumer.receive(JMS_REQUEST_TIMEOUT);
...
session.close();
conn.stop();
conn.close();

Bridging to the WebSphere JMS provider from the micro broker

One of the key value propositions in the micro broker is its ability to connect the integration hub at the edge with an enterprise messaging server. The Bridge within the micro broker provides this capability in three flavours:

  • MQTT (v3) direct into another micro broker or Message Broker.
  • WebSphere MQ JMS – requires the MQ JMS client to be packaged as a plug-in to Expeditor.
  • Third-party JMS providers using JNDI – requires the third-party JMS client to be packaged as a plug-in within Lotus Expeditor.

The third flavour above is what we use to connect the micro broker to the WebSphere JMS provider found in WAS, WESB and WPS.


This post describes how one configures the bridge to make use of the WebSphere JMS provider.

Note that Expeditor must be configured to use the Device Runtime Environment (DRE) J2SE JDK since the WebSphere JMS client requires a full J2SE runtime to work properly.

Creating the WebSphere JMS administered objects for JNDI programmatically

For the JNDI flavour of the bridge, a JMS Connection Factory and the target JMS Destinations must be made available using a JNDI provider. The Bridge is then configured with the appropriate JNDI keys to the administered objects, and at runtime it retrieves the objects from JNDI using those keys. Both JMS flavours of the bridge also require a "sync queue" in order to honour the once-only persistent quality of service.

There are two approaches to creating the JMS administered objects. The first is to use the WebSphere JNDI provider (JNDI over IIOP), which requires packaging the WebSphere JNDI client as an Expeditor plug-in. This is a good approach when Expeditor is in close and reliable network proximity to the WebSphere server, since the JMS administered objects can be created automatically alongside the Messaging Destinations in WebSphere. Using a remote JNDI repository also fits the JEE administrative model, in that the objects can be managed independently of the application. In scenarios where the network may be unreliable (or indeed only available at certain times), the alternative is to create the administered objects programmatically and bind them into the built-in JNDI provider within Lotus Expeditor. This means that even without a network at start-up time, the bridge can at least start, even if it cannot successfully connect straight away.

Packaging the WebSphere JMS client in Expeditor

To do this, we need to make the WebSphere JMS client available to the Lotus Expeditor runtime in the form of an Expeditor Feature.

A reduced footprint variant of the client is available as a feature pack for WAS. In order for these classes to be found by the Expeditor runtime, you will need to create a plug-in in Eclipse containing the WebSphere JMS client JAR files, exporting all the packages contained in the JARs (apart from javax.jms) in the plug-in's MANIFEST.MF.

There are a couple of very important points to note:

  1. You will need to make sure that the javax.jms package is NOT exported and that any classes in this package are removed from the underlying JAR file. Expeditor already exports its own version of the JMS interfaces in the platform, and two plug-ins exporting the same packages cause JVM errors at runtime.
  2. When you package your JMS client, the feature.xml referencing the plug-in should have the unpack attribute set to true for the plug-in containing the JMS client. For the packages within the underlying WebSphere JMS client JAR to be exported correctly when deployed, the JAR cannot be nested inside the plug-in JAR; this is a limitation of the OSGi class loading mechanism.
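As a sketch, the feature.xml entry for the plug-in might look like the following; the feature and plug-in ids and versions are placeholders for your own packaging, and the significant part is `unpack="true"` on the plug-in that wraps the JMS client JARs.

```
<feature id="com.example.wasjms.feature"
         version="1.0.0"
         label="WebSphere JMS client feature">
   <plugin id="com.example.wasjms.client"
           version="1.0.0"
           unpack="true"/>
</feature>
```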

Creating the administered objects

The following snippet shows how the WebSphere JMS administered objects are created and bound into the Expeditor JNDI repository.

import javax.jms.JMSException;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import com.ibm.websphere.sib.api.jms.JmsQueue;

public static final String JNDI_NAME_SIB_CONNECTION_FACTORY = "jms/SIBConnectionFactory";
public static final String JNDI_NAME_SIB_OUTBOUND_QUEUE = "jms/SIBOutQueue";
public static final String JNDI_NAME_SIB_INBOUND_QUEUE = "jms/SIBInQueue";
public static final String JNDI_NAME_SIB_SYNC_QUEUE = "jms/SIBSyncQueue";
public static final String JNDI_NAME_SIB_DEAD_LETTER_QUEUE = "jms/SIBDeadLetterQueue";

...

/**
 * Use the WebSphere JMS provider's Factory APIs to create the necessary objects
 * and bind them in JNDI.
 */
private void createAndBindSibJMSObjects() {
	try {
		// Create the ConnectionFactory
		com.ibm.websphere.sib.api.jms.JmsFactoryFactory jff = com.ibm.websphere.sib.api.jms.JmsFactoryFactory.getInstance();
		com.ibm.websphere.sib.api.jms.JmsConnectionFactory cf = jff.createConnectionFactory();

		cf.setBusName("SCA.APPLICATION.domino8Node01Cell.Bus");
		cf.setProviderEndpoints("localhost:7276:BootstrapBasicMessaging");

		// Create the request queue (messages on the outbound topic will be forwarded here)
		JmsQueue inQueue = jff.createQueue(
				"PDARemoteSales.MQTT_JSONoverJMS_Export_RECEIVE_D_SIB");
		// Messages expire after 3 minutes if not consumed
		inQueue.setTimeToLive(180000L);

		JmsQueue outQueue = jff.createQueue(
				"PDARemoteSales.MQTT_JSONoverJMS_Export_SEND_D_SIB");
		// Messages expire after 3 minutes if not consumed
		outQueue.setTimeToLive(180000L);

		JmsQueue deadLetterQueue = jff.createQueue(
				"PDARemoteSales.Microbroker.DeadLetter");
		// Messages expire after 3 minutes if not consumed
		deadLetterQueue.setTimeToLive(180000L);

		JmsQueue syncQueue = jff.createQueue(
				"PDARemoteSales.Microbroker.SyncQueue");
		// Messages expire after 3 minutes if not consumed
		syncQueue.setTimeToLive(180000L);

		try {
			InitialContext ctx = new InitialContext();
			ctx.bind(JNDI_NAME_SIB_CONNECTION_FACTORY, cf);
			ctx.bind(JNDI_NAME_SIB_INBOUND_QUEUE, inQueue);
			ctx.bind(JNDI_NAME_SIB_OUTBOUND_QUEUE, outQueue);
			ctx.bind(JNDI_NAME_SIB_SYNC_QUEUE, syncQueue);
			ctx.bind(JNDI_NAME_SIB_DEAD_LETTER_QUEUE, deadLetterQueue);

			System.out.println("All JMS SIB objects bound");
		} catch (NamingException ex) {
			System.err.println("Failed to bind JMS instances.");
			ex.printStackTrace();
		}

	} catch (JMSException e) {
		System.err.println("Failed to create JMS instances");
		e.printStackTrace();
	}

}

The first portion of the code uses the WebSphere JMS client’s Factory classes to instantiate the JMS administered objects with the latter portion binding them into Expeditor’s JNDI provider (note the Expeditor InitialContext will be used by default within an Expeditor environment).

Configuring the micro broker Bridge

The following snippet shows how the Bridge is configured using the micro broker's administrative API. In this scenario we are bridging into WPS, sending messages from a topic in the micro broker to a queue in WPS, and from a queue in WPS back to a topic in the micro broker. By this point we have looked up a LocalBroker instance from the micro broker BrokerFactory service.

// Obtain a handle to the broker's bridge
Bridge bridge = broker.getBridge();

// Create a pipe definition -- this is the root of all bridge links
PipeDefinition pipe = bridge.createPipeDefinition("sibPipe");

JNDIConnectionDefinition connectionDefinition = bridge.createJNDIConnectionDefinition(pipe.getName()+"_Connection");
// Set the Initial Context to be used by the bridge to retrieve the administered objects
connectionDefinition.setInitialContext("com.ibm.pvc.jndi.provider.java.InitialContextFactory"); // uses the parameterised initial context factory i.e. XPD
connectionDefinition.setConnectionFactoryKey(JNDI_NAME_SIB_CONNECTION_FACTORY);
connectionDefinition.setDeadLetterKey(JNDI_NAME_SIB_DEAD_LETTER_QUEUE);
connectionDefinition.setSyncQKey(JNDI_NAME_SIB_SYNC_QUEUE);
connectionDefinition.setURL("none"); // Not meaningful for Expeditor JNDI but *is* still required to be set.

pipe.setConnection(connectionDefinition);

// Create an outbound flow that reads from a topic called "localoutbound" and
// puts to the remote queue.
FlowDefinition outbound = bridge.createFlowDefinition("outboundFlow");
// Set the source to be a single topic, "localoutbound"
outbound.setSources(new DestinationDefinition[] { bridge.createTopicDefinition("localoutbound")});
// Set the destination to be a queue on the remote WPS, using the reference bound in JNDI
outbound.setTarget(bridge.createJNDIDefinition(JNDI_NAME_SIB_INBOUND_QUEUE));
// Add the flow to the pipe
pipe.addOutboundFlow(outbound);

// Create an inbound flow that reads from a remote queue called "outbound" and
// puts to a topic called "localinbound"
FlowDefinition inbound = bridge.createFlowDefinition("inboundFlow");
// Set the source to be a queue on the remote WPS, using the reference bound in JNDI
inbound.setSources(new DestinationDefinition[] { bridge.createJNDIDefinition(JNDI_NAME_SIB_OUTBOUND_QUEUE)});
// Set the destination to be a topic called "localinbound" on the local broker
inbound.setTarget(bridge.createTopicDefinition("localinbound"));

// Add the flow to the pipe
pipe.addInboundFlow(inbound);

// Pipe is configured, add it to the bridge
bridge.addPipe(pipe);
// Start the pipe
bridge.startAllPipes();

The pipe is now ready for use.