Category Archives: Snippets

Calling a REST service from Swift — a quick example

I figured this would be a recurring theme, so I thought I’d write it down for posterity, not least so I didn’t lose it. The scenario is a common one — we have a mobile app for iOS written in Swift where, in response to some kind of user trigger, we need to go off to the server to do some work via a REST endpoint using JSON as our protocol format. Here’s a snippet I created to do exactly this, a variation on similar examples out there, with some notes on things I found along the way.

Let’s assume we’re in a simple View Controller where we’re responding to stimulus from the user in the form of a button. I have an action handler called getData to respond to a button push.

@IBAction func getData(sender: AnyObject) {
   dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
      self.getJsonData()
   })
}

You will notice a couple of things. Firstly, we’re calling another function on the controller that will actually do the heavy lifting. I’ve separated that out purely to make things easier to read and follow. The second thing is that I’m using the Grand Central Dispatch mechanism in Swift to spin off the action onto a separate thread. This follows general good practice for user interfaces: you shouldn’t do work on the main thread that might block for some period of time, such as writing to the network or disk. Doing so could cause your app to hang and potentially be shut down by iOS as unresponsive (Apple themselves explain this nicely).

So now to the main guts of the code. In my simple case here I’m doing an HTTP GET of a JSON feed that returns an array of items that looks a bit like this:

    "device": "TEST",
    "event": "Testing",
    "date": "2014-08-12T08:31:25.449Z",
    "_id": "53e9d0ddf5d6a92100b8254c"
    "device": "TEST",
    "event": "Testing2again",
    "date": "2014-08-12T09:25:25.062Z",
    "_id": "53e9dd85f5d6a92100b8254d"

Here’s the function that does the work of going off to the server to get the feed.

func getJsonData() {
   var urlPath = "http://myserver/myaudits"
   var url = NSURL(string: urlPath)
   var session = NSURLSession.sharedSession()
   var task = session.dataTaskWithURL(url!,
      completionHandler: { data, response, error -> Void in
         println("Task completed")
         if (error != nil) {
            // Transport-level failure, e.g. no network
            println("Error \(error.localizedDescription)")
            return
         }
         var err: NSError?
         var results = NSJSONSerialization.JSONObjectWithData(data,
            options: NSJSONReadingOptions.MutableContainers,
            error: &err) as NSArray
         if (err != nil) {
            println("JSON Error \(err!.localizedDescription)")
            return
         }
         println("\(results.count) JSON rows returned and parsed into an array")
         if (results.count != 0) {
            // For this example just spit out the first item's "device" entry
            var rowData: NSDictionary = results[0] as NSDictionary
            var deviceValue = rowData["device"] as String
            println("Got \(deviceValue) out")
         } else {
            println("No rows returned")
         }
   })
   task.resume()
}
Note that I’m explicitly casting the parsed JSON as an NSArray. One thing I discovered is that Swift isn’t massively tolerant if you get this wrong, for example if the JSON resolves to an Object, then you need to cast to an NSDictionary. Rather than return a useful message, it simply crashed the app. Now of course this was due to a genuine developer gaffe, but still it soaked up a fair amount of time trying to work out what was wrong.

With that, you should be all set. Hope this is useful.

Parsing LLAP packets using NodeRED on the Raspberry Pi

It’s taken a few weeks but here is the next instalment in my promised series of articles. Part of the scenario requires me to detect a particular scenario using a remote sensor and relay it on to a variety of other systems via my Raspberry Pi. The sensor itself is assembled using the rather brilliant Ciseco Wireless Inventor’s Kit which provides all you need for prototyping remote sensors and transmitting data over a short range radio link back to a waiting Pi.

It’s a pretty simple yet powerful set of kit. You build your sensor using the breadboard and XinoRF module supplied which communicates with a Slice of Radio transceiver module you attach to the Pi’s GPIO pins. Your applications then send and receive data to and from the remote sensor via the Pi’s serial port and you’re away. The protocol used for the serial communications is designed by Ciseco and is called Lightweight Logical Application Protocol (LLAP). It takes the form of a simple fixed-field text datagram and as such can be parsed by pretty well any programming language that can interrogate the serial port.

In my case I’m using NodeRED to parse the LLAP packets so I can invoke other services in response to an event triggered via a sensor. The event is detected by a switch (a pressure mat in fact) that sets the D03 pin on the XinoRF module to be “high”. The nice thing about how the XinoRF works is that it operates in an event-driven way — i.e. you don’t need to poll to check the state of the pin, it transmits when there is a change in the state from “low” to “high” or vice versa. The other XinoRF pins perform different functions in different ways, but I thought I’d show how to at least send and receive so others can tailor to their scenario.

I’m assuming here that folks are familiar with the NodeRED concepts, so will get straight into the meat of how it works.

Configuring a serial input node to receive LLAP packets into a flow

The first thing to do is configure the serial input node in NodeRED. The settings for my node are shown below.

Screen Shot 2014-10-14 at 17.56.37

You can confirm the serial port name using the RasWIK documentation and tools. A particular point to note is the Split input setting, which determines the point at which the input node offers up data to the next node in the flow. This can be set to a delimiter, or set to a timeout, wherein if no data is received within a certain period the transmission is considered complete. I discovered that the latter timeout approach worked with a setting of 250 milliseconds, though it does introduce the possibility of multiple LLAP packets being received in one go, which we now need to address within the next node in our flow.

Creating a function node to segment multiple LLAP packets into individual messages

For greatest efficiency we want all subsequent nodes in the flow to be designed to ingest a single LLAP packet per message. To ensure this downstream behaviour is honoured, we create a function node after our serial input node to check for multiple packets and, if necessary, segment them into individual messages. The way NodeRED is designed to work, we can return an array of messages and it will cycle through each message in turn, invoking the remainder of the flow as if triggered by a single message.

The first thing we do in our function node is test for the simple case of a single packet which we can test by the length of the payload, LLAP packets being 12 characters in length.

var instring = msg.payload;
if (instring.length <= 12) {
   console.log("Single packet: "+instring);
   return msg;
}

If the length of the payload is longer, it means we have multiple packets so we need to iterate over the payload, extracting each packet into an array of new messages to return at the end of the function.

else {
   var packets = [];
   console.log("Multiple packets: "+instring);
   while (instring.length >= 12) {
      var packet = instring.substr(0, 12);
      console.log("Unspliced packet: "+packet);
      packets[packets.length] = {payload:packet};
      instring = instring.substr(12);
      console.log("instring is now length "+instring.length);
   }
   return [packets];
}
return null;

An important point to note (highlighted above) is that although we have created an array of payload objects, NodeRED requires that we in turn encase that array in another array when we return it for processing downstream.
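For completeness, the same splitting logic can be lifted into a plain function that is easy to exercise outside NodeRED; the function name and test string below are my own invention, not part of the flow itself:

```javascript
// Split a string of concatenated LLAP packets into an array of
// NodeRED-style message objects, one 12-character packet per message.
function splitLlapPayload(payload) {
   var PACKET_LENGTH = 12; // LLAP packets are a fixed 12 characters
   var packets = [];
   var instring = payload;
   while (instring.length >= PACKET_LENGTH) {
      packets.push({ payload: instring.substr(0, PACKET_LENGTH) });
      instring = instring.substr(PACKET_LENGTH);
   }
   return packets;
}

// Two packets arriving in one serial read come back as two messages
console.log(splitLlapPayload("a--D03HIGH--a--D03LOW---").length); // 2
```

In a function node you would return the result wrapped in one more array, as noted above.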

Parsing the LLAP packet

In our subsequent function node, we can now parse the single LLAP packet to determine whether we have a “high” event for the D03 pin we’re interested in. The format of an LLAP packet in this case would be:

a--D03HIGH--

Our code simply needs to check the packet that is received and parse it to see if it corresponds to the above event. Note that in a simple scenario we could just do a direct string comparison between what we’ve received and a template packet such as that above, but if we have multiple stations (denoted by the two characters after the leading “a”, “--” in the example) then we might have more complex flow control logic. My function node simply emits the word “HIGH” if we’ve got a D03 “high” event from the XinoRF.

var thePayload = msg.payload;
var pin = thePayload.substring(3,6);
if (pin == "D03") {
   // Now test if it was HIGH
   if (thePayload.substring(6,10) == "HIGH") {
      console.log("HIGH on D03");

Now in my scenario, the switch (pressure mat) could toggle on and off in quick succession which would technically send a number of LLAP packets back to the Pi. Within a certain time window, I’m only interested in a single “high” event, so have added some logic using NodeRED’s context to test when I last received a “high” event and only taking action if it was more than a minute ago.

      var timeNow = new Date();
      console.log("timeNow is "+timeNow);
      if (context.lastHigh) {
         console.log("context.lastHigh is "+context.lastHigh);
         var lastTimeMillis = context.lastHigh.getTime();
         var currentTimeMillis = timeNow.getTime();
         var delta = currentTimeMillis - lastTimeMillis;
         if (delta < 60000) {
            console.log("Less than a minute since last HIGH: "+
               delta+" ms.");
            return null;
         }
         console.log("Over a minute, actually "+delta+" ms.");
      }
      context.lastHigh = timeNow;
      return {payload: "HIGH"};
   }
}
return null;

So, if we get a “high” event and we haven’t seen one for at least a minute, the subsequent nodes in the flow will receive a packet with a payload of “HIGH”.

And with that we’re free to continue our processing, which could be as simple as a debug node. Here’s a snapshot of my NodeRED workspace — in this case I’m using the “high” event to trigger a Tweet.

Screen Shot 2014-10-14 at 18.33.17



Configuring the Raspberry Pi to share a Mac’s internet connection

One of the great things you can do with a Mac is share your internet connection, and I’ve found this particularly invaluable for doing development with my Raspberry Pi, especially when I want to take it somewhere to show people. Normally it connects to my home router, where I can configure a static IP address for it so I know where to find it and off I go; not so easy when you’re mobile.

There are a number of articles out there claiming to show you how to make it all work when you Google for it, but it still took me a fair bit of puzzling and piecing of the bits together to come up with something that worked. So, to save you that trouble, I’m recording my configuration here.

First set up your Ethernet adapter

The mechanism I’m using to connect my Mac (Macbook Pro Retina running OS X Mavericks 10.9.5) is via good old Ethernet. For this you’ll need the (rather expensive) Thunderbolt to Ethernet adapter if your Macbook (like mine) doesn’t have an onboard Ethernet socket. You can get cheaper non-Apple alternatives but I’ve no experience of whether they work or not. Once plugged in, open up System Preferences, and then Network. You’ll notice your Ethernet port appears on the list of adapters on the left hand side. You can give the configuration a name of its own; I’ve imaginatively called mine “Raspberry Pi Ethernet (home)” as you can see below.

Screen Shot 2014-10-14 at 15.49.52

Now of course DHCP would be fine, and whilst you can probably guess which IP the Mac’s DHCP server will give you each time, it’s more predictable to have a static IP defined, particularly when running your Pi in “headless” mode. There are therefore some settings we need to put in place to make this hang together, not least a static IP configuration for the Mac as it will appear to the Pi, so we can use it as our IP gateway. As in the above picture, use the following settings:

  • Configure IPv4: Manually
  • IP Address:
  • Subnet mask:

Because we’re using a static configuration, we now need to set the DNS server settings ourselves. Without any DNS servers set, the Ethernet connection (and therefore the Pi) will not be able to resolve any of the domain names we throw at it.

Click the Advanced… button, then the DNS tab and then the + button to add a DNS server entry.

Screen Shot 2014-10-14 at 15.59.09

You can simply lift the DNS settings from your Wifi configuration (provided of course your wifi is already up and happily resolving!) or indeed use some public DNS server settings. I’ve got a mix of public DNS, a couple from work and my home router. Click OK when you’re done to save the settings.


Having set up the Ethernet port, we now need to tell the Mac we want to share the wifi internet connection with devices connected via Ethernet. It couldn’t be easier — back to System Preferences, select Sharing and tick Internet Sharing. Select Wi-fi and Thunderbolt Ethernet as the points where you want to share from and to as per the screenshot below.

Screen Shot 2014-10-14 at 16.06.29



I was surprised how resilient the local configuration between the Mac and the Pi was when I was fiddling with these next steps. Getting the Mac and the Pi to see each other is the easy bit, the harder bit was making the Pi be able to see the internet beyond. Apple have made things easier still by making the Ethernet adapter work out when you’re using it to connect to another machine and automatically flipping into “crossover” mode so you can connect the Pi with bog standard Ethernet cable. Connect the Pi to the Mac’s Ethernet adapter and start it up.

First time of asking, the Pi will be assigned an IP address by the Mac’s DHCP server (assuming your Pi by default is set up for DHCP on the Ethernet port). I’ve not tested the theory of whether that address ever changes, but to make absolutely sure (and certainty is definitely preferred when doing demos) we will configure the Ethernet adapter with static settings based on the Mac’s configuration. Once the Pi is started, connect via SSH from the Mac and, once logged in and at the command prompt, enter the following command:

sudo vim.tiny /etc/resolv.conf

This will open the Vim editor (same commands as Vi). You want a single line in it, as follows:
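The address to use is the static IP you gave the Mac’s Ethernet port earlier; with a purely illustrative address of standing in for it, the file would contain:

```
nameserver
```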


This tells the Pi to use the Mac as its DNS server, rather like you might expect with a connection to a home router.
Having saved the changes, we now need to set the static configuration for the Ethernet port. We’ll use Vim again, this time enter the following command:

sudo vim.tiny /etc/network/interfaces

This time there are a few more lines to add. If you want to make sure you don’t forget your existing Ethernet settings, you can comment them out with a hash. Otherwise, your settings should look as follows:

auto eth0
iface eth0 inet static

This tells the Pi to adopt its static address permanently, using the Mac as the IP gateway with the corresponding subnet mask.
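Concretely, with purely illustrative values (an address of for the Pi, the common netmask, and a gateway of standing in for the static IP you gave the Mac’s Ethernet port), the stanza might read:

```
auto eth0
iface eth0 inet static
```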

You’re now all set. For good measure, restart the Pi and all the changes should stick.

The Proof in the PUDDING (or PI)

In my case I’ve been using TightVNC to access the Pi’s desktop, so as a crude test that everything was hunky dory I simply open up my Midori browser and try and hit the public internet. For posterity, everything looks as it should when I try and hit this blog:

Screen Shot 2014-10-14 at 16.29.13





Simple database operations via HTTP with Node JS, MongoDB and Bluemix

In my previous posting I mentioned how I’d planned to harvest some of the useful fragments from a home project that might be useful to others. This time I thought I’d capture the simple mechanism I created to keep an audit database for my application using MongoDB running on Bluemix.

I opted for Mongo due to its simplicity and close integration with JavaScript — a perfect option for creating something quickly and easily to run in a Node JS environment. Mongo is a NoSQL database, meaning that you don’t have to define a specific schema for data, you simply store what you need as a JSON object in the form of key/value pairs. This means you can store and retrieve a wide variety of different data objects in the same database, and aren’t constrained by decisions made early on in the project if your needs change. Whilst it wasn’t a design point of mine, Mongo also is designed to scale too.
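To illustrate what the lack of a schema means in practice, here is a minimal sketch using plain JavaScript objects to stand in for stored documents; the audit fields echo this post, while the second document’s firmware field is just an invented example:

```javascript
// A stand-in for a Mongo collection: simply an array of JSON documents
var docs = [];

// An audit-style document like the ones used later in this post
docs.push({ device: "TEST", event: "Testing", date: new Date() });

// A differently shaped document can be stored with no migration step
docs.push({ device: "TEST", firmware: "1.0.2" });

console.log(docs.length); // 2 documents, two different shapes
```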

As described previously, I’m using the Node JS Web Starter boilerplate as my starting point. I’ve previously added the Twilio service; now, to add Mongo, I simply select the MongoLab service from the Data Management set of services on the Bluemix console and add it to my application.

Screen Shot 2014-08-06 at 15.55.50

When you create the MongoLab service for the app, Bluemix provides a link to the MongoLab admin page. The nice thing about MongoLab as a service provider is that it gives you user-friendly tools for creating Collections, reviewing documents and so on. I created a collection in there called easyFiddle using the web console.

Screen Shot 2014-08-06 at 16.01.55

Having configured the Mongo Collection, the next step is to make sure that the Mongo libraries are available to the Node JS environment. As with Twilio before, we simply make sure we have an entry in the package.json file.

   "name": "NodejsStarterApp",
   "version": "0.0.1",
   "description": "A sample nodejs app for BlueMix",
   "dependencies": {
      "mongodb": "1.4.6",
      "express": "3.4.7",
      "twilio": "^1.6.0",
      "jade": "1.1.4"
   "engines": {
      "node": "0.10.26"
   "repository": {}

Just as before, Bluemix will handle the installation of the packages for us when we push the updates to the server.

Within our code, we now need to instantiate the Mongo driver objects with the credentials generated by Bluemix for the MongoDB instance running in MongoLabs. Bluemix supplies the credentials for connection via the VCAP_SERVICES environment variable.

var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
var mongo = services['mongolab'][0].credentials;

We will reference the mongo object to retrieve the credentials when we connect to the database.

As I did with Twilio, I am using a simple HTTP-based service that will in this case create an audit record in a database. I’m using Express again (as described previously), together with the same basic authentication scheme. My service works on HTTP GET requests to /audit with two query parameters device and event.

// Leave out the auth parameter if you're not using an 
// authentication scheme
app.get('/audit', auth, function(req, res) {
   var theDevice = req.param("device");
   var theEvent = req.param("event");

Now it’s a case of connecting to Mongo, and inserting a JSON object as a document to contain the two parameters.

   mongodb.MongoClient.connect(mongo.uri, function(err, db) {
      // We'd put error handling in here -- simply check if 
      // err is set to something
      var collection = db.collection('easyFiddle');
      var doc = {
         "device": theDevice, 
         "event": theEvent, 
         "date": new Date()
      };
      collection.insert(doc, {w:1}, function(err, result) {
         // Again, we'd put error handling in here
         res.send("Audit record created");
      });
   });
});
And that’s it. We can now create an audit entry using our browser if we choose, with a URL that looks like:

I’ve added other services using the same method to variously query and delete all the records in the Collection. Whilst I’ll not include them all here, note that the syntax for deleting all the records in a collection is a bit non-obvious — the examples show you how to delete a record matching a given key/value pair, but are less clear on how to delete them all. You do so simply by supplying a null instead of a name/value pair in the method call:

collection.remove(null, {w:1}, function(err, result) {
   // Error handling
   // ...
});

Note that the result variable will contain the number of records deleted.

Hopefully this posting has helped get you going anyway. A great resource to help you navigate your way around the Node JS API for Mongo can be found in the MongoDB documentation.

Sending SMS messages using Twilio and Bluemix

I’ve been tinkering with an Internet of Things project at home for a while which I’ll write up in due course, but in the course of doing so have knocked up a few useful fragments of function that I thought I’d share in case other people need them. The first of these is a simple Node.js app to send an SMS message via Twilio using IBM Bluemix.

There’s lots of material on Twilio and Bluemix but by way of a very rapid summary, Twilio provide nice, friendly APIs over telephony-type services (such as sending SMS messages), and Bluemix is IBM’s Cloud Foundry-based Platform-as-a-Service offering to enable developers to build applications rapidly in the cloud. Twilio have created a service within Bluemix that developers can pick up and use to enable their applications with the Twilio services. One of the things I wanted for my application was a way of notifying me that something had happened, and a text message suited my needs nicely. Twilio provide a free trial service with a few restrictions which you can upgrade once you want to do something serious.

To begin with, I created myself a Node application on Bluemix using the Node JS Web Starter application boilerplate provided:

My approach was to create a simple HTTP-based service that I could invoke with the destination phone number and the message itself as parameters. To make the Twilio service available to my Node application, it was simply a case of adding the service to my application in Bluemix. Twilio is listed as one of the Mobile services:

Once you have added the Twilio service, you configure it in Bluemix by providing the Account SID and Auth Token values that you find on the account details page once you have registered and logged in to Twilio.

The Node JS Web Starter boilerplate creates a simple template for a web server that serves up pages using the Express framework on top of Node. Express is handy, in that it provides a useful framework for handling HTTP requests, so I decided to stick with it for my HTTP service. The first change I needed to make to the boilerplate was to add a reference to Twilio in the package.json file so that the modules would be available to my code.

   "name": "NodejsStarterApp",
   "version": "0.0.1",
   "description": "A sample nodejs app for BlueMix",
   "dependencies": {
      "mongodb": "1.4.6",
      "express": "3.4.7",
      "twilio": "^1.6.0",
      "jade": "1.1.4"
   "engines": {
      "node": "0.10.26"
   "repository": {}

When you push your updated code to Bluemix, Bluemix automatically does the npm install to go and fetch the modules based on the package.json.

Within the app, you then need to set up the Twilio package ready for sending messages. First, we need to require the Twilio package so we can access the service from our code, and then retrieve the Account SID and Auth Token values configured in Bluemix from the VCAP_SERVICES environment variable that Bluemix provides to the Node runtime.

var twilio = require('twilio'); // Twilio API
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
var twilioSid, twilioToken;
services['user-provided'].forEach(function(service) {
   if ( == 'Twilio-ph') {
      twilioSid = service.credentials.accountSID;
      twilioToken = service.credentials.authToken;
   }
});
Note that Twilio-ph is the name I gave to the Twilio service when I added it to my application in Bluemix, yours may vary so remember to change it if different.

The environment is now set up, so now we need to create our HTTP handler using Express to form the basis of our service. I’ve added Basic Authentication to my handler to prevent spam to my Twilio account; this is nice and easy to do using Express.

// Configured environment variables to protect Twilio requests
var USER = process.env.USER;
console.log("USER = "+USER);
var PASSWORD = process.env.PASSWORD;
console.log("PASSWORD = "+PASSWORD);

// Basic authentication to restrict access to my services.
var auth = express.basicAuth(USER, PASSWORD);

I’ve used environment variables that I’ve set in the Bluemix environment, clearly in a production environment one would use a proper directory. You can set your own environment variables within your application by going to the Runtime view of your application and selecting the USER-DEFINED button on the main panel.

The HTTP handler simply looks for the URI pattern /twilio as a GET request and reads the destination telephone number and the message content as query parameters. The auth object passed in applies the Basic Authentication rule defined previously.

app.get('/twilio', auth, function(req, res) {
   var toNum = req.param("number");
   var twilioMessage = req.param("message");
   var fromNum = 'your Twilio number';

Your Twilio number can be found on the Twilio account details page.

Twilio make it really easy to send a message using their twilio.RestClient API wrapper class. You simply instantiate an instance of the twilio.RestClient class, and invoke the sendMessage method with a JSON object containing parameters describing who the message is to, the number it is sent from and the message to be included in the SMS. You provide a callback function that is invoked when the request is completed.

   var client = new twilio.RestClient(twilioSid, twilioToken);
   client.sendMessage({
         to: toNum,
         from: fromNum,
         body: twilioMessage
      },
      function(err, message) {
         if (err) {
            console.error("Problem: "+err+": "+message);
            res.send("Error! "+err+": "+message);
         } else {
            console.log("Twilio message sent from "+fromNum+
               " to "+toNum+": "+twilioMessage);
            res.send("Message sent");
         }
      });
});
Once deployed, the service can be invoked with a URL in the form described below:

If invoked programmatically the authentication credentials will be required in the HTTP header. If tried from a browser, the browser will prompt for a username and password combination. Ultimately you’ll receive an SMS on your phone:
twilio sample
And there it is. I can now trigger SMS messages either from my NodeRED flows, browser or any other apps I might write.
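For the programmatic route, the only wrinkle is constructing the Basic Authentication header, which is just the base64 encoding of user:password. Here is a sketch, where the host, credentials and phone number are all placeholder values:

```javascript
// Placeholder values -- substitute your own credentials and numbers
var user = "myuser", password = "mypassword";
var number = "+441234567890", message = "Hello from Node";

// Basic Auth header: "Basic " + base64("user:password")
var authHeader = "Basic " + new Buffer(user + ":" + password).toString("base64");
var path = "/twilio?number=" + encodeURIComponent(number) +
           "&message=" + encodeURIComponent(message);

console.log(authHeader); // Basic bXl1c2VyOm15cGFzc3dvcmQ=

// The request itself would then look something like (host is a placeholder):
// require('http').get({ host: "myapp.mybluemix.net", path: path,
//    headers: { "Authorization": authHeader } }, function(res) { /* ... */ });
```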

Fixing jQuery Mobile headers in a Worklight app on iOS 7

One of the fun bits (depending on your inclination) of working cross-platform is discovering and mitigating the nuanced differences as you try your app on different devices. One such difference in iOS 7 is the transparency of the iOS status bar that contains the wifi strength, battery life and so on.

iOS header area

If you’re not including a header in your app then this won’t make a whole lot of difference to you, but if you are, you’ll find the iOS status bar overlays your header which can mess up your carefully placed buttons, iconography and header text.

I’ve come up with a simple workaround for jQuery Mobile running in a Worklight environment that I’ve posted here for ease of reuse and in case anybody else is looking for similar. The same principle should apply equally in a vanilla Cordova app too.

My example uses a simple jQuery Mobile header on a page.

<div data-role="page" id="mypage">
   <div data-role="header" data-position="fixed">
      <a href="#home"
         class="ui-btn ui-icon-back ui-btn-icon-notext ui-corner-all"></a>
      <h1>My heading</h1>
   </div>
   <div data-role="main" class="ui-content">
      Blah blah
   </div>
</div>

The overlap of the status bar is 20 points, so when the app renders we need to first detect whether we’re on a version of iOS that needs adjusting, then fix the elements contained in the header to allow for the status bar height.
For the purposes of demonstration I’ve simplified the below just to test for the iOS version assuming an Apple device, but of course you can add further tests for other platforms.

//... head contents
function onDeviceReady() {
   if (parseFloat(window.device.version) >= 7.0) {
      $('H1').each(function() {
         // `this` is the h1, the padding goes on the
         // containing header div.
         $(this).parent().css("padding-top", "20px");
         // sort any buttons/icons rendered from A tags too
         $(this).siblings('A').css("margin-top", "20px");
      });
   }
}
// Fire when Cordova is up and ready
document.addEventListener("deviceready", onDeviceReady, false);
//... rest of head contents

The logic of the script searches for h1 tags on the assumption that they will be used for header text. If your interface is styled differently, you might want to come up with a different “eye-catcher” tag or attribute so jQuery can select all the header nodes in the app. Having found the h1 tags, it then adjusts the containing div nodes to pad out from the top by the required amount. My application has “a” tags for buttons in the header, which a bit of experimentation showed were not being adjusted along with the containing div, so I’ve adjusted them directly.

Notice that I’ve used CSS padding for the h1 – this means that the additional offsetting will be filled with the natural background scheme of the header, rather than a blank white oblong which would occur if margin were used. The jQuery icons for my back link get distorted by tinkering with the padding, so I’ve used margin which works just fine as they layer over the top of the heading colour scheme.

Comparing data from two consecutive events in WebSphere Business Events

A commonly occurring event processing scenario concerns the situation where we have two consecutive events and we wish to compare a given value between the two of them. Consider a scenario where we have a smart metering solution and we want to make sure that the meter is functioning properly. We might reasonably want to check that the reading we just received is not less than the previous reading, as an indicator that the meter is operating properly. This posting describes the WebSphere Business Events (WBE) logic I created to implement this scenario.

For this scenario we are primarily concerned with one event, Data arrived received from a Meter touchpoint defined in WBE.

The summary instance

In WBE, we have two possible options for comparing values in the current event with those of a previous instance of the same event. We can use an accumulating array to build a list of data received as each event arrives. This is useful for situations where we want to examine data gathered across a number of received events. In this scenario, we can take an alternative approach and use a summary instance intermediate object.

The summary instance is a mechanism for sharing state data across two events received by WBE. Rather than store state in the form of a growing list of data items, the summary instance is one single instance that can be updated with each received event. This approach is useful when we want to maintain simpler state information, such as a running total or in this scenario the previous value we received for the given meter reading.

In this scenario, we simply wish to maintain the previous meter reading alongside the current meter reading such that we can subsequently create an interaction set that compares the two values in WBE Design. The summary instance intermediate object will be visible within the WBE Design tool, and so the business user is free to define exactly how they wish to compare the two values using a filter.

We begin by creating an intermediate object ReadingTracker just as we would normally with two fields, currentReading and previousReading, both of data type Real.


We now then modify the scope of this object to be summary instance by right-clicking and selecting Intermediate Object Properties… then navigating to the Object tab on the dialog box.


Using the drop-down box provided, we simply set the Context scope: field to Summary Instance and click OK.

Now that we have created the summary instance object, we need to populate its contents. We do this using a field constructor on the source event object.

Populating the summary instance from the event

Having created an intermediate object that will persist across multiple events, the next step is to create the logic to populate its fields when events are received. In the simplest case, we map a field in the event object to a corresponding field in an intermediate object, for example the unique identifier for the meter. In WBE, we can add more sophisticated logic to control exactly how the contents of an intermediate object are populated. This is accomplished using a field constructor and fragments of JavaScript. The field constructor defines how the fields of the intermediate objects used to evaluate the business logic in interaction sets are constructed. By embedding fragments of JavaScript, we are able to inject programmatic logic at this point, before any of the business logic is computed by WBE.

In this scenario, we will use this mechanism to update and maintain the summary instance when we receive a Data arrived event, such that when our interaction set is computed by WBE, the previousReading and currentReading fields accurately reflect the previous and current values returned from the meter. The source event in this case defines a single event object field, reading.


We associate the currentReading field in our ReadingTracker intermediate object with the reading field in our event object as normal, simply using drag and drop to join the two fields together. For the previousReading field, we will add some JavaScript logic that will return the previous reading we had stored in our intermediate object, or zero if this is the first reading we have received. To do this, we add a field constructor for the previousReading field.


We now see that the field has been added, and we can use the twistie to expand the detail of how the field is to be constructed. Since we wish to add JavaScript logic to compute this field, we select JavaScript from the drop-down list for the Type: field.


The JavaScript logic we need evaluates the correct value to return for the previousReading field. The logic we enter into the Expression field is as follows:

var dblReturn = 0;
if (ReadingTracker.currentReading != null) {
   dblReturn = ReadingTracker.currentReading;
}
dblReturn;

This logic checks the existing summary instance to see if it has a currentReading value set. If so, this will be the reading gathered from the previous event, and hence we return it as the value of previousReading, since in the context of the current event, this is the previous reading. This is the key to the logic: because we are examining the value of currentReading during construction of the fields, we know that any state in our summary instance must be what remained from last time.

Note that we also have a safety guard in there: if the summary instance has currentReading set to null, we return zero, since we have not had a reading before.
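The ordering is worth demonstrating. Here is a plain-JavaScript rendition (an illustrative sketch outside WBE; the onDataArrived function is an assumption standing in for WBE's field construction) of what happens as two events arrive: previousReading is computed from whatever the summary instance still holds from the last event, before currentReading is overwritten with the new value.

```javascript
// Single shared summary instance, as in the ReadingTracker object above.
var ReadingTracker = { currentReading: null, previousReading: 0 };

function onDataArrived(eventReading) {
  // Field constructor for previousReading, with the null guard from the text:
  // use the reading left over from the last event, or zero the first time.
  var dblReturn = 0;
  if (ReadingTracker.currentReading != null) {
    dblReturn = ReadingTracker.currentReading;
  }
  ReadingTracker.previousReading = dblReturn;

  // Simple mapped field: currentReading comes from the event's reading field.
  ReadingTracker.currentReading = eventReading;
}

onDataArrived(120.0); // first event: previousReading defaults to 0
onDataArrived(125.5); // second event: previousReading is now 120.0
```

After the second event, the tracker holds both values side by side, which is exactly the state the interaction set needs in order to compare them.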

To complete the picture, we can add other computed fields in our intermediate objects that make use of these values. In this case we are going to offer a readingDifference field alongside the readingValue field on our Reading intermediate object, giving the business user the difference between the recent and previous values.
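The computed field amounts to a simple subtraction. As a sketch (the field names follow the text; the surrounding function is purely illustrative, not WBE API):

```javascript
// Build the Reading intermediate object's computed fields from the two
// values maintained by the summary instance.
function buildReading(currentReading, previousReading) {
  return {
    readingValue: currentReading,
    // Negative when the meter has gone backwards.
    readingDifference: currentReading - previousReading
  };
}

var reading = buildReading(98.0, 100.0);
// reading.readingDifference is -2, flagging a decreasing reading
```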


Sample interaction set and filter

Using the fields created previously we can now construct an interaction set and filters to make use of them. In this sample we are checking for meter tampering in the form of a decreasing meter reading.


The Meter reading decreasing filter simply checks the readingDifference field against zero: if it is less than zero, the more recent reading was less than the previous reading.
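Stripped of the tooling, the filter's test reduces to a single comparison. A minimal sketch, assuming the filter sees the readingDifference field computed earlier (the function name is illustrative):

```javascript
// The "Meter reading decreasing" check: true when the meter went backwards,
// which in this scenario is treated as a possible tampering indicator.
function meterReadingDecreasing(reading) {
  return reading.readingDifference < 0;
}

meterReadingDecreasing({ readingDifference: -2 }); // true: possible tampering
meterReadingDecreasing({ readingDifference: 5 });  // false: meter advancing
```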