Parsing LLAP packets using NodeRED on the Raspberry Pi

It’s taken a few weeks, but here is the next instalment in my promised series of articles. Part of the scenario requires me to detect a particular event using a remote sensor and relay it on to a variety of other systems via my Raspberry Pi. The sensor itself is assembled using the rather brilliant Ciseco Wireless Inventor’s Kit, which provides all you need for prototyping remote sensors and transmitting data over a short-range radio link back to a waiting Pi.

It’s a pretty simple yet powerful set of kit. You build your sensor using the breadboard and XinoRF module supplied which communicates with a Slice of Radio transceiver module you attach to the Pi’s GPIO pins. Your applications then send and receive data to and from the remote sensor via the Pi’s serial port and you’re away. The protocol used for the serial communications is designed by Ciseco and is called Lightweight Logical Application Protocol (LLAP). It takes the form of a simple fixed-field text datagram and as such can be parsed by pretty well any programming language that can interrogate the serial port.
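
For reference, here’s my rough sketch of how a packet breaks down. This is my reading of the format rather than the official definition, so do check it against the Ciseco documentation for your own kit:

// Anatomy of a 12-character LLAP packet (using the D03 example from later on):
//
//   a  --  D03HIGH--
//   |  |   |
//   |  |   +-- nine characters of data, padded to length with '-' characters
//   |  +------ two-character device/station ID ('--' here)
//   +--------- every packet starts with a lower-case 'a'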

In my case I’m using NodeRED to parse the LLAP packets so I can invoke other services in response to an event triggered via a sensor. The event is detected by a switch (a pressure mat in fact) that sets the D03 pin on the XinoRF module to be “high”. The nice thing about how the XinoRF works is that it operates in an event-driven way — i.e. you don’t need to poll to check the state of the pin, it transmits when there is a change in the state from “low” to “high” or vice versa. The other XinoRF pins perform different functions in different ways, but I thought I’d show how to at least send and receive so others can tailor to their scenario.

I’m assuming here that folks are familiar with the NodeRED concepts, so will get straight into the meat of how it works.

Configuring a serial input node to receive LLAP packets into a flow

The first thing to do is configure the serial input node in NodeRED. The settings for my node are shown below.

[Screenshot: serial input node settings]

You can confirm the serial port name using the RasWIK documentation and tools. A particular point to note is the Split input setting, which determines the point at which the input node offers up data to the next node in the flow. This can be set to split on a delimiter, or on a timeout, whereby if no data is received within a certain period the transmission is considered complete. I discovered that the latter timeout approach worked with a setting of 250 milliseconds, though it does introduce the possibility of multiple LLAP packets being received in one go, which we now need to address within the next node in our flow.

Creating a function node to segment multiple LLAP packets into individual messages

For greatest efficiency we want all subsequent nodes in the flow to be designed to ingest a single LLAP packet per message. To ensure this downstream behaviour is honoured, we create a function node after our serial input node to check for multiple packets and segment them into individual messages if so. The way NodeRED is designed to work, we can return an array of messages from the function node and it will cycle through each message in turn, invoking the remainder of the flow as if triggered by a single message.
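
As an illustration (the exact content will of course depend on your sensor), a payload arriving after the 250 millisecond timeout might contain two packets back to back, something like:

// A hypothetical payload containing two concatenated LLAP packets,
// e.g. the pressure mat being pressed and then released:
//    "a--D03HIGH--a--D03LOW---"
// After the function node below, this becomes two separate messages:
//    { payload: "a--D03HIGH--" }
//    { payload: "a--D03LOW---" }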

The first thing we do in our function node is test for the simple case of a single packet, which we can do by checking the length of the payload, LLAP packets being 12 characters in length.

var instring = msg.payload;
if (instring.length<=12) {
   console.log("Single packet: "+instring);
   return msg;
} 

If the length of the payload is longer, it means we have multiple packets so we need to iterate over the payload, extracting each packet into an array of new messages to return at the end of the function.

else {
   var packets = [];
   console.log("Multiple packets: "+instring);
   while (instring.length>=12) {
      var packet = instring.substr(0, 12);
      console.log("Unspliced packet: "+packet);
      packets[packets.length] = {payload:packet};
      instring = instring.substr(12);
      console.log("instring is now length "+instring.length);
   }
   return [packets]; 
}
return null;

An important point to note (see the return [packets] statement above) is that although we have created an array of payload objects, NodeRED requires that we in turn encase that array in another array when we return it for processing downstream.

Parsing the LLAP packet

In our subsequent function node, we can now parse the single LLAP packet to determine whether we have a “high” event for the D03 pin we’re interested in. The format of an LLAP packet in this case would be:

a--D03HIGH--

Our code simply needs to check the packet that is received, and parse it to see if it corresponds to the above event. Note that in a simple scenario we could just do a direct string comparison between what we’ve received and a template packet such as that above, but if we have multiple stations (denoted by characters 1 and 2, "--" in the example) then we might need more complex flow control logic. My function node simply emits the word “HIGH” if we’ve got a D03 “high” event from the XinoRF.

var thePayload = msg.payload;
var pin = thePayload.substring(3,6);
if (pin == "D03") {
   // Now test if it was HIGH
   if (thePayload.substring(6,10) == "HIGH") {
      console.log("HIGH on D03");

Now in my scenario, the switch (pressure mat) could toggle on and off in quick succession, which would technically send a number of LLAP packets back to the Pi. Within a certain time window I’m only interested in a single “high” event, so I’ve added some logic using NodeRED’s context to record when I last received a “high” event, only taking action if it was more than a minute ago.

      var timeNow = new Date();
      console.log("timeNow is "+timeNow);
		
      if (context.lastHigh) {
         console.log("context.lastHigh is "+context.lastHigh);
         var lastTimeMillis = context.lastHigh.getTime();
         var currentTimeMillis = timeNow.getTime();
         var delta = currentTimeMillis - lastTimeMillis;
         if (delta < 60000) { 
            console.log("Less than a minute since last HIGH: "+
               delta+" ms.");
            return null;
         }
         console.log("Over a minute, actually "+delta+" ms.");
      } else {
         console.log("Continuing.");
      }
      context.lastHigh = timeNow;
      return {payload: "HIGH"}
   }
}
return null;

So, if we get a “high” event and we haven’t seen one for at least a minute, the subsequent nodes in the flow will receive a message with a payload of “HIGH”.

And with that we’re free to continue our processing, which could be as simple as a debug node. Here’s a snapshot of my NodeRED workspace — in this case I’m using the “high” event to trigger a Tweet.
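
If you want the tweet to say something more descriptive than “HIGH”, a minimal sketch of a function node you could drop in front of the Twitter output node (which tweets whatever arrives in msg.payload) might be:

// Hypothetical function node: turn the "HIGH" event into readable tweet text
msg.payload = "Pressure mat triggered at " + new Date().toString();
return msg;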

[Screenshot: the NodeRED workspace with the LLAP flow]

Configuring the Raspberry Pi to share a Mac’s internet connection

One of the great things you can do with a Mac is share your internet connection, and I’ve found this particularly invaluable for doing development with my Raspberry Pi, especially when I want to take it somewhere to show people. Normally it connects to my home router, where I can configure a static IP address for it so I know where to find it and off I go; not so easy when you’re mobile.

There are a number of articles out there claiming to show you how to make it all work when you Google for it, but it still took me a fair bit of puzzling and piecing the bits together to come up with something that worked. So, to save you that trouble, I’m recording my configuration here.

First, set up your Ethernet adapter

The mechanism I’m using to connect my Mac (MacBook Pro Retina running OS X Mavericks 10.9.5) is via good old Ethernet. For this you’ll need the (rather expensive) Thunderbolt to Ethernet adapter if your MacBook (like mine) doesn’t have an onboard Ethernet socket. You can get cheaper non-Apple alternatives, but I’ve no experience of whether they work or not. Once plugged in, open up System Preferences, and then Network. You’ll notice your Ethernet port appears in the list of adapters on the left-hand side. You can give the configuration a name of its own; I’ve imaginatively called mine “Raspberry Pi Ethernet (home)” as you can see below.

[Screenshot: Ethernet adapter configuration in Network preferences]

Now of course DHCP would be fine, and whilst you can probably guess which IP address the Mac’s DHCP server will give you each time, it’s more predictable to have a static IP defined, particularly when running your Pi in “headless” mode. There are therefore some settings we need to put in place to make this hang together, not least a static IP configuration for the Mac as it will appear to the Pi, so we can use it as our IP gateway. As in the above picture, use the following settings:

  • Configure IPv4: Manually
  • IP Address: 192.168.2.1
  • Subnet mask: 255.255.255.0

Because we’re using a static configuration, we now need to set the DNS server settings ourselves. Without any DNS servers set, the Ethernet connection (and therefore the Pi) will not be able to resolve any of the domain names we throw at it.

Click the Advanced… button, then the DNS tab and then the + button to add a DNS server entry.

[Screenshot: DNS server settings for the Ethernet adapter]

You can simply lift the DNS settings from your Wifi configuration (provided of course your wifi is already up and happily resolving!) or indeed use some public DNS server settings. I’ve got a mix of public DNS, a couple from work and my home router. Click OK when you’re done to save the settings.

Now turn on Internet Sharing

Having set up the Ethernet port, we now need to tell the Mac we want to share the wifi internet connection with devices connected via Ethernet. It couldn’t be easier — back to System Preferences, select Sharing and tick Internet Sharing. Select Wi-Fi as the connection to share from and Thunderbolt Ethernet as the port to share to, as per the screenshot below.

[Screenshot: Internet Sharing settings]

Now connect and configure the Pi

I was surprised how resilient the local configuration between the Mac and the Pi was while I was fiddling with these next steps. Getting the Mac and the Pi to see each other is the easy bit; the harder bit was making the Pi able to see the internet beyond. Apple have made things easier still by making the Ethernet adapter work out when you’re using it to connect to another machine and automatically flip into “crossover” mode, so you can connect the Pi with a bog-standard Ethernet cable. Connect the Pi to the Mac’s Ethernet adapter and start it up.

The first time around, the Pi will be assigned an IP address of 192.168.2.2 by the Mac’s DHCP server (assuming your Pi is set up by default for DHCP on the Ethernet port). I’ve not tested whether that ever changes, but to make absolutely sure (and certainty is definitely preferred when doing demos) we will configure the Ethernet adapter with static settings based on the Mac’s configuration. Once the Pi is started, connect via SSH from the Mac and, once logged in at the command prompt, enter the following command:

sudo vim.tiny /etc/resolv.conf

This will open the Vim editor (same commands as Vi). You want a single line in it as follows:

nameserver 192.168.2.1

This tells the Pi to use the Mac as its DNS server, rather like you might expect with a connection to a home router.

Having saved the changes, we now need to set the static configuration for the Ethernet port. We’ll use Vim again; this time enter the following command:

sudo vim.tiny /etc/network/interfaces

This time there are a few more lines to add. If you want to make sure you don’t forget your existing Ethernet settings, you can comment them out with a hash. Otherwise, your settings should look as follows:

auto eth0
iface eth0 inet static
address 192.168.2.2
netmask 255.255.255.0
gateway 192.168.2.1

This tells the Pi to adopt the 192.168.2.2 address permanently using the Mac as the IP gateway with corresponding subnet mask.

You’re now all set. For good measure, restart the Pi and all the changes should stick.
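
As a quick sanity check from the Pi’s command line, you can test each step of the chain in turn (the addresses below assume the configuration above):

ping -c 3 192.168.2.1      # can we reach the Mac, our gateway?
ping -c 3 8.8.8.8          # can we route out to the internet via the Mac?
ping -c 3 www.google.com   # is DNS resolution working through the Mac?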

The proof is in the pudding (or Pi)

In my case I’ve been using TightVNC to access the Pi’s desktop, so as a crude test that everything was hunky dory I simply opened up the Midori browser and tried to hit the public internet. For posterity, everything looks as it should when I hit this blog:

[Screenshot: this blog loaded in the Pi’s browser]

Simple database operations via HTTP with Node JS, MongoDB and Bluemix

In my previous posting I mentioned how I’d planned to harvest some of the fragments from a home project that might be useful to others. This time I thought I’d capture the simple mechanism I created to keep an audit database for my application using MongoDB running on Bluemix.

I opted for Mongo due to its simplicity and close integration with JavaScript — a perfect option for creating something quickly and easily to run in a Node JS environment. Mongo is a NoSQL database, meaning that you don’t have to define a specific schema for your data; you simply store what you need as a JSON object in the form of key/value pairs. This means you can store and retrieve a wide variety of different data objects in the same database, and aren’t constrained by decisions made early on in the project if your needs change. Whilst it wasn’t a design point of mine, Mongo is also designed to scale.

As described previously, I’m using the Node JS Web Starter boilerplate as my starting point. I’ve previously added the Twilio service; now, to add Mongo, I simply select the MongoLab service from the Data Management set of services in the Bluemix console and add it to my application.

[Screenshot: the MongoLab service in the Bluemix catalogue]

When you create the MongoLab service for the app, Bluemix provides a link to the MongoLab admin page. The nice thing about MongoLab as a service provider is that it gives you user-friendly tools for creating Collections, reviewing documents and so on. I created a collection in there called easyFiddle using the web console.

[Screenshot: the easyFiddle collection in the MongoLab web console]

Having configured the Mongo Collection, the next step is to make sure that the Mongo libraries are available to the Node JS environment. As with Twilio before, we simply make sure we have an entry in the package.json file.

{
   "name": "NodejsStarterApp",
   "version": "0.0.1",
   "description": "A sample nodejs app for BlueMix",
   "dependencies": {
      "mongodb": "1.4.6",
      "express": "3.4.7",
      "twilio": "^1.6.0",
      "jade": "1.1.4"
   },
   "engines": {
      "node": "0.10.26"
   },
   "repository": {}
}

Just as before, Bluemix will handle the installation of the packages for us when we push the updates to the server.

Within our code, we now need to instantiate the Mongo driver objects with the credentials generated by Bluemix for the MongoDB instance running on MongoLab. Bluemix supplies the credentials for connection via the VCAP_SERVICES environment variable.

var mongodb = require('mongodb'); // MongoDB driver, used below to connect
...
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
...
var mongo = services['mongolab'][0].credentials;

We will reference the mongo object to retrieve the credentials when we connect to the database.
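
For context, the relevant fragment of VCAP_SERVICES looks something like the following (the values shown are placeholders; the real entry will contain the host, credentials and database name generated by MongoLab):

{
   "mongolab": [
      {
         "name": "MongoLab-xy",
         "credentials": {
            "uri": "mongodb://user:password@host.mongolab.com:port/dbname"
         }
      }
   ]
}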

As I did with Twilio, I am using a simple HTTP-based service that will in this case create an audit record in a database. I’m using Express again (as described previously), together with the same basic authentication scheme. My service works on HTTP GET requests to /audit with two query parameters device and event.

// Leave out the auth parameter if you're not using an 
// authentication scheme
app.get('/audit', auth, function(req, res) {
   var theDevice = req.param("device");
   var theEvent = req.param("event");

Now it’s a case of connecting to Mongo, and inserting a JSON object as a document to contain the two parameters.

   mongodb.MongoClient.connect(mongo.uri, function(err,db) {
      // We'd put error handling in here -- simply check if 
      // err is set to something
      var collection = db.collection('easyFiddle');
		
      var doc = {
         "device": theDevice, 
         "event": theEvent, 
         "date": new Date()
      };
		
      collection.insert(doc, {w:1}, function(err, result) {
         // Again, we'd put error handling in here
         res.json(doc);
      });			
   });
});

And that’s it. We can now create an audit entry using our browser if we choose, with a URL that looks like:

http://appname.mybluemix.net/audit?device=TEST&event=TESTING

I’ve added other services using the same method to variously query and delete all the records in the Collection. Whilst I’ll not include them all here, note that the syntax for deleting all the records in a collection is a bit non-obvious — the examples show you how to delete a record matching a given key/value pair, but are less clear on how to delete them all. You do so simply by supplying a null instead of a name/value pair in the method call:

collection.remove(null, {w:1}, function(err, result) {
   // Error handling
   // ...
});

Note that the result variable will contain the number of records deleted.
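
For completeness, a sketch of the corresponding query service (a hypothetical /audits path, using the same driver API as above) might look something like this, returning every document in the collection as JSON:

app.get('/audits', auth, function(req, res) {
   mongodb.MongoClient.connect(mongo.uri, function(err, db) {
      // As before, check err here in real code
      var collection = db.collection('easyFiddle');
      collection.find().toArray(function(err, docs) {
         res.json(docs);
      });
   });
});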

Hopefully this posting has helped get you going anyway. A great resource to help you navigate your way around the Node JS API for Mongo can be found in the MongoDB documentation.

Sending SMS messages using Twilio and Bluemix

I’ve been tinkering with an Internet of Things project at home for a while which I’ll write up in due course, but in the course of doing so have knocked up a few useful fragments of function that I thought I’d share in case other people need them. The first of these is a simple Node.js app to send an SMS message via Twilio using IBM Bluemix.

There’s lots of material on Twilio and Bluemix but by way of a very rapid summary, Twilio provide nice, friendly APIs over telephony-type services (such as sending SMS messages), and Bluemix is IBM’s Cloud Foundry-based Platform-as-a-Service offering to enable developers to build applications rapidly in the cloud. Twilio have created a service within Bluemix that developers can pick up and use to enable their applications with the Twilio services. One of the things I wanted for my application was a way of notifying me that something had happened, and a text message suited my needs nicely. Twilio provide a free trial service with a few restrictions which you can upgrade once you want to do something serious.

To begin with, I created myself a Node application on Bluemix using the Node JS Web Starter application boilerplate provided.

My approach was to create a simple HTTP-based service that I could invoke with the destination phone number and the message itself as parameters. To make the Twilio service available to my Node application, it was simply a case of adding the service to my application in Bluemix. Twilio is listed as one of the Mobile services.

Once you have added the Twilio service, you configure it in Bluemix by providing the Account SID and Auth Token values that you find on the account details page once you have registered and logged in to Twilio.

The Node JS Web Starter boilerplate creates a simple template for a web server that serves up pages using the Express framework on top of Node. Express is handy, in that it provides a useful framework for handling HTTP requests, so I decided to stick with it for my HTTP service. The first change I needed to make to the boilerplate was to add a reference to Twilio in the package.json file so that the modules would be available to my code.

 
{
   "name": "NodejsStarterApp",
   "version": "0.0.1",
   "description": "A sample nodejs app for BlueMix",
   "dependencies": {
      "mongodb": "1.4.6",
      "express": "3.4.7",
      "twilio": "^1.6.0",
      "jade": "1.1.4"
   },
   "engines": {
      "node": "0.10.26"
   },
   "repository": {}
}

When you push your updated code to Bluemix, Bluemix automatically does the npm install to go and fetch the modules based on the package.json.

Within the app, you then need to set up the Twilio package ready for sending messages. First, we need to require the Twilio package so we can access the service from our code, and then retrieve the Account SID and Auth Token values configured in Bluemix from the VCAP_SERVICES environment variable that Bluemix provides to the Node runtime.

var twilio = require('twilio'); // Twilio API
...
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
...
var twilioSid, twilioToken;
services['user-provided'].forEach(function(service) {
   if (service.name == 'Twilio-ph') { 
      twilioSid = service.credentials.accountSID;
      twilioToken = service.credentials.authToken;
   }
});

Note that Twilio-ph is the name I gave to the Twilio service when I added it to my application in Bluemix; yours may vary, so remember to change it if different.

The environment is now set up, so we now need to create our HTTP handler using Express to form the basis of our service. I’ve added Basic Authentication to my handler to prevent spam to my Twilio account; this is nice and easy to do using Express.

// Configured environment variables to protect Twilio requests
var USER = process.env.USER;
console.log("USER = "+USER);
var PASSWORD = process.env.PASSWORD;
console.log("PASSWORD = "+PASSWORD);

// Basic authentication to restrict access to my services.
var auth = express.basicAuth(USER, PASSWORD);

I’ve used environment variables that I’ve set in the Bluemix environment; clearly, in a production environment one would use a proper directory. You can set your own environment variables within your application by going to the Runtime view of your application and selecting the USER-DEFINED button on the main panel.

The HTTP handler simply looks for the URI pattern /twilio as a GET request and reads the destination telephone number and the message content as query parameters. The auth object passed in applies the Basic Authentication rule defined previously.

app.get('/twilio', auth, function(req, res) {
   var toNum = req.param("number");
   var twilioMessage = req.param("message");
   var fromNum = 'your Twilio number';

Your Twilio number can be found on the Twilio account details page.

Twilio make it really easy to send a message using their twilio.RestClient API wrapper class. You simply instantiate an instance of the twilio.RestClient class, and invoke the sendMessage method with a JSON object containing parameters describing who the message is to, the number it is sent from and the message to be included in the SMS. You provide a callback function that is invoked when the request is completed.

   var client = new twilio.RestClient(twilioSid, twilioToken);
   client.sendMessage(
     {
         to: toNum,
         from: fromNum, 
         body: twilioMessage
      }, 
      function(err, message) {
         if (err) {
            console.error("Problem: "+err+": "+message);
            res.send("Error! "+err+": "+message);
            return;
         } else {
            res.send("Done!");
            console.log("Twilio message sent from "+fromNum+
            " to "+toNum+": "+twilioMessage);
         }
      }
   );
});

Once deployed, the service can be invoked with a URL in the form described below:

http://appname.mybluemix.net/twilio?number=number&message=mess

If invoked programmatically the authentication credentials will be required in the HTTP header. If tried from a browser, the browser will prompt for a username and password combination. Ultimately you’ll receive an SMS on your phone:
[Screenshot: sample SMS received from Twilio]
And there it is. I can now trigger SMS messages either from my NodeRED flows, browser or any other apps I might write.
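
For example, a minimal sketch of calling the service from another Node app using the built-in http module (the app name, credentials and phone number below are hypothetical) might be:

var http = require('http');

var options = {
   hostname: 'appname.mybluemix.net',
   path: '/twilio?number=' + encodeURIComponent('+441234567890') +
         '&message=' + encodeURIComponent('Hello from my app'),
   auth: 'myuser:mypassword' // sent as the Basic Authentication header
};

http.get(options, function(res) {
   console.log('Twilio service responded with status ' + res.statusCode);
});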

Fixing jQuery Mobile headers in a Worklight app on iOS 7

One of the fun bits (depending on your inclination) of working cross-platform is discovering and mitigating the nuanced differences as you try your app on different devices. One such difference in iOS 7 is the transparency of the iOS status bar that contains the wifi strength, battery life and so on.

[Screenshot: iOS header area]

If you’re not including a header in your app then this won’t make a whole lot of difference to you, but if you are, you’ll find the iOS status bar overlays your header which can mess up your carefully placed buttons, iconography and header text.

I’ve come up with a simple workaround for jQuery Mobile running in a Worklight environment that I’ve posted here for ease of reuse and in case anybody else is looking for similar. The same principle should apply equally in a vanilla Cordova app too.

My example uses a simple jQuery Mobile header on a page.

<div data-role="page" id="mypage">
   <div data-role="header" data-position="fixed" 
      data-fullscreen="false">
      <a href="#home" 
class="ui-btn ui-icon-back ui-btn-icon-notext ui-corner-all"
>Back</a>
      <h1>My heading</h1>
   </div>
   <div data-role="main" class="ui-content">
      Blah blah
   </div>
</div>

The overlap of the status bar is 20 points, so when the app renders we need to first detect whether we’re on a version of iOS that needs adjusting, then fix the elements contained in the header to allow for the status bar height.
For the purposes of demonstration I’ve simplified the below just to test for the iOS version assuming an Apple device, but of course you can add further tests for other platforms.

<head>
<!-- ... head contents -->
<script>
function onDeviceReady() {
   if (parseFloat(window.device.version) >= 7.0) {
      $('H1').each(function() {
         // `this` is the h1, the padding goes on the
         // containing header div.
         $(this).parent().css("padding-top", "20px");
         // sort any buttons/icons rendered from A tags too
         $(this).siblings('A').css("margin-top", "20px");
      });
   }
}
// Fire when Cordova is up and ready
document.addEventListener("deviceready", 
                           onDeviceReady, false);
</script>
<!-- ... rest of head contents -->
</head>

The logic of the script searches for h1 tags on the assumption that they will be used for header text. If your interface is styled differently, you might want to come up with a different “eye-catcher” tag or attribute so jQuery can select all the nodes for the headers in the app. Having found the h1 tags, it then adjusts the containing div nodes to pad out from the top by the required amount. My application has A tags for buttons in the header, which a bit of experimentation showed were not being adjusted along with the containing div, so I’ve adjusted them directly.

Notice that I’ve used CSS padding for the header div – this means that the additional offsetting will be filled with the natural background scheme of the header, rather than the blank white oblong which would occur if margin were used. The jQuery icons for my back link get distorted by tinkering with the padding, so for those I’ve used margin, which works just fine as they layer over the top of the heading colour scheme.

Mobile web frameworks, and other religious debates

It is an interesting litmus test of the maturity of any given technology trend as to when it starts to develop its own set of heated points of debate and argument. We had the “browser wars” of the late ’90s, then proprietary plug-ins versus AJAX/open web, and more recently which AJAX framework is “best”.

The rise of mobile apps as a party that everyone wants to be at has further amplified this frameworks debate, as the focus has evolved from AJAX on the desktop to the mobile platforms. A quick Google and you’ll find any number of fora debating the merits of jQuery Mobile vs Dojo Mobile vs Sencha Touch and so on.

So there are several, which one is best then?

In fact, participation in such debate in isolation is ultimately futile. That a particular topic becomes the subject of almost religious fervour in itself betrays that absolute truth is either very hard or impossible to prove. The key to finding an answer is understanding the context. What is best for one situation may not be best for another, and to suggest otherwise would do the asker of the question a disservice, assuming they are asking for help.

There are a number of considerations though that can help navigate to what best might be.

You’re at the mercy of consumers

Technical debate is all fine and good, but in the mobile world, we know that consumers will decide the success or failure of the app. A poor experience by the end user will ultimately be its undoing. The framework must be able to meet the experience expected by the users. This is of course a key factor in determining whether a native or mobile web/hybrid approach is applicable in the first place, but that is another discussion entirely.

Don’t forget also that user experience and aesthetics are two different things – nice transitions or shading will never rectify a fundamentally flawed user experience. Rejecting a framework purely because it apparently contains less bundled eye candy than alternatives still may not mean you’ve chosen wisely.

A green field is increasingly rare

Even in the evolving world of mobile, it is increasingly likely that there will be some existing apps with which the new apps will have to live happily. A few things to consider might be:

  1. Is there already an incumbent framework?
  2. Is the existing framework capable of building what is required to the right quality in the given timescales?
  3. Are the developers effective using it?

If the answer to the above is a clean sweep of “Yes”, then unless there is a non-technical reason why the existing framework should be abandoned, sticking with what is there is probably the best option.

A hygiene factor for any technology decision, but an important consideration nonetheless, is the current position of a given framework in the “marketplace”. Is the framework under consideration acknowledged by other developers (and vendors) as strategic, or are references thin on the ground?

Skills matter

The accelerated lifecycle of the mobile world means that development time is at a premium. Adopting a framework or approach that is a closer match to the skills available within the organisation means greater opportunity for reuse both of assets and skills, and shortens the time required for developers to get up to speed. Related to the previous consideration, if there is an incumbent framework and the decision is made to replace it then selecting a replacement with some similar characteristics would make sense – e.g. script-centric vs markup centric.

It’s still a browser

The growth of AJAX as a technique in general has placed far greater expectation on the browser environment in terms of its criticality to the delivery of the application. It is easy to forget that for all the enhancement and development since the birth of the internet, fundamentally a browser renders documents, and Javascript is there to augment that core purpose. I’ve always been fairly sceptical of attempting to layer on more tiers of engineering into the browser than are absolutely necessary.

So when looking at the various frameworks,  it should be borne in mind that it’s not necessarily the same as a package selection exercise with enterprise software products. Looking at one framework for the UI, another for handling MVC, another for service invocation and so on may well be overcomplicating things unless that specific combination is absolutely the only way to deliver the experience. It is relatively straightforward, for example, to create a simple MVC framework within most mobile frameworks without introducing the complexity and bloat of yet another framework.

Horses for courses

And finally a variation on the consultant’s answer of “it depends”, but it is certainly true that choosing the right framework depends on what you want to do with it.

For example, I like prototyping using jQuery for its lightweight CSS/HTML-centric approach, whereas for construction of reusable components in an enterprise app I can see where the Dojo Toolkit with its Java-like packaging and widget framework has its strengths. That’s not to say you can’t prototype that way in Dojo or indeed create widgets in jQuery, just they each have different strengths depending on the use for me personally. So a key consideration here when evaluating a framework is determining what its core use is going to be – for example, do you need to make a strategic decision for a new service or are you looking to put something disposable together quickly? In the latter case, depending on skill levels some may choose not to use a framework at all.

Systems of Engagement 101

The emerging trend of Systems of Engagement is growing increasingly popular in the field of consumer and business applications and has been a frequently occurring topic of conversation for me recently with clients. There is an expanding body of materials on the subject, not least this excellent presentation from its originator Geoffrey Moore, but I wanted to capture my own quick snapshot in the form of a simple primer on the subject.

What are Systems of Engagement?

Systems of Engagement refer to a new generation of IT systems to support consumers and knowledge workers in the achievement of their objectives. Systems of Engagement optimise the effectiveness of the user by providing the required responsiveness and flexibility to deal with the fluidity of everyday life.

Haven’t we had these for a long time?

For many years, organisations have invested in what are often referred to as Systems of Record, such as customer relationship management (CRM) tools and transactional consumer applications like online banking. These tools are clearly beneficial, but at the same time have limitations, since

  • they typically enable only a subset of the process needed to achieve the real outcome desired, and
  • are constructed in terms of the provider’s world view, rather than the consumer’s.

For example online banking systems offer access to transactions and products, whereas the consumer’s overall objective might be something far more complex, such as moving house. Systems of Record support a model of interaction through sporadic, episodic transactions.

So why Systems of Engagement now?

Systems of Record are largely built out, to the extent that they now offer diminishing competitive advantage for organisations because most now have them. Cloud delivery models also mean that they are becoming increasingly commoditised, decreasing the competitive return on investment even further. Systems of Record grew out of a time when differentiation was achieved through the greater efficiency delivered by IT systems. Consumer smartphones and social tools have created far higher expectations of what IT can deliver, and this has shifted the emphasis for differentiation onto the systems that provide the greatest degree of effectiveness to the consumer. In contrast to Systems of Record, Systems of Engagement support a model of continuous interaction.

What are some attributes of Systems of Engagement?

Whilst opinions vary, the Harvard Business Review describes nine traits that define Systems of Engagement that I think serve as a good starting point:

  1. Design for sense and response.
  2. Address massive social scale.
  3. Foster conversation.
  4. Utilize a multitude of media styles for user experience.
  5. Deliver speed in real time.
  6. Reach to multi-channel networks.
  7. Factor in new types of information management.
  8. Apply a richer social orientation.
  9. Rely on smarter intelligence.

How are they constructed?

Clearly for systems such as that described above to be achievable, it follows that different technology is required to that of traditional Systems of Record. There are four major new technology trends that are key enablers for Systems of Engagement now and in the future:

  • Mobile devices that provide a ubiquitous entry point for the user wherever they are, and that can now provide richer context for the service provider (such as location) to offer better targeted services.
  • Social tools that provide “people integration” capabilities to glue together complex elements of the human workflow associated with achieving an outcome.
  • Analytics and Big Data to provide richer capabilities to engage with users with the benefit of a far broader supporting context, and proactively interact with the user with relevant beneficial services.
  • Cloud computing as a common delivery model for consuming services in a consistent way, wherever the user may be and from whichever device they choose. Cloud also enables organisations to move Systems of Record outside their premises and focus on differentiating Systems of Engagement.

Does this mean Systems of Record are obsolete?

Not by any means. Systems of Record have a key role to play since their efficiency and robust qualities of service will continue to underpin business processes. A bank will still need to reliably process transactions, and a retail store will still need to maintain inventory levels. The real power of this new trend will be the interactivity of Systems of Engagement and efficiency of Systems of Record harnessed together.

This sounds like a lot of work?

Certainly to re-engineer every existing touchpoint with every user would be many, many years of development and investment for any organisation. However, if Systems of Engagement will be the source of differentiation for organisations then doing nothing is also unlikely to be a sustainable option. The key will be identifying and understanding the most critical moments of engagement and looking to improve them in a prioritised and pragmatic fashion.

Who will benefit from Systems of Engagement?

Potentially all parties could benefit. There is certainly an upside to Systems of Engagement both for the organisations providing them and for the people they serve, be they enterprise users or consumers. Systems of Engagement focus competitive differentiation on the effectiveness of the people using them, rather than purely on the organisation providing the service as is most often the case with Systems of Record, so they are an indication of the increasing empowerment of the end user. In addition, in adopting a Systems of Engagement approach organisations are in a position to steal further competitive advantage over and above what they achieve through their Systems of Record.