Category Archives: iot

IoT reference architecture for the edge domain

During my talk at Thingmonk last week, I showed a snippet of an IoT reference architecture that I’ve developed covering the edge gateway domain.

Since I’ve yet to find, amongst the many published IoT reference architectures, any that decompose the edge, and because several people asked for a copy, I thought I’d share it here for people to reuse. It’s simple but hopefully useful, and provides a starting point for proper design of solutions that exploit the edge.

Architecture overview diagram


Explanation of capabilities

The reference architecture shown above describes the key set of technology and organisational capabilities required in the deployment of edge applications.


  • Physical security mitigates the risk of tampering when devices are deployed in the field.
  • Device platform security protects the software platform, application code and data from unauthorised access.
  • The Device provides the hardware platform for application code and logic deployed at the edge.
  • Analytics models describe deployed analytics logic consumed by analytics runtimes in the edge software platform.
  • A Local Area Network provides the mechanism for the device to communicate with locally deployed applications, sensors and actuators, e.g. Bluetooth Low Energy, Zigbee etc.
  • Local Monitoring and Management tools enable administration and break/fix by local technicians servicing the installed hardware and software.
  • A Remote Monitoring and Management agent on the device enables diagnostics, monitoring and control of edge devices from the centre. This would be the preferred mode of operation since it does not require physical access to the deployed system.
  • Sensors and Actuators provide the link between the assets themselves and the device.
  • Assets monitored and controlled by the edge installation — i.e. the “things”.
  • Local applications support business operations conducted in the field.
  • An Application Runtime provides the programming environment for application logic deployed at the edge, e.g. NodeRED.
  • Control code is application logic deployed in the Application Runtime to orchestrate interactions with sensors and centre.
  • Sensor SDKs (Software Development Kit) facilitate development of sensor-driven applications to run at the edge by providing developer-friendly programmatic access to the sensor hardware.
  • Back-end SDKs facilitate communication with the centre by providing developer-friendly programmatic access to services at the centre and/or provided by third-parties.
  • A Wide Area Network connects the device to the data centre, for example via a cellular network or via wifi to the corporate network.


  • Device and asset management is the central service management capability that deals with the ongoing monitoring and support for the hardware and software installation in the field.
  • Device installation and maintenance is the field-based service that installs and maintains the physical device, sensors and assets deployed in the field.
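
Purely as an illustration, the capabilities above could be captured as a plain JavaScript structure, for example as a seed for design tooling. None of these names come from a published schema; they’re my own shorthand for the groupings in the diagram.

```javascript
// Hypothetical sketch: the edge-domain capabilities expressed as a plain
// data structure. All names are illustrative only, not a published schema.
var edgeDomain = {
  device: {
    security: ["physical", "platform"],       // tamper and software protection
    runtimes: ["application", "analytics"],   // e.g. NodeRED plus analytics models
    sdks: ["sensor", "backEnd"],              // developer-friendly programmatic access
    management: ["localTools", "remoteAgent"] // local break/fix vs. central control
  },
  connectivity: {
    lan: ["BluetoothLE", "Zigbee"],           // local sensors and actuators
    wan: ["cellular", "wifi"]                 // link back to the centre
  },
  field: ["sensors", "actuators", "assets", "localApplications"]
};

console.log(Object.keys(edgeDomain)); // the three top-level groupings
```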

Parsing LLAP packets using NodeRED on the Raspberry Pi

It’s taken a few weeks but here is the next instalment in my promised series of articles. Part of the scenario requires me to detect a particular event using a remote sensor and relay it on to a variety of other systems via my Raspberry Pi. The sensor itself is assembled using the rather brilliant Ciseco Wireless Inventor’s Kit, which provides all you need for prototyping remote sensors and transmitting data over a short-range radio link back to a waiting Pi.

It’s a pretty simple yet powerful set of kit. You build your sensor using the breadboard and XinoRF module supplied which communicates with a Slice of Radio transceiver module you attach to the Pi’s GPIO pins. Your applications then send and receive data to and from the remote sensor via the Pi’s serial port and you’re away. The protocol used for the serial communications is designed by Ciseco and is called Lightweight Logical Application Protocol (LLAP). It takes the form of a simple fixed-field text datagram and as such can be parsed by pretty well any programming language that can interrogate the serial port.
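
By way of illustration, here’s a small sketch of how such a fixed-field datagram can be assembled. This reflects my reading of the LLAP layout (a leading ‘a’, a two-character device ID, then the message padded out to 12 characters with ‘-’); check the Ciseco documentation for the authoritative definition.

```javascript
// Illustrative sketch of building an LLAP datagram: 'a' + two-character
// device ID + message, padded with '-' to the fixed 12-character length.
function buildLlap(deviceId, message) {
  var body = "a" + deviceId + message;
  while (body.length < 12) {
    body += "-";   // pad the datagram out to its fixed length
  }
  return body;
}

console.log(buildLlap("--", "D03HIGH")); // "a--D03HIGH--"
```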

In my case I’m using NodeRED to parse the LLAP packets so I can invoke other services in response to an event triggered via a sensor. The event is detected by a switch (a pressure mat in fact) that sets the D03 pin on the XinoRF module to be “high”. The nice thing about how the XinoRF works is that it operates in an event-driven way — i.e. you don’t need to poll to check the state of the pin, it transmits when there is a change in the state from “low” to “high” or vice versa. The other XinoRF pins perform different functions in different ways, but I thought I’d show how to at least send and receive so others can tailor to their scenario.

I’m assuming here that folks are familiar with NodeRED concepts, so I’ll get straight into the meat of how it works.

Configuring a serial input node to receive LLAP packets into a flow

The first thing to do is configure the serial input node in NodeRED. The settings for my node are shown below.

Screen Shot 2014-10-14 at 17.56.37

You can confirm the serial port name using the RasWIK documentation and tools. A particular point to note is the Split input setting, which determines the point at which the input node offers up data to the next node in the flow. This can be set to a delimiter, or to a timeout, whereby if no data is received within a certain period the transmission is considered complete. I found that the timeout approach worked with a setting of 250 milliseconds, though it does introduce the possibility of multiple LLAP packets being received in one go, which we need to address in the next node in our flow.
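
To make the timeout idea concrete, here’s an illustrative sketch in plain JavaScript (not NodeRED code, and with timestamps passed in explicitly so it can be tested): bytes arriving within the quiet period are treated as one transmission, while a longer gap starts a new one.

```javascript
// Sketch of split-on-timeout framing: chunks closer together than timeoutMs
// belong to the same frame; a longer gap closes the current frame.
function splitOnTimeout(chunks, timeoutMs) {
  var frames = [];
  var current = "";
  var lastTime = null;
  chunks.forEach(function(chunk) {
    if (lastTime !== null && chunk.time - lastTime > timeoutMs) {
      frames.push(current);   // quiet period elapsed: close the frame
      current = "";
    }
    current += chunk.data;
    lastTime = chunk.time;
  });
  if (current.length > 0) {
    frames.push(current);     // flush whatever is left at the end
  }
  return frames;
}

// Two packets separated by a 300ms gap, with a 250ms timeout:
var frames = splitOnTimeout([
  {time: 0,   data: "a--D03HIGH--"},
  {time: 300, data: "a--D03LOW---"}
], 250);
console.log(frames); // ["a--D03HIGH--", "a--D03LOW---"]
```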

Creating a function node to segment multiple LLAP packets into individual messages

For greatest efficiency we want all subsequent nodes in the flow to be designed to ingest a single LLAP packet per message. To ensure this downstream behaviour is honoured, we create a function node after our serial input node that checks for multiple packets and segments them into individual messages if necessary. NodeRED is designed so that we can return an array of messages and it will cycle through each message in turn, invoking the remainder of the flow as if triggered by a single message.

The first thing we do in our function node is check for the simple case of a single packet, which we can detect by the length of the payload, LLAP packets being 12 characters in length.

var instring = msg.payload;
if (instring.length<=12) {
   console.log("Single packet: "+instring);
   return msg;
}

If the length of the payload is longer, it means we have multiple packets so we need to iterate over the payload, extracting each packet into an array of new messages to return at the end of the function.

else {
   var packets = [];
   console.log("Multiple packets: "+instring);
   while (instring.length>=12) {
      var packet = instring.substr(0, 12);
      console.log("Unspliced packet: "+packet);
      packets[packets.length] = {payload:packet};
      instring = instring.substr(12);
      console.log("instring is now length "+instring.length);
   }
   return [packets];
}
return null;

An important point to note in the code above is that although we have created an array of payload objects, NodeRED requires that we in turn encase that array in another array when we return it for processing downstream.

Parsing the LLAP packet

In our subsequent function node, we can now parse the single LLAP packet to determine whether we have a “high” event for the D03 pin we’re interested in. The format of an LLAP packet in this case would be:

a--D03HIGH--

Our code simply needs to check the packet that is received, and parse it to see if it corresponds to the above event. Note that in a simple scenario we could just do a direct string comparison between what we’ve received and a template packet such as the one above, but if we have multiple stations (denoted by characters 1 and 2, “--” in the example) then we might need more complex flow control logic. My function node simply emits the word “HIGH” if we’ve got a D03 “high” event from the XinoRF.

var thePayload = msg.payload;
var pin = thePayload.substring(3,6);
if (pin == "D03") {
   // Now test if it was HIGH
   if (thePayload.substring(6,10) == "HIGH") {
      console.log("HIGH on D03");

Now in my scenario, the switch (a pressure mat in fact) could toggle on and off in quick succession, which would send a number of LLAP packets back to the Pi. Within a certain time window I’m only interested in a single “high” event, so I’ve added some logic using NodeRED’s context to record when I last received a “high” event and only take action if it was more than a minute ago.

      var timeNow = new Date();
      console.log("timeNow is "+timeNow);
      if (context.lastHigh) {
         console.log("context.lastHigh is "+context.lastHigh);
         var lastTimeMillis = context.lastHigh.getTime();
         var currentTimeMillis = timeNow.getTime();
         var delta = currentTimeMillis - lastTimeMillis;
         if (delta < 60000) {
            console.log("Less than a minute since last HIGH: "+
               delta+" ms.");
            return null;
         }
         console.log("Over a minute, actually "+delta+" ms.");
      }
      context.lastHigh = timeNow;
      return {payload: "HIGH"};
   }
}
return null;

So, if we get a “high” event and we haven’t seen one for at least a minute, the subsequent nodes in the flow will receive a packet with a payload of “HIGH”.

And with that we’re free to continue our processing, which could be as simple as a debug node. Here’s a snapshot of my NodeRED workspace — in this case I’m using the “high” event to trigger a Tweet.

Screen Shot 2014-10-14 at 18.33.17



Configuring the Raspberry Pi to share a Mac’s internet connection

One of the great things you can do with a Mac is share your internet connection, and I’ve found this particularly invaluable for doing development with my Raspberry Pi, especially when I want to take it somewhere to show people. Normally it connects to my home router, which I can configure to give it a static IP address so I know where to find it and off I go; not so easy when you’re mobile.

There are a number of articles out there claiming to show you how to make it all work, but it still took me a fair bit of puzzling and piecing the bits together to come up with something that worked. So, to save you that trouble, I’m recording my configuration here.

First set up your ethernet adapter

The mechanism I’m using to connect my Mac (MacBook Pro Retina running OS X Mavericks 10.9.5) is good old Ethernet. For this you’ll need the (rather expensive) Thunderbolt to Ethernet adapter if your MacBook (like mine) doesn’t have an onboard Ethernet socket. You can get cheaper non-Apple alternatives but I’ve no experience of whether they work or not. Once plugged in, open up System Preferences, and then Network. You’ll notice your Ethernet port appears in the list of adapters on the left-hand side. You can give the configuration a name of its own; I’ve imaginatively called mine “Raspberry Pi Ethernet (home)” as you can see below.

Screen Shot 2014-10-14 at 15.49.52

Now of course DHCP would be fine, and whilst you can probably guess which IP the Mac’s DHCP server will give you each time, it’s more predictable to have a static IP defined, particularly when running your Pi in “headless” mode. There are therefore some settings we need to put in place to make this hang together, not least a static IP configuration for the Mac as it will appear to the Pi, so we can use it as our IP gateway. As in the above picture, use the following settings:

  • Configure IPv4: Manually
  • IP Address:
  • Subnet mask:

Because we’re using a static configuration, we now need to set the DNS server settings ourselves. Without any DNS servers set, the Ethernet connection (and therefore the Pi) will not be able to resolve any of the domain names we throw at it.

Click the Advanced… button, then the DNS tab and then the + button to add a DNS server entry.

Screen Shot 2014-10-14 at 15.59.09

You can simply lift the DNS settings from your Wifi configuration (provided of course your wifi is already up and happily resolving!) or indeed use some public DNS server settings. I’ve got a mix of public DNS, a couple from work and my home router. Click OK when you’re done to save the settings.


Having set up the Ethernet port, we now need to tell the Mac we want to share the wifi internet connection with devices connected via Ethernet. It couldn’t be easier — back to System Preferences, select Sharing and tick Internet Sharing. Select Wi-fi and Thunderbolt Ethernet as the points where you want to share from and to as per the screenshot below.

Screen Shot 2014-10-14 at 16.06.29



I was surprised how resilient the local configuration between the Mac and the Pi was while I was fiddling with these next steps. Getting the Mac and the Pi to see each other is the easy bit; the harder bit was making the Pi able to see the internet beyond. Apple have made things easier still by making the Ethernet adapter work out when you’re using it to connect to another machine and automatically flip into “crossover” mode, so you can connect the Pi with a bog-standard Ethernet cable. Connect the Pi to the Mac’s Ethernet adapter and start it up.

First time of asking, the Pi will be assigned an IP address by the Mac’s DHCP server (assuming your Pi is set up by default for DHCP on the Ethernet port). I’ve not tested the theory of whether that address ever changes, but to make absolutely sure (and certainty is definitely preferred when doing demos) we will configure the Ethernet adapter with static settings based on the Mac’s configuration. Once the Pi is started, connect via SSH from the Mac, and once logged in at the command prompt, enter the following command:

sudo vim.tiny /etc/resolv.conf

This will open the Vim editor (same commands as Vi). You want a single line in it as follows, substituting the static IP address you gave the Mac earlier:

nameserver <the Mac's static IP address>

This tells the Pi to use the Mac as its DNS server, rather like you might expect with a connection to a home router.
Having saved the changes, we now need to set the static configuration for the Ethernet port. We’ll use Vim again, this time enter the following command:

sudo vim.tiny /etc/network/interfaces

This time there are a few more lines to add. If you want to make sure you don’t forget your existing Ethernet settings, you can comment them out with a hash. Otherwise, your settings should look as follows:

auto eth0
iface eth0 inet static
address <the Pi's static IP address>
netmask <the subnet mask from the Mac's Ethernet settings>
gateway <the Mac's static IP address>

This tells the Pi to adopt that address permanently, using the Mac as the IP gateway with the corresponding subnet mask.

You’re now all set. For good measure, restart the Pi and all the changes should stick.

The proof in the pudding (or Pi)

In my case I’ve been using TightVNC to access the Pi’s desktop, so as a crude test that everything was hunky-dory I simply opened up my Midori browser and tried to hit the public internet. For posterity, everything looks as it should when I hit this blog:

Screen Shot 2014-10-14 at 16.29.13





Sending SMS messages using Twilio and Bluemix

I’ve been tinkering with an Internet of Things project at home for a while which I’ll write up in due course, but in the course of doing so have knocked up a few useful fragments of function that I thought I’d share in case other people need them. The first of these is a simple Node.js app to send an SMS message via Twilio using IBM Bluemix.

There’s lots of material on Twilio and Bluemix but by way of a very rapid summary, Twilio provide nice, friendly APIs over telephony-type services (such as sending SMS messages), and Bluemix is IBM’s Cloud Foundry-based Platform-as-a-Service offering to enable developers to build applications rapidly in the cloud. Twilio have created a service within Bluemix that developers can pick up and use to enable their applications with the Twilio services. One of the things I wanted for my application was a way of notifying me that something had happened, and a text message suited my needs nicely. Twilio provide a free trial service with a few restrictions which you can upgrade once you want to do something serious.

To begin with, I created myself a Node application on Bluemix using the Node JS Web Starter application boilerplate provided:

My approach was to create a simple HTTP-based service that I could invoke with the destination phone number and the message itself as parameters. To make the Twilio service available to my Node application, it was simply a case of adding the service to my application in Bluemix. Twilio is listed as one of the Mobile services:

Once you have added the Twilio service, you configure it in Bluemix by providing the Account SID and Auth Token values that you find on the account details page once you have registered and logged in to Twilio.

The Node JS Web Starter boilerplate creates a simple template for a web server that serves up pages using the Express framework on top of Node. Express is handy, in that it provides a useful framework for handling HTTP requests, so I decided to stick with it for my HTTP service. The first change I needed to make to the boilerplate was to add a reference to Twilio in the package.json file so that the modules would be available to my code.

{
   "name": "NodejsStarterApp",
   "version": "0.0.1",
   "description": "A sample nodejs app for BlueMix",
   "dependencies": {
      "mongodb": "1.4.6",
      "express": "3.4.7",
      "twilio": "^1.6.0",
      "jade": "1.1.4"
   },
   "engines": {
      "node": "0.10.26"
   },
   "repository": {}
}

When you push your updated code to Bluemix, Bluemix automatically does the npm install to go and fetch the modules based on the package.json.

Within the app, you then need to set up the Twilio package ready for sending messages. First, we need to require the Twilio package so we can access the service from our code, and then retrieve the Account SID and Auth Token values configured in Bluemix from the VCAP_SERVICES environment variable that Bluemix provides to the Node runtime.

var twilio = require('twilio'); // Twilio API
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
var twilioSid, twilioToken;
services['user-provided'].forEach(function(service) {
   if (service.name == 'Twilio-ph') {
      twilioSid = service.credentials.accountSID;
      twilioToken = service.credentials.authToken;
   }
});
Note that Twilio-ph is the name I gave to the Twilio service when I added it to my application in Bluemix, yours may vary so remember to change it if different.

The environment is now set up, so next we need to create our HTTP handler using Express to form the basis of our service. I’ve added Basic Authentication to my handler to prevent spam to my Twilio account; this is nice and easy to do using Express.

// Configured environment variables to protect Twilio requests
var USER = process.env.USER;
console.log("USER = "+USER);
var PASSWORD = process.env.PASSWORD;
console.log("PASSWORD = "+PASSWORD);

// Basic authentication to restrict access to my services.
var auth = express.basicAuth(USER, PASSWORD);

I’ve used environment variables that I’ve set in the Bluemix environment; clearly in a production environment one would use a proper directory. You can set your own environment variables within your application by going to the Runtime view of your application and selecting the USER-DEFINED button on the main panel.

The HTTP handler simply looks for the URI pattern /twilio as a GET request and reads the destination telephone number and the message content as query parameters. The auth object passed in applies the Basic Authentication rule defined previously.

app.get('/twilio', auth, function(req, res) {
   var toNum = req.param("number");
   var twilioMessage = req.param("message");
   var fromNum = 'your Twilio number';

Your Twilio number can be found on the Twilio account details page.

Twilio make it really easy to send a message using their twilio.RestClient API wrapper class. You simply instantiate twilio.RestClient and invoke the sendMessage method with a JSON object containing parameters describing who the message is to, the number it is sent from and the message to be included in the SMS. You provide a callback function that is invoked when the request is completed.

   var client = new twilio.RestClient(twilioSid, twilioToken);
   client.sendMessage({
         to: toNum,
         from: fromNum,
         body: twilioMessage
      },
      function(err, message) {
         if (err) {
            console.error("Problem: "+err+": "+message);
            res.send("Error! "+err+": "+message);
         } else {
            console.log("Twilio message sent from "+fromNum+
               " to "+toNum+": "+twilioMessage);
            res.send("Message sent to "+toNum);
         }
      });
});
Once deployed, the service can be invoked with a URL of the following form, substituting your own Bluemix application host:

http://<your app>.mybluemix.net/twilio?number=<destination number>&message=<message text>

If invoked programmatically the authentication credentials will be required in the HTTP header. If tried from a browser, the browser will prompt for a username and password combination. Ultimately you’ll receive an SMS on your phone:
twilio sample
And there it is. I can now trigger SMS messages either from my NodeRED flows, browser or any other apps I might write.