In IoT: the battle between Power and Connectivity

You can’t have it all!

There are some interesting design considerations when building your IoT architecture.  In an ideal world you wouldn’t have to worry about powering your devices, nor about the cost or technical constraints of getting data back centrally for further analysis and action (typically to some kind of cloud-based service).  But, back in the real world, both power and connectivity need careful thought.

In many IoT use cases the sensors can generate a substantial amount of data, yet as discrete devices they’re not that smart.  This raises the question of how (or indeed whether!) you get all that data off your sensor(s) and back for analysis.

So, what are the options?  Let’s explore:

Capture everything

You just stream everything back to the cloud.  Right?  Interestingly, this was the view Microsoft and AWS were proposing until fairly recently: just connect all your sensors directly to the cloud and it will all be fine.  I’m guessing they tried to implement a few real-world use cases and came a cropper!

Pros? Well, you (in theory) get everything that “happened”, and so by using advanced analytics such as Machine Learning you should be able to build an accurate model of cause and effect.

Cons?  There are many!

You have to get all that data from your devices back to your cloud.  Across a large-scale deployment this could easily be 10,000+ data points per second, and in some use cases much more!  Lots of data needs lots of connectivity bandwidth, which means lots of cost.  And current technology doesn’t necessarily help as much as you might like.

Excuse me for dipping into some technicalities.  Network protocols such as TCP/IP and HTTP are great at transferring large-ish volumes of data reliably, but not so great at sending lots of small pieces of data on a frequent basis (e.g. temperature is 10.0C, temperature is 10.5C, temperature is 10.1C), nor, to be frank, where you need a “real-time” response (a forklift truck is approaching a door and I need to open it.  NOW! Not in 10 seconds’ time!).

The overhead of the protocol (which ensures the data gets where it is meant to go, in one piece) can exceed the actual data point size (i.e. 10.0C, device ID, date/time stamp) many times over.  Effectively, you are wasting a lot of bandwidth.  And on the point of “real-time” delivery, the protocol doesn’t guarantee when the data will arrive.  It might be <1 second (ok) or it might be 1 minute (instant failure as the forklift crashes into a closed door).
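To make the overhead point concrete, here is a back-of-the-envelope sketch in Python.  The header sizes are assumed typical values, not measurements from any particular stack, and the reading format is invented for illustration.

    # Rough illustration of protocol overhead for one small sensor reading.
    # Header sizes below are assumed typical values, not measurements.
    import json

    reading = {"device_id": "sensor-0042", "ts": "2017-06-01T10:15:00Z", "temp_c": 10.0}
    payload = json.dumps(reading).encode("utf-8")

    tcp_ip_headers = 40      # IPv4 (20 bytes) + TCP (20 bytes), before any options
    http_headers = 200       # a modest set of HTTP request headers
    overhead = tcp_ip_headers + http_headers

    print(f"payload: {len(payload)} bytes, overhead: {overhead} bytes")
    print(f"overhead is ~{overhead / len(payload):.1f}x the data itself")

Add TLS framing, acknowledgements and retransmissions on top and the ratio only gets worse.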

Ok, so some of our traditional network protocols may not be fit for purpose here.  Added to this is the mechanism used as the underlying carrier.

In many IoT use cases Wi-Fi is available and is “in theory” free (of course it’s not really, but let’s assume it is for now).  One example I looked at recently involved capturing a (near) real-time view of retail store inventory, across tens of thousands of items. It was generating around 5 Mbytes/second, just from one store.  Scale that up to 500 stores and you have a serious data volume headache, no matter how good your Wi-Fi or WAN provision.  I won’t even go into Wi-Fi black spots and the challenges of enabling reliable store-wide coverage.
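Some quick arithmetic on that example (the per-store rate and store count are as quoted above; the rest is just multiplication):

    # Back-of-the-envelope bandwidth for the retail inventory example above.
    per_store_mbytes_per_sec = 5
    stores = 500

    aggregate_gbits_per_sec = per_store_mbytes_per_sec * stores * 8 / 1000
    per_store_gbytes_per_day = per_store_mbytes_per_sec * 86_400 / 1000

    print(f"aggregate: ~{aggregate_gbits_per_sec:.0f} Gbit/s across the estate")
    print(f"per store: ~{per_store_gbytes_per_day:.0f} GB/day")

That is roughly 20 Gbit/s of sustained traffic across the estate, and over 400 GB per store, per day, just to move raw inventory readings.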

Increasingly, IoT use cases will provide value by indicating the conditions at remote locations (anything from utility plant monitoring to smart agriculture).  Here you have neither guaranteed power nor high-bandwidth networking.  Streaming all the sensor data is likely a no-no, full stop, unless you change batteries every week and have deep pockets to pay your cellular provider!

Capture Nothing

Ok, this may be a little extreme; after all, isn’t IoT all about connecting devices and sharing data?  Effectively this is the model we had before IoT came along and, in reality, still have in many situations: build all the compute logic and power into the device so it is fully automated and able to implement a local decision.

This approach is a retrograde step because you are unable to collate and analyse what is happening centrally, in any vague form of “real time”, to better determine cause and effect and then do something about it.  How do you manage/upgrade/fix the device?  Well, you send an engineer out to site. Much of our power, road and utility infrastructure still operates in this fashion through decades-old SCADA technology.  Transforming from this “legacy” to the “possible” is what is exciting many people about IoT.

Capture all, share some (Distributed Processing)

As with many things in life, a balance between the extremes can provide a pragmatic solution.

You may have heard of terms like Edge compute and (from some companies) the concept of “fog” compute.  Some may scream and wail but I’m going to lump them together.

What are they all about?  Basically, it’s distributed computing: the ability to determine where data gets ingested and processed/analysed, and hence what gets output and passed “back up the chain” for more deterministic analysis and (let’s not forget!) action.

Distributed computing?  It is nothing new. Client/server from the 90’s/00’s is a form of distributed computing, and in fact most of your experience on mobile devices and desktops involves some form of distributed compute, even if it is just your web browser interpreting and displaying a UI from a remote server which in turn gets data from another server, yada yada.  The client does a bit of work, sends something to the server, which does more work (perhaps talks to another server) and then sends “something” back to the client to enable it (or you!) to do more work, and so on.

At the extreme of “non-distributed” compute you either have the client (your PC/laptop/mobile/tablet) doing all the work (imagine a game of snake or basically any non-network-enabled application, perhaps calculator or notepad) or you have the server doing all the work (the old mainframe model) where at best you get a UI “into” the mainframe via a dumb device (as I did when at University on a DEC VAX 8650 in the early 90’s, using a green screen terminal).  Right, enough ancient history, back to IoT…

At a simple level, distributed compute may be a bit of logic on the sensor device itself that aggregates incoming data and only sends a summary, perhaps every minute/hour/day or when certain thresholds are met or exceeded. Suddenly you have dramatically reduced the volume of data that needs to be transmitted back, because your device has some “intelligence” (by which I mean the ability to make a “local” decision based on local inputs and some decision model, however “basic” that may be).
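As a minimal sketch of that idea (the class name, threshold and interval are my own invention, purely for illustration), the “intelligence” might be nothing more than this:

    import time

    class EdgeAggregator:
        """Buffers raw readings locally; transmits only a periodic summary,
        or an immediate alert when a threshold is breached."""

        def __init__(self, send, threshold=30.0, summary_interval_s=3600):
            self.send = send                      # callback that transmits upstream
            self.threshold = threshold            # e.g. maximum acceptable temperature (C)
            self.summary_interval_s = summary_interval_s
            self.readings = []
            self.last_summary = time.monotonic()

        def on_reading(self, value):
            self.readings.append(value)
            if value > self.threshold:
                # Exceptional condition: send immediately, don't wait for the summary.
                self.send({"type": "alert", "value": value})
            elif time.monotonic() - self.last_summary >= self.summary_interval_s:
                self.send({
                    "type": "summary",
                    "count": len(self.readings),
                    "min": min(self.readings),
                    "max": max(self.readings),
                    "mean": sum(self.readings) / len(self.readings),
                })
                self.readings.clear()
                self.last_summary = time.monotonic()

One summary an hour instead of one reading a second is a three-and-a-half-thousand-fold reduction in messages sent.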

But remember that every bit of compute has a cost in power.  The more compute that gets done at the sensor level, the more power is required.  No one wants to be out replacing batteries every week, and many business cases fall apart unless the battery lasts for years without replacement.
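The trade-off can be put into (very rough) numbers.  Every figure in the sketch below is an assumption for illustration, not taken from any datasheet:

    # Crude battery-life estimate for a duty-cycled sensor node.
    # All figures are illustrative assumptions, not datasheet values.
    battery_mah = 2400          # roughly a pair of decent AA cells

    sleep_ma = 0.01             # deep-sleep current
    active_ma = 20.0            # awake: sampling and computing
    tx_ma = 120.0               # radio transmitting

    active_s_per_hour = 2       # seconds per hour spent sampling/computing
    tx_s_per_hour = 0.5         # seconds per hour spent transmitting

    avg_ma = (sleep_ma * (3600 - active_s_per_hour - tx_s_per_hour)
              + active_ma * active_s_per_hour
              + tx_ma * tx_s_per_hour) / 3600

    years = battery_mah / avg_ma / 24 / 365
    print(f"average draw ~{avg_ma:.3f} mA -> roughly {years:.1f} years")

With those (made-up) numbers you get several years of life; double the seconds spent computing or transmitting each hour and the lifetime drops sharply.  That is exactly the balance the rest of this article is about.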

Consideration also has to be given to keeping your edge compute device up to date.  Typically, over time the local decision logic will be improved and your device will need to be refreshed with a new model.  How is this done, and how do you keep track of which devices have been updated and which have not?  Bear in mind you may have limited network bandwidth with which to transmit updates and monitor the status of your network.  We are not operating in the world of Windows 10 updates.
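There is no single answer, but at a minimum each device needs to report what it is currently running so you can see who is behind.  A hypothetical sketch (the names and structure are mine, not any particular platform’s API):

    # Minimal sketch: track which devices report which decision-model version.
    # Names are hypothetical, not taken from any particular IoT platform.
    LATEST_MODEL = "threshold-model-v7"

    fleet_status = {}   # device_id -> last reported model version

    def on_heartbeat(device_id, reported_model):
        fleet_status[device_id] = reported_model

    def devices_needing_update():
        return [d for d, v in fleet_status.items() if v != LATEST_MODEL]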

And I won’t even go into the aspects of security (even though it is very, very important, as I’m sure we’ll see in the future, when people forget).  How do you ensure your remote devices are valid, patched, authenticated and sending “true” data?  Another topic, for another post.

Coming back to the point of the article

So what’s the issue?  If you have power and high bandwidth networking perhaps there is none, perhaps IoT of some form is already yielding benefits.  However, some of the largest transformational impact of IoT will come in use cases where power and networking are (currently) limited.  You can’t do high intensity compute or high bandwidth networking when you are power and network constrained.  Here you need a distributed compute model to be effective.

So, what’s the trick of where you place your compute?  Well, it’s very use case dependent.

If you are looking to improve machinery uptime through predictive maintenance, you may be operating in a factory environment where power isn’t an issue.  Here a standard compute device with a Linux or Windows-based operating system (i.e. fairly heavyweight) may be fine.  Your edge compute engine can perform a useful amount of analysis in real time on the data received from multiple sensors, and perhaps do some local calculations to reduce network traffic and/or provide a real-time response (the latter being “old skool” Operational Technology (OT) that has been in place for decades, albeit usually in a proprietary, closed manner).  These operating systems have the ability to update themselves and their models as things improve.  You get the benefit of real-time response and intelligent, self-learning behaviour.
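A simplified sketch of what such a local calculation might look like, for example watching a vibration signal and only raising an event when it drifts outside its recent norm (the window size and threshold are invented for illustration):

    from collections import deque
    from statistics import mean, pstdev

    class VibrationMonitor:
        """Keeps a rolling window of vibration readings; raises a local,
        real-time alert when the latest value sits well outside the recent
        norm, and only then sends anything over the network."""

        def __init__(self, send_event, window=600, sigma=4.0):
            self.send_event = send_event        # callback to forward an event upstream
            self.window = deque(maxlen=window)  # recent readings only
            self.sigma = sigma                  # how far from "normal" before we shout

        def on_sample(self, value):
            if len(self.window) >= 50:          # wait until we have some history
                mu, sd = mean(self.window), pstdev(self.window)
                if sd > 0 and abs(value - mu) > self.sigma * sd:
                    self.send_event({"type": "anomaly", "value": value,
                                     "baseline_mean": mu, "baseline_sd": sd})
            self.window.append(value)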

However, let’s look at more remote use cases.  Whereas your personal cellular contract may give you 20+ GB/month for a relatively low fee, this is not (currently) the case for sending IoT “business” traffic where Wi-Fi is unavailable or impractical to implement.

So, think of scenarios where you are truly remote and have to rely on some form of long-distance network.  Maybe your data allowance is 1 Mbyte per day at most, perhaps less.  Typically here you don’t have “grid-based” power, so you are reliant upon battery or other power sources (e.g. solar).  Examples would include smart agriculture (where you are measuring the factors controlling crop growth or animal fertility) or flood warning (where you are measuring river levels across hundreds of miles to determine the impact and reduce loss of life).  Here there is very limited compute capability and limited data transfer capability.  The more compute you do, the less the battery lasts, and the current technology limits on long-distance data transfer mean that you can’t transmit large volumes of data anyway.  Sounds like you can’t have your cake and eat it.
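To put that 1 Mbyte per day into perspective (the message sizes below are assumptions, chosen to echo the overhead example earlier):

    # How far does 1 MByte/day actually stretch? (message sizes are assumptions)
    daily_budget_bytes = 1_000_000

    compact_msg_bytes = 60      # assumed: a compact binary reading plus framing
    http_style_msg_bytes = 340  # assumed: a small JSON reading plus HTTP/TCP overhead

    for label, size in [("compact", compact_msg_bytes), ("HTTP-style", http_style_msg_bytes)]:
        per_day = daily_budget_bytes // size
        print(f"{label}: ~{per_day} messages/day, one every ~{86_400 / per_day:.0f} s at best")

And that is before the radio’s duty-cycle limits or the battery budget get a say, which is exactly why sending summaries rather than raw readings matters so much out here.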

However, concepts like mesh networking, which allow multiple “slave” devices (with no direct Internet connection) to communicate via one “truly connected” device, can be used in smart metering, agriculture or any remote context.  You can have one higher-power, Internet-connected device and a whole bank of lower-power, indirectly connected devices.
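The shape of that idea, in a deliberately over-simplified sketch (a real mesh protocol also handles routing, retries and radio scheduling; the class and field names here are mine):

    class Gateway:
        """The single 'truly connected' device that uplinks on behalf of the rest."""
        def forward(self, message):
            print("uplink to cloud:", message)

    class Relay:
        """An intermediate hop: receives from its neighbours and passes traffic on."""
        def __init__(self, next_hop):
            self.next_hop = next_hop
        def forward(self, message):
            self.next_hop.forward(message)

    class LeafNode:
        """A low-power device with no Internet uplink of its own; it hands
        readings to a neighbour that is one hop closer to the gateway."""
        def __init__(self, name, next_hop):
            self.name, self.next_hop = name, next_hop
        def report(self, reading):
            self.next_hop.forward({"from": self.name, **reading})

    gateway = Gateway()
    relay = Relay(next_hop=gateway)
    soil_sensor = LeafNode("soil-7", next_hop=relay)
    soil_sensor.report({"moisture_pct": 31.5})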

To give some context, mesh networking is used in consumer devices like Sonos, Google Wifi (which, by the way, is brilliant if you have Wi-Fi black spots at home with only a single router) and Samsung’s SmartThings hub (to name just a few).

Back in the non-consumer world, many environments cannot support traditional operating systems with their continuous processes and power drain.  Maybe you are back to a lightweight, (typically) C-coded (efficient!) application that does a specific job and that’s it.  Or there are lightweight, IoT-focused operating systems that could be appropriate.  They do the bare minimum to provide the required functionality and nothing more.  We are back in the world where Bill Gates and Paul Allen had to cram BASIC into 16k (I think the original spec was 8k but they couldn’t get it below 12k), or my first home computer, a Sinclair ZX81, which had 1k of RAM (even in 1k you could create games, albeit usually via assembly language, which offers a level of efficiency greater than C if you can get your head around the approach; most modern programmers wouldn’t know where to start!).  Those guys got very tight and very clever in minimising compute power.  Most programmers have lost this skill with their abstracted development languages and GBs of RAM.

Remember, distributed computing doesn’t just mean a simple two-layer “sensor/cloud” model.  It is possible to create hierarchies whereby the sensor just transmits to a local “edge compute” device, which applies the “required” intelligence (typically receiving from many local sensors, perhaps hundreds) and then forwards relevant “events” up to the cloud.  There are scenarios I’ve worked on where this layering could be 4-5+ levels deep.  Managing such an environment is not to be sniffed at.

In Summary

Much IoT is currently focused on industrial applications where the results are generally hidden from the end consumer (maybe the train is more reliable, maybe the company can make its products cheaper and pass that cost reduction on to you (or not!), perhaps it can offer new models of service delivery).

IoT will become more ubiquitous globally when power and network transmission limitations are transcended.  In the meantime, utilising IoT solutions that allow distributed compute (whatever you call it!) will help balance the use of power against network transmission costs and capabilities, so that you can build a functioning business case.  Look at the capabilities of any platform in this area when making your technology assessments.

Thanks for reading, do please add your thoughts and comments.


Richard Braithwaite.

Founder, IoT Advisory Ltd.

www.iot-advisory.co.uk
