Chipping away at the energy data silo – part 1

As the title suggests, this is the first of two somewhat technical posts describing my experiences of pulling energy data from smart meters across the University of Lincoln (UoL) campus for LiSC’s Electro-Magnates (EM) project. I felt it might be useful or interesting for others to read a blog post (trying to avoid calling it an ‘opinion’ post!) on what I have learned of this niche area. Many organisations use energy monitoring systems similar to Lincoln’s, and their carbon/sustainability people may want to understand how they might begin to break free of the constraints of an incumbent energy data silo. There are many ways to do this using a wide range of tools and bespoke coding – I won’t go into this in detail, as my *primary* interest is research with open energy data – the development tools and resultant code are just conduits to the gold: the empirical, peer-reviewed research that can then take place. The philosophy behind this perspective is a bit at odds with my own background: diesel engineer turned computer scientist. I know full well the value of good tools, in both physical and virtual form, to ‘build’; however, the uptake of and engagement with end-user-centric energy interventions (a user-centred design approach), or any technology-enabled intervention for that matter, followed by an impact evaluation of end-users’ experiences, are far more important than mere tools. It is probably the case that if you have a genuine interest in open energy data, you are planning on doing something more interesting with it than just going through the process of making it open in the first place.

Figure 1 Left to right: Multilog data logger with GSM modem allowing remote access to energy data

For several years I have been accessing UoL’s smart meters and pulling off energy consumption data for use in our EM project. A good relationship with our estates sustainability team facilitated timely access to the meters and monitoring software, highlighting the importance of interdepartmental relationships when attempting to enable pragmatic research. A look at Lincoln’s smart meter infrastructure reveals arcane hardware overlaid on modern buildings, driven by dated and limited software from an era still suffering the growing pains of moving from 16-bit to 32-bit platforms: the mid-90s. One of EM’s requirements was the ‘opening up’ of our energy data so that third-party applications developed by anyone, and indeed our own study trials, could use it. Sounds simple, but it’s actually a tall order given the infrastructure currently available at Lincoln. Even though we do now have a working means of making the energy data open, it’s still not 100% where we would like it to be in terms of update frequency and reliability.

Figure 2 Data silo vs. internet of things

A little more technical detail – yawn inducing for some! – each smart meter has a data logger and GSM modem with SIM card strapped to its back, see figure 1; energy data is stored in the logger and can be pulled by dialling into the GSM modem. Dialling in? Yes, we currently dial in (not for much longer though, more on that later) using a standard PCI modem over an analogue telephone line – basically a traditional data silo using a Machine-to-Machine (M2M) connection, see figure 2. This approach is not as uncommon as you might think; remote weather stations may adopt it, for example, likewise some road/motorway signs where the near ubiquity of 3G/WiFi technologies or even GPRS is absent. In order to schedule frequent dialling-in to get the most recent energy data we had to use ‘specialised’ software from the mid-90s called ‘Multilog’, see figure 3.

Figure 3 The delectable Multilog software, circa 1995

This process of dialling in isn’t to be confused with using GPRS or 3G or any of the other ‘wondrous’ by comparison methods of pulling data remotely, as we aren’t accessing the internet or an intranet using venerable TCP/IP. We pull the energy data by initiating a ‘data call’ using the Circuit Switched Data (CSD) transmission method, so the data transfers much the same as it would during a fax call – extremely slowly, at around 9.6kbps. An enabling requirement is that the SIM card you are dialling into (bolted onto the smart meter) must be allocated a mobile ‘data’ number, not a voice number, and have CSD enabled – all of which can only be set up at the mobile provider’s end; importantly, not all mobile providers offer this service in the UK. For the more nerdy types out there and lovers of old tech, the following mobile provider CSD settings are required for talking to the receiving SIM, with corresponding Multilog network settings:

 

Mobile Provider Settings:

· 9600bps

· 8 data bits, No Parity bits, 1 Stop Bit

· Non-Transparency enabled (also known as error corrected or “RLP”)

Figure 4 Mobile provider CSD settings and corresponding Multilog network settings
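Expressed in code, those settings boil down to a classic 9600bps 8N1 serial link and a Hayes AT dial command. The following is a minimal sketch assuming a pyserial-style setup; the phone number and the helper names are illustrative, not Multilog’s actual protocol:

```python
# Sketch of dialling a meter's GSM modem for a CSD data call.
# The dial string and settings dict mirror the provider settings above;
# csd_dial_command and SERIAL_SETTINGS are hypothetical names.

def csd_dial_command(data_number: str) -> str:
    """Build the Hayes AT command that initiates a data call.

    The SIM at the far end must have a *data* number with
    non-transparent (RLP) CSD enabled by the mobile provider.
    """
    return f"ATD{data_number}\r"

# Serial parameters matching the provider's CSD settings: 9600 bps, 8N1.
SERIAL_SETTINGS = {
    "baudrate": 9600,
    "bytesize": 8,   # 8 data bits
    "parity": "N",   # no parity
    "stopbits": 1,   # 1 stop bit
}
```

With pyserial, `SERIAL_SETTINGS` could be passed straight to `serial.Serial(port, **SERIAL_SETTINGS)` before writing the dial command to the port.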

Each hour Multilog initiates a scheduled data call or ‘dialling-in session’, with each session taking ~30 minutes to get round all the smart meters for the most recent data. That time assumes all meters are online with no power-pack failures, of which unfortunately we have had many; if any meters are offline, Multilog subjects you to repeated dial-in attempts, which can stretch a session to over an hour. After each successful session, the most recent energy data pulled from the meters is stored locally in Multilog’s ASCII-format .dat files, one per meter, most likely with big chunky holes in the data – all rather unpalatable as a data format. By holes I mean some meter readings may be missing simply because of the slow process of dialling into all the meters: the meters themselves log energy readings every 30 minutes, so we can easily miss the latest one. Suffice to say, the latest energy data available is usually at least 60 minutes old by the time Multilog stores it. This is the best achievable near-realtime data given a smart meter infrastructure that captures readings at 30-minute intervals and retrieves them via GSM modems and data calls. At this stage we are still many steps away from making the energy data available to third-party applications in an easily consumed open format.
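To make the shape of the problem concrete, here is a minimal sketch of spotting those holes in a meter’s readings. The real Multilog .dat layout is proprietary; a simplified ‘timestamp,kWh’ line format is assumed here purely for illustration:

```python
from datetime import datetime, timedelta

def find_gaps(lines, interval=timedelta(minutes=30)):
    """Return the expected 30-minute timestamps missing from a meter's readings.

    Assumes each non-empty line starts with 'YYYY-MM-DD HH:MM,' - an
    invented stand-in for the real .dat format.
    """
    stamps = sorted(
        datetime.strptime(line.split(",")[0], "%Y-%m-%d %H:%M")
        for line in lines if line.strip()
    )
    gaps = []
    expected = stamps[0]
    for stamp in stamps:
        while expected < stamp:          # every slot we skipped is a hole
            gaps.append(expected)
            expected += interval
        expected = stamp + interval
    return gaps
```

Given readings at 00:00, 00:30 and 01:30, `find_gaps` would flag the missing 01:00 slot – exactly the kind of hole a delayed dial-in session leaves behind.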

What happens next? Most institutions the size of a university will have Carbon Management Software (CMS) that pulls in and parses these ASCII files for carbon analytics and graphical presentation, as Multilog itself thankfully offers none of this. Multilog’s data formats appear to be a de facto standard in this area, with interoperability across most enterprise-level CMS. Unsurprisingly, this type of software is expensive, with associated yearly licensing and maintenance fees. At Lincoln the CMS of choice is the common Esight package, a web-based CMS which stores the parsed energy data in an encrypted SQL database.

All of the above is a fairly indicative vanilla installation for many organisations: a standard M2M method to get energy consumption data out of remote smart meters and into an enterprise-class CMS, purely for consumption by carbon managers and senior management. There is hardly any notion of open energy data in the processes described here, as these installations are designed with a proprietary mindset and the data locked down. Unsurprisingly, some CMS companies genuinely assume (having engaged in multiple dialogues with them) that by offering graphing and, at a push, basic widgets, you won’t ever need access to the energy data for anything else. This is a strategic business stance – they want you locked in. The mere mention of an open API would likely result in a sharp intake of breath, followed by some business vitriol of ‘we know best’ and ‘data corruption’ (!)

Jemmying the data open

Senior management and sustainability teams are probably relatively happy with the data they get out of their CMS; they just want the numbers to crunch. What if we want more? What if we want to build our own innovative and interactive applications using the energy data and feed it back to end-users as part of bespoke energy interventions, i.e. we want to do some research with real people, with measurable results? (End-users who, incidentally, are for the most part completely unaware of their own consumption behaviours.) To do this the data really needs to be smashed open – publicly. At this point there aren’t many viable options for hacking a workaround to make the energy data open at Lincoln: the ASCII .dat files Multilog creates would be painful to parse, and are locked by Multilog for reading/writing for indeterminate amounts of time – messy. The Esight CMS database itself is also a no-go, being locked down and encrypted. The only feasible way to begin making the data open was to use Esight’s data export function on a schedule: an hourly export to CSV format, time-stamped at 30-minute intervals, covering the current day’s energy usage. Again, the CSV file was far from in good shape – patchy at best. Of course there are ways to fix the holes: develop an algorithm that patches them by looking at previous patterns of energy usage for the same day/time in the past, then go back later and replace these auto-generated readings with the real values once you have them. This amounts to quite a lot of work, all in the name of forcing the data open – tantamount to using a jemmy on it.
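The patching idea above can be sketched very simply: estimate a missing half-hourly reading from past readings for the same weekday and time slot, and keep track of which values were estimated so the real ones can replace them later. The function and the `history` structure here are hypothetical, not our actual algorithm:

```python
from statistics import mean

# 'history' maps (weekday, "HH:MM") -> list of past kWh readings for
# that slot, e.g. every previous Monday at 09:00. This is an invented
# structure used only to illustrate the patching approach.

def patch_reading(history, weekday, slot):
    """Estimate a missing reading as the mean of past same-slot readings.

    Returns None when there is no history to estimate from; callers
    should mark the result as auto-generated so it can be replaced
    with the real value once a later dial-in retrieves it.
    """
    past = history.get((weekday, slot))
    if not past:
        return None
    return round(mean(past), 2)
```

A fancier version might weight recent weeks more heavily or account for term time versus holidays, but even this naive mean conveys how much bookkeeping ‘jemmying the data open’ entails.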

At this stage our energy data is open to all: the aforementioned exported CSV files are parsed and served via public-facing REST APIs. The APIs were built in-house as well as through external data storage platforms; however, the process is not as robust as we’d like, due to the infrastructure constraints. The data is available through Pachube and via our colleagues over in the LNCD group, who are actively building and promoting university-generated open data through data.lincoln.ac.uk. Things will soon be a lot better in terms of reliability and update frequency when pulling from the smart meters: our carbon manager Cara Tabuka has secured funding from the Salix/HEFCE revolving green fund that will remove the whole dialling-in process and place the smart meters on the campus network over wireless or LAN. The same funding may also be used for a pilot sub-metering project, which would facilitate further novel research with more granular energy data being made available. Overall, the upgrades to the smart meter infrastructure will greatly improve the data collection process, giving access to energy data 30 minutes old or better without the reliability problems of GSM modems.
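The parsing step behind those APIs is conceptually tiny: turn the scheduled CSV export into the JSON a REST endpoint can serve. The exact Esight export columns are product-specific, so a simplified ‘meter,timestamp,kWh’ header is assumed here for illustration:

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Parse an exported CSV (assumed headers: meter,timestamp,kWh)
    into the JSON array of readings a REST API might return."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)
```

In practice this sits behind a scheduled job: read the latest hourly export, convert it, and publish the result at a public URL (or push it to a platform like Pachube) so third-party applications can consume it without ever touching Multilog or Esight.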

The technical problems of energy monitoring in a large organisation are not trivial, but neither should they be the most important thing when addressing energy monitoring overall, as I pointed out at the start of this post. We know how to build and develop systems to monitor energy effectively; it’s largely a matter of getting the funding to do it with a keen eye on scaling. What we don’t know much about is what to do with that data when it is open – visualise it in a compelling fashion? design interventions around it? post it to a data portal and hope someone else does something with it? how do we evaluate an end-user’s experience of consuming energy? These are just a few of the research areas that need to be rigorously investigated around open energy data in an organisational context, otherwise we are just building stuff to open up data – simply because we can.

If you are interested in how to effectively design an organisational energy intervention from a user-centred design approach, you may find our latest peer-reviewed paper useful: ‘Watts in it for me?’ Design implications for implementing effective energy interventions in organisations. The paper will be presented at the CHI2012 conference in Austin, Texas this coming May.
