April 19, 2024

Gridlines: BABI Boom!

by Michael A. Marullo, Editor in Chief

No, my spell checker hasn’t gone haywire (or is it “haywire-less” now?). In this case, I’m just using a play on words to bring your attention to something that I think is going to give that other Baby Boom a run for its money. What I’m referring to is the rapidly rising level of interest and opportunities in BA (Business Analytics) and BI (Business Intelligence).

Some of you may consider these terms – BA and BI – interchangeable, but I see them as having subtle differences. I like to think of them this way: If you do BA correctly, it will lead to BI – rather than the other way around. Maybe it’s just my research mentality that tells me you must perform analysis (on the data) to derive information. We may disagree on that point, but bear with me on this for a bit…

Even before one reaches the BA phase, it seems to me that there’s an implicit data acquisition step that must be satisfied. Then, once you’ve gathered some useful data, you can do the analysis – and provided you do that correctly – the output is Business Intelligence. In other words, BA is the “cause” that drives BI, and BI is the “effect.”
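
To make that cause-and-effect chain concrete, here’s a minimal sketch of the acquire-analyze-derive sequence in Python. Everything in it – the readings, the outlier threshold, the function names – is a hypothetical illustration, not any particular utility’s pipeline.

```python
# A minimal sketch of the acquire -> analyze -> derive chain described above.
# All data, names, and thresholds are hypothetical illustrations.
from statistics import mean

def acquire() -> list[float]:
    """The implicit data acquisition step (here, pretend meter reads in kWh)."""
    return [1.2, 1.4, 1.3, 9.8, 1.1, 1.2]

def analyze(readings: list[float]) -> dict:
    """Business Analytics: derive metrics from the raw data."""
    avg = mean(readings)
    return {"average_kwh": avg,
            "outliers": [r for r in readings if r > 3 * avg]}

def to_intelligence(metrics: dict) -> str:
    """Business Intelligence: the actionable 'effect' of the analysis."""
    if metrics["outliers"]:
        return f"Investigate {len(metrics['outliers'])} anomalous reading(s)."
    return "Consumption within normal range."

print(to_intelligence(analyze(acquire())))  # BA is the cause; BI is the effect
```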

But the real point I want to make is that we have a process – BA to BI – that can be applied to at least three areas of the utility enterprise right away. Perhaps the most prominent and familiar of these is the burgeoning AMI (Advanced Metering Infrastructure) arena, whose data repositories have spawned this new (okay, not new, but drastically redefined!) area we commonly refer to as Meter Data Management – or simply MDM – which could also be interpreted as Millions of Daily Measurements!
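
Lest that tongue-in-cheek expansion sound like hyperbole, a quick back-of-envelope calculation bears it out. The meter count and read interval below are assumptions on my part, though typical of AMI deployments:

```python
# Back-of-envelope check on "Millions of Daily Measurements."
# Fleet size and read interval are assumed, but typical of AMI rollouts.
meters = 1_000_000        # assumed meter population for a mid-sized utility
reads_per_day = 24 * 4    # one interval read every 15 minutes
print(f"{meters * reads_per_day:,} measurements per day")  # 96,000,000
```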

This isn’t the first time around for this sort of thing by any means, but one could argue that it’s probably the most intensive data processing task we’ve seen since the so-called Y2K (Year 2000) run-up – at least for something that applies broadly across the utility industry landscape. But MDM is only one piece of what I see as drivers for this BABI Boom.

Another piece is something we often refer to as “non-operational data.” This generic term refers to the mounting volumes of data being collected and stored by substation devices – mainly relays – that provide an inside glimpse of what’s going on at the substation, aside from the “operational” aspects, that is. To be sure, we keep close track of supervisory control and data acquisition (SCADA) operations such as tripping/resetting breakers, changing transformer settings, reconfiguring switches and the like – all pretty much in real time. That information is recorded on the outbound side of communicating devices, and the results of those control actions are brought back with every scan – once per second in most cases.
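
For readers who haven’t lived inside a SCADA control room, that scan cycle behaves roughly like the toy loop below. The function names are placeholders of my own, not any real SCADA API, and a production master station is of course far more elaborate:

```python
# A toy rendering of the once-per-second SCADA scan cycle described above.
# Function names are placeholders, not any real SCADA API.
import time

def send_control_actions() -> None:
    """Outbound side: breaker trips/resets, setting changes, switching."""

def poll_rtu() -> dict:
    """Inbound side: statuses and measurements returned with each scan."""
    return {"breaker_52A": "closed", "bus_voltage_kv": 13.8}

for _ in range(3):          # a real master station scans continuously
    send_control_actions()
    snapshot = poll_rtu()   # results of the control actions come back here
    print(snapshot)
    time.sleep(1.0)         # one scan per second, in most cases
```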

But that’s only a fraction of the data that’s being gathered and stored locally (i.e., at the substation). Oscillography pertaining to various aspects of power quality, analytical data around sequence of events, alarm data, device operating durations, and the number of operations of a particular device are all stored there, waiting for someone to access this valuable information for reliability analysis and a host of other purposes. We’re not talking just a few megabytes of data here – these are huge volumes in many cases… easily petabytes, over time!
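
How do you get from megabytes to petabytes? A rough sizing exercise shows the arithmetic. Every figure below is an assumption for illustration, since actual record sizes vary enormously by device and utility:

```python
# Rough sizing of locally stored non-operational substation data.
# Every figure here is an assumed illustration; real volumes vary widely.
substations = 1_000                # assumed service territory
mb_per_substation_per_day = 500    # oscillography, SOE, alarms (assumed)
years = 20

total_bytes = substations * mb_per_substation_per_day * 1e6 * 365 * years
print(f"~{total_bytes / 1e15:.2f} PB accumulated")  # ~3.65 PB
```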

So why hasn’t this data been harvested previously, you might wonder? Mainly because of insufficient communications bandwidth, which brings us to another trend that I’ll address in a minute. But for now, let it suffice to say that the problem is rooted in outdated, outmoded real-time communications networks that are only recently catching up with the times.

As most SCADA engineers are painfully aware, a huge portion of the mission-critical communications that we depend on for these systems to function properly is still operating in the 1200- to 2400-baud (bits-per-second) range, often dictating dedicated lines to each and every substation RTU (remote terminal unit). Just bringing back the critical operational data consumes all of the available bandwidth, leaving a treasure trove of non-operational data stranded in remote storage silos with essentially no way to access it without disrupting or derailing the most vital real-time data exchanges.
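
To see just how little headroom those links leave, consider the small calculation below. The per-scan payload is an assumed figure, but the conclusion holds for any realistic RTU point count:

```python
# Why non-operational data stays stranded: a 1200 bps line is essentially
# saturated by routine operational polling. The payload size is assumed.
link_bps = 1200
bytes_per_second = link_bps / 10    # ~10 bits/byte with async start/stop framing
scan_payload_bytes = 120            # assumed once-per-second RTU scan response
utilization = scan_payload_bytes / bytes_per_second
print(f"{utilization:.0%} of the channel spent on operational data alone")  # 100%
```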

The third – and potentially the most prolific and diverse – database is one that only barely exists today. Perhaps you’ve come across the terms “ubiquitous data acquisition” or “grid sensors” in recent readings or conversations? These refer to a whole new genre of data gathering – one where it is not only feasible and economical to gather single points of data over a broad geographical area, but also where the data types can be widely diverse, with sample rates measured in months or years rather than the usual minutes or seconds.

I won’t get into the vast and rapidly unfolding details here, but we are talking about potentially millions of points that measure everything from atmospheric conditions and the conductivity of soil to specialized alarm detection and surveillance of areas and/or devices – tasks that were previously purely manual. Part of the solution set is new micro- and nano-technology, but the other key piece is the adoption and proliferation of less expensive communications systems (notably RF mesh, as compared to conventional wired and wireless alternatives) that are both inexpensive to buy and deploy and easy to configure and maintain for these specific purposes.
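
What might one record from this new genre look like? Here is one hypothetical shape; the schema and field names are illustrative assumptions of mine, not any standard or product:

```python
# A hypothetical shape for a ubiquitous grid-sensor reading. The schema and
# field names are illustrative assumptions, not any standard or product.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    sensor_id: str        # one of potentially millions of RF-mesh nodes
    kind: str             # e.g., "soil_conductivity", "ambient_temp", "intrusion_alarm"
    value: float
    sampled_at: datetime  # sample intervals may be months or years, not seconds

reading = SensorReading(
    sensor_id="mesh-00417",
    kind="soil_conductivity",
    value=0.031,          # siemens per meter (assumed units)
    sampled_at=datetime(2024, 4, 1, tzinfo=timezone.utc),
)
print(reading)
```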

Once these giant data repositories are created, the next challenge will be getting from the BA phase to the BI phase. My view is that first we will see a protracted analysis phase (currently underway) to explore these data mines and determine what is feasible. Then, once we have an idea of the possibilities, we will begin to see a rapid transformation from raw data into valuable information. Who will be first and who will do it best when it comes to exploiting the BABI Boom? Well, that remains to be seen. – Ed.