In another few months the August 2003 blackout will mark its third anniversary. It seems hard to believe it’s been almost three years already, doesn’t it? Well, let’s do a quick flashback to that rather “dark” period (sorry, I just couldn’t resist the pun) of history to contemplate what has happened since the biggest sustained power outage ever recorded struck an unsuspecting populace indiscriminately and without warning… well, sort of…
First there was the shock of such a huge collapse across what was once the most reliable power grid in the world. Who could have imagined it could ever happen here? Yeah, right; no one outside the power industry maybe! But I had two different people tell me just days before the blackout – quite matter-of-factly – that such an event was probably imminent. No, I don’t think they had any insider information, just a healthy dose of common sense. It’s been no secret that T&D investment has been severely lacking for a long time and that deregulation has further strained an already weakened infrastructure, so for many, this hardly came as any surprise.
Let us also not forget that we’ll be staring down the barrel of another long, hot summer across a large portion of the North American continent well before August arrives this year. In most areas, the drought is reaching crisis proportions, and just the other day we set an all-time record for the high temperature here for an April day: 92 degrees! Should we expect it to be cooler in July and August? On the contrary, once again this year Mother Nature appears to be setting the stage for another hot, dry summer. That usually means we’ll possibly see power interruptions, rolling brownouts or – and dare I even say this – perhaps another widespread outage?
As regular readers of this column know, I live very near (what used to be) New Orleans, so long hot summers are nothing new here. However, in the aftermath of Hurricane Katrina, we now have a whole new appreciation for the effects of power outages. Indeed, the attendant problems caused by any such catastrophic event – whether triggered by natural disasters, equipment flaws, operational failures or human error – are all too well documented and still fresh in the minds of people all across the Gulf South. (We’re talking about summer outages here, but of course, the threat of blizzards and ice storms will also be looming again as winter approaches in the colder parts of the Upper Midwest and Northeast.)
Yet when we look back at what has transpired since that fateful August 14th in 2003, there just haven’t been many extraordinary changes that would give a reasonable person any tangible assurance that it won’t happen again. Sure, there has been some progress – notably, giving the North American Electric Reliability Council (NERC) enforcement powers where compliance was mainly voluntary in the past – but that certainly offers no guarantees.
Let’s face it, the North American grid – though a qualified engineering marvel – is a VERY complex network, to say the least. And it’s axiomatic that the more complicated the network, the harder it is to model. Yet as an industry, we seem hell-bent on finding a way to prevent future outages. Hey, it’s admittedly a noble undertaking that challenges the engineering mind in a way that is rarely presented. It is arguably a unique engineering problem in a lot of ways, starting with the fact that the grid is a living thing. That is, it is constantly being changed and reconfigured, and to make matters worse, it’s happening at the speed of light! BUT, challenges aside, is outage prevention really where we want to bet all of our R&D chips?
This brings me to the real point of this commentary: Other than the huge (and for some, irresistible) challenge that preventing blackouts poses, why are we preoccupied with prevention when what we really need is a way to recover… faster and more efficiently? I’m not suggesting that we shouldn’t be putting resources into prevention; clearly we should and in fact, we must. However, why is so little apparently being done to deal with the more pragmatic dimensions of outages: Fault detection, isolation and restoration?
More specifically, the 8/14 blackout was a certified disaster from both technical and economic perspectives. But when we put aside the political posturing and blame game that ensued immediately after the event and really examine the collateral damage, here’s what we find:
- The most prolonged outages (roughly 36-38 hours) caused the majority of the economic damage, since this was a long enough period for refrigeration to be severely depleted, production lines to be brought to a complete halt, backup batteries to discharge, etc.
- A substantial portion of the affected areas experienced outages that were a full order of magnitude shorter; many areas as short as a few minutes.
- Virtually no real damage to generation, transmission or distribution equipment was reported throughout the affected areas.
But what was reported was scores of easily preventable failures that unnecessarily prolonged the outage. These were mostly failures not of equipment or tools, but rather of policies and procedures, manifested by staff who had no idea how to bring their portion of the network and associated assets back up after a complete voltage collapse. Things like empty backup generator fuel tanks; broken or compromised emergency equipment; insufficient access to critical staff needed to properly manage and/or execute emergency procedures; and many other similar problems are what really kept people and businesses in the dark for the most prolonged periods.
The point of underscoring these failures is not to lay blame, but rather to illustrate that had utilities been able to identify the problems and remediate them quickly – say, in a few minutes or a few hours rather than a day-and-a-half – far less economic and collateral damage would have occurred. Moreover, most people would have been far more inclined to write it off as a minor inconvenience rather than a loss worthy of litigation.
Sure, everyone would like to feel like the grid is bulletproof and that it will never fail. But when (not if) it does, wouldn’t it be better to be able to recover in a few minutes rather than a few hours or days? The fact is, there is plenty of technology that has been around for a long time that can help minimize outage duration and, hence, the magnitude and intensity of the resultant losses. Why not take a greater portion of what we are currently spending on prevention and put it toward rapid recovery? After all, it’s a lot easier to do the really difficult R&D with the power on than with it off. - Mike
Behind the Byline
Mike Marullo has been active in the automation, controls and instrumentation field for more than 35 years and is a widely published author of numerous technical articles, industry directories and market research reports. An independent consultant since 1984, he is President and Director of Research & Consulting for InfoNetrix LLC, a New Orleans-based market intelligence firm focused on Utility Automation and IT markets. Inquiries or comments about this column may be directed to Mike at MAM@InfoNetrix.com.
©2006 Jaguar Media, Inc. & Michael A. Marullo. All rights reserved.