December 24, 2024

Electric Utility Outage Prediction Models: Assessing Their Accuracy & Implementing Improvements

by Dr. Ronald O. Mueller and Jason Singer

Introduction
Academic and industry researchers continue to press ahead, creating ever newer and better weather forecasting models (WFMs). At the same time, electric utilities are working intensively to improve the accuracy of their own outage prediction models. These two modeling areas are, of course, highly inter-related, since weather forecasts are the single most important input to a utility's outage prediction model (OPM).

But how do the major users of weather forecasts and outage prediction models - namely, electric utility storm centers - really know that the models and predictions are getting better and can therefore be relied on more during a storm event?

And if the forecasts are getting better, storm center personnel need to know by how much, for which types of storm events, and in which modeling area: the weather forecasts or the outage predictions. These important questions need to be addressed in a quantitative, scientific fashion.

To answer these questions, we need to measure the accuracy of weather forecasts and outage predictions, quantitatively, over a wide range of storm conditions. We believe the only realistic way to do that is to assemble, and then maintain going forward, a complete historical database containing all of a utility's:

  • Weather forecast data
  • Actual observational data
  • Outage prediction data
  • Actual outage data

Having this database in place, and maintaining it going forward, is the key to examining critical questions about the accuracy of weather forecast and outage prediction models. Moreover, with this platform, a utility can begin to quantify, for its specific service area, the benefits of bringing in new weather forecast sources, applying ensemble forecasting techniques, or scaling the forecasts it already has to account for local geographical conditions that may not be well modeled by the general weather forecasting services.
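To make the shape of this database concrete, here is a minimal sketch of the four data sets as relational tables. It assumes hourly weather records keyed by grid cell and outage records keyed by storm event; all table and column names are illustrative, not a utility standard.

    import sqlite3

    # Minimal illustrative schema for the four core data sets. Column
    # names and granularity (hourly, per grid cell) are assumptions.
    con = sqlite3.connect("opm_validation.db")
    con.executescript("""
    CREATE TABLE IF NOT EXISTS weather_forecast (
        grid_cell TEXT, valid_time TEXT, issued_time TEXT,
        wind_sustained_mph REAL, wind_gust_mph REAL, precip_in REAL);
    CREATE TABLE IF NOT EXISTS weather_actual (
        grid_cell TEXT, valid_time TEXT,
        wind_sustained_mph REAL, wind_gust_mph REAL, precip_in REAL);
    CREATE TABLE IF NOT EXISTS outage_forecast (
        storm_id TEXT, region TEXT, issued_time TEXT,
        predicted_outages INTEGER);
    CREATE TABLE IF NOT EXISTS outage_actual (
        storm_id TEXT, region TEXT, actual_outages INTEGER);
    """)
    con.commit()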

Having this platform in place will also allow a utility to evaluate its current outage prediction models quantitatively under different storm conditions, and to evaluate, in the same quantitative way, the benefits of various improvements that might be made to those models.

Database
To do this type of quantitative analysis, we need to assemble weather data and outage data, both forecast and actual, over the utility's entire service area for an extended historical period, and put processes in place to continue that assembly going forward. This is a substantial amount of work. A number of tricky issues must be dealt with to normalize, standardize, and summarize the data from the various sources, and a series of statistical calculations must be put in place for assessing the accuracy of the forecast data.
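As one example of the statistical calculations involved, the sketch below computes basic error statistics for a single weather variable. It assumes the forecast and actual records have already been joined on time and location into a pandas DataFrame with 'forecast' and 'actual' columns.

    import numpy as np
    import pandas as pd

    def forecast_accuracy(df: pd.DataFrame) -> pd.Series:
        """Error statistics for one weather variable, given a frame with
        'forecast' and 'actual' columns joined on time and location."""
        err = df["forecast"] - df["actual"]
        return pd.Series({
            "bias": err.mean(),                  # systematic over/under-forecast
            "mae": err.abs().mean(),             # mean absolute error
            "rmse": np.sqrt((err ** 2).mean()),  # penalizes large misses
        })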

But once the database is in place, you have a flexible analysis platform for assessing weather forecast accuracy and outage prediction model accuracy under a variety of circumstances and conditions: a particular storm event; a particular type of weather condition (e.g., periods when sustained winds exceed 40 mph); a specific date range; or a specific section of the utility's service area. And as the issues and challenges change, the platform can easily be turned to new questions and ideas, since all the historical weather and outage data, forecast and actual, is already loaded in the system.
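As a sketch of that kind of slicing, assume a pre-joined DataFrame named winds holding the 'forecast' and 'actual' sustained-wind columns from the sketch above, plus region and valid_time columns (all names are illustrative):

    # Slice the history down to hours with actual sustained winds over
    # 40 mph in one region and date window, then score the forecasts.
    # `winds` is an assumed pre-joined DataFrame, not a library object.
    high_wind = winds[
        (winds["actual"] > 40)
        & (winds["region"] == "coastal")  # illustrative region label
        & (winds["valid_time"].between("2023-01-01", "2023-12-31"))
    ]
    print(forecast_accuracy(high_wind))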

Assessing Accuracy of OPM Forecasts
Our specific focus in this paper is on assessing the accuracy of the OPM forecasts, which falls into three areas:

  • Validating a utility's existing OPM(s)
    The goal here is to create a series of accuracy measures for assessing the utility's current outage prediction model for each of the qualifying storms in the historical database. With these in hand, the utility will have a quantitative view of the accuracy of its current model. As new qualifying storm events occur, the new OPM forecast and actual data can be added into the system to provide a continuing benchmark of how well the OPM performs over time.
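    One simple form such accuracy measures could take is a per-storm scorecard, assuming a frame with one row per qualifying storm holding predicted and actual outage counts (column names are illustrative):

        import pandas as pd

        def opm_scorecard(storms: pd.DataFrame) -> pd.DataFrame:
            """Per-storm error of the OPM; `storms` is an assumed frame
            with storm_id, predicted_outages, actual_outages columns."""
            out = storms.copy()
            out["error"] = out["predicted_outages"] - out["actual_outages"]
            out["pct_error"] = 100 * out["error"] / out["actual_outages"]
            return out[["storm_id", "predicted_outages", "actual_outages",
                        "error", "pct_error"]]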
     
  • Simulating the performance of the OPM under different weather forecasts
    With both weather and OPM data loaded into the database, a new window of analysis opens up. The accuracy of weather forecasts translates directly into the accuracy of the outage prediction model, since weather is the predominant input to the outage model: the worse the weather forecast, the worse the outage prediction.

    In this second type of analysis, the database platform can be used to disentangle the compounding inaccuracies of the weather forecast from those of the outage prediction model itself. That is, we can run simulations in which we improve the accuracy of the weather forecasts for all or some historical events (or eliminate forecast inaccuracies altogether by replacing the forecast weather data with the actual observations), and then see how much the outage prediction forecasts improve. In this way, the database platform shows how accurate the OPM forecasts are intrinsically, apart from the inaccuracies introduced by the weather forecasts.
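    A minimal sketch of that simulation idea follows, where run_opm stands in for the utility's actual outage model and is assumed, not defined here:

        def simulate_with_better_weather(weather_fcst, weather_obs, alpha=1.0):
            # alpha = 0 keeps the original forecast; alpha = 1 replaces it
            # with the actual observations; values in between shrink the
            # forecast error proportionally.
            blended = weather_fcst + alpha * (weather_obs - weather_fcst)
            return run_opm(blended)  # run_opm: the utility's OPM (assumed)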
     
  • Using the database framework to assess new OPM Features
    The database platform can also serve as a validation framework for testing the benefits of possible new enhancements to a utility's OPM. As improvements are made to the model, the framework can be used directly to assess how much each new feature improves the accuracy of the OPM, judged over the full historical storm dataset.

    So the database provides a quantitative benchmark for judging improvements one by one, giving a utility a fact-based, data-driven way to assess its current OPM and then the benefits of each improvement made to the model.
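    As a sketch of that benchmark loop, assume each candidate OPM variant is a callable that returns a predicted outage count for one storm row (the storms frame is the same assumed per-storm table as above):

        import pandas as pd

        def benchmark(models: dict, storms: pd.DataFrame) -> pd.Series:
            """Score each OPM variant over the full historical storm set;
            `models` maps a label to an assumed per-storm predictor."""
            scores = {}
            for name, model in models.items():
                preds = storms.apply(model, axis=1)
                scores[name] = (preds - storms["actual_outages"]).abs().mean()
            return pd.Series(scores).sort_values()  # lower MAE is better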

Summary
We hope we have made the case: the validation capabilities inherent in this weather and outage database platform make it an essential tool for a utility to quantitatively assess its current weather and outage prediction models, and to assess improvements to them as well.

While not easy to build, this database platform gives a utility a data-driven, statistics-based framework to see what the current OPM actually delivers, and what different possible enhancements to the system can achieve.
 

About the Authors

Dr. Ron Mueller is CEO and founder of Macrosoft. He is also Macrosoft's chief scientist, defining and structuring Macrosoft's path forward on new technologies and products, such as Cloud, Big Data, and AI. Ron has a Ph.D. in Theoretical Physics from New York University, and worked in physics for over a decade at Yale University, The Fusion Energy Institute in Princeton, NJ, and Argonne National Laboratory. Ron also worked at Bell Laboratories in Murray Hill, NJ, where he managed a group on Big Data, including very early work on neural networks.
 

Jason Singer has been the director of Macrosoft's Utilities Practice since 2005. Jason manages all aspects of Macrosoft's utility portfolio, including Resources on-Demand, RAMP-UP, Outage Central, and Mine-Weather. He works closely with dozens of major utility clients to deliver technology solutions that solve emergency restoration challenges. Jason earned a bachelor's degree from Rutgers University.