April 25, 2024

A Brief History of OMS and DMS – part I

By Martin Bass and Bob Fesmire, ABB Inc.

Distribution operations, like those at the transmission level, have undergone great change as a result of advances in information technology as well as external forces such as market restructuring. This article is the first in a two-part series that will examine the development of the two primary information systems that underlie distribution operations—outage management and distribution management—and how they are now merging to form a single platform. Part II will cover the convergence, but before we get to that, it’s helpful to take a look back at how these two important tools came to be where they are today.

Outage Management Systems
The need to respond to outages is as old as power delivery systems themselves. To meet this need, Outage Management Systems (OMS) have been used extensively in the US and other countries to manage distribution systems. However, even well after the computer revolution of the ’70s and ’80s, many distribution systems in the US continued to rely on paper-based systems to identify and track outage patterns. Distribution networks in the US are typically configured radially and cover large distances, making it expensive to monitor the status of the distribution system, particularly outside of the substation. In more densely populated regions, the cost of telemetry and automation could be justified on a cost-per-customer basis, but for many areas the advantages of automation remained out of reach.

Often the only way that a distribution utility would know there was a problem was when a customer called to report an outage. The utility collected a set of outage calls, and from the pattern of calls received, determined the likely location and cause of the outage. A crew was then sent to the location of the outage to investigate further and effect repairs. Today, of course, reductions in the cost of computing power and the advent of more advanced applications have vastly improved the outage management capabilities of the average utility. But, as any industry veteran can tell you, it was not always so.

Paper-Based Systems
Prior to the introduction of computerized systems, calls were received by the utility and were either written up by hand on a ‘ticket’ or were entered into a computer and then printed. These tickets were often then manually sorted by the circuit on which the given customer was connected and placed into a pigeonhole for further action.

The tickets were then reviewed by experienced ‘analyzers’, who looked at each ticket, determined the electrical location of each customer associated with it, and attempted to identify the root cause of the outage. Printed electrical maps were used to identify the location of each outage.

This worked well in day-to-day operations where the volume of calls was light and the number of outages small. However, during large storms, a high volume of calls would overwhelm the system. In addition, there was a high risk of error (e.g., if the ticket was misfiled), and the analysis was time-consuming. Each customer’s location had to be identified on the paper map and then a prediction had to be made.

In order to improve customer service and reliability, utilities are required by their regulatory agencies to collect customer outage statistics. With a paper-based system, this was an additional burden and the accuracy of the statistics was often difficult to confirm. In many cases, the utility had a paper form to fill out indicating how many customers were affected. This often amounted to guesswork, especially when a wire-down halfway along a feeder was the cause of the outage. As a result, in these paper-based systems, outage durations and customer minutes out were often underestimated.

Early Computer-Based OMS
In order to improve outage prediction accuracy and to reduce the time required to analyze each outage, computer-based outage management systems began to emerge. Initially these were developed in-house by individual utilities. Some of these systems were re-sold to other utilities and their functionality has been augmented through the years.
Two kinds of OMS emerged from this period, each based on a different underlying technology. Connectivity-based systems utilize a database that represents the electrical relationships of the distribution system. These systems hearken back to the mainframe days, and have been in use the longest. Spatial-based systems are newer and rely on geographic information system (GIS) or map-based technology that takes advantage of more powerful modern servers.

Many of the connectivity-based systems use a ‘feeder tree’. This is usually a static description of each feeder and the position of each protective device (e.g., fuse or recloser) in the feeder hierarchy. Customers are assigned to a particular circuit and upstream protective device. When a customer call is received, the system identifies the problem as an outage call on the given circuit and protective device. A single call on a circuit or one occurring below a protective device in the tree is assumed to be a customer-level problem. If more calls are received on the same protective device, the system may automatically ‘roll up’ the calls to the protective device level. This automatic grouping can eventually cause the predicted outage to roll up all the way to the feeder, where a feeder lockout outage will be predicted.
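The roll-up behavior described above can be sketched in a few lines. This is a simplified illustration, not the logic of any actual OMS product; the class names and the two-call roll-up threshold are assumptions for the example.

```python
# A minimal sketch of feeder-tree outage prediction.
# Device names and the roll-up threshold are illustrative only.

class ProtectiveDevice:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # upstream device (None for the feeder breaker)
        self.children = []
        self.customer_calls = 0     # calls from customers assigned directly here
        if parent:
            parent.children.append(self)

    def calls_below(self):
        """Total calls at this device and everything downstream of it."""
        return self.customer_calls + sum(c.calls_below() for c in self.children)

def predict(device, rollup_threshold=2):
    """Start at the device the caller is assigned to and roll the
    predicted outage upstream while enough calls accumulate below
    the parent device. A single call stays at the customer level;
    repeated calls can roll up all the way to the feeder breaker."""
    node = device
    while node.parent and node.parent.calls_below() >= rollup_threshold:
        node = node.parent
    return node
```

With one call on a fuse, the prediction stays at that fuse (a customer-level problem); as further calls arrive under the same recloser, the same function rolls the prediction up toward the feeder breaker, mirroring the automatic grouping described above.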

Connectivity-based systems clearly are a significant improvement on paper-based systems. Their accuracy is much higher, and the analysis time is greatly reduced during high call volumes. Outage statistics are also more accurately maintained. The number of customers downstream of each protective device is known, so the number of customer minutes for an outage can be calculated automatically.

On the downside, these systems utilize a tabular user interface that makes it difficult for the user to visualize the outage location. It is also difficult to reflect the effect of any feeder reconfiguration that may have been performed, especially if the reconfiguration is only temporary.

Finally, it is difficult to indicate partial restorations on such a system. Although partial restorations represent a relatively small percentage of all outages, they can have an impact on the overall system reliability indices if they are not counted correctly. A typical case of a partial restoration is where a wire down causes a protective device to operate. The wire down can be made safe and the protective device can be closed in a very short time. However, the customers downstream of the wire down will take longer to restore. Thus some of the customers (upstream) may experience a short outage, while others (downstream) will experience a longer outage. If the restoration step that occurred when the protective device was closed is not accurately recorded, along with the number of customers restored, then the reliability indices will over-count the customer minutes.
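The over-counting effect is easy to see with numbers. The customer counts and times below are invented for illustration: suppose a wire down opens a device serving 100 customers, the device is closed after 15 minutes (restoring the 60 customers upstream of the damage), and the remaining 40 downstream customers are restored at 120 minutes.

```python
# Customer-minutes with and without recording a partial-restoration step.
# All customer counts and durations here are illustrative.

def customer_minutes(steps):
    """steps: list of (customers_restored, minutes_out) pairs."""
    return sum(n * minutes for n, minutes in steps)

# Partial restoration recorded correctly:
correct = customer_minutes([(60, 15), (40, 120)])   # 5700 customer-minutes

# Partial restoration step lost: all 100 customers counted
# as out for the full repair time:
overcount = customer_minutes([(100, 120)])          # 12000 customer-minutes
```

Losing the single restoration step here more than doubles the reported customer-minutes, which is exactly how unrecorded partial restorations distort the reliability indices.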

More recently, spatial-based outage management systems have been developed. At their simplest level they do not rely on connectivity, but rather allow the actual location of trouble calls to be displayed on a geographical map. This map may have the electrical data displayed as a graphical layer on top of the geographical features. These systems require the user to visually identify the call pattern and some allow the user to draw a polygon around the calls in order to group them into a single outage. The advantage of spatial-based systems is that the extent of the outage can be seen quickly and a detailed connectivity model is not needed. This works well in smaller utilities that do not perform extensive feeder reconfiguration and where the connectivity is unknown. However, spatial-based systems do not usually offer accurate customer minutes counts or partial restoration functionality.
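Underneath the draw-a-polygon grouping described above is a standard point-in-polygon test. The following is a generic sketch of that idea (using the classic ray-casting algorithm), not the code of any particular spatial OMS; real products typically delegate this to GIS libraries.

```python
# Group trouble calls enclosed by an operator-drawn polygon into one outage.
# Coordinates are plain (x, y) pairs for illustration.

def point_in_polygon(pt, poly):
    """Ray-casting test: cast a ray to the right of pt and count
    how many polygon edges it crosses; an odd count means inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def group_calls(calls, polygon):
    """Return the trouble calls whose map coordinates fall inside
    the polygon the operator has drawn around the call pattern."""
    return [c for c in calls if point_in_polygon(c, polygon)]
```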

Second Generation Outage Management Systems
A new breed of outage management systems has evolved over the past ten years or so. These systems are characterized by a graphical user interface that includes the ability to display one or more feeders at the same time, or even the entire distribution system in a single display known as a world map.

These systems require an accurate and complete connectivity model, from the distribution substation breaker down to the customer transformer. (The low-voltage side is usually not modeled, both to limit the overall network model size and because collecting and maintaining that level of detailed data is rarely cost-effective.)

Second-generation OMS has two origins: distribution planning software and GIS, both of which could display the huge amounts of data required for distribution systems. Early on, the planning-based systems had an edge with their application software, such as power flows and short circuit analysis, while the GIS-based systems had an advantage with their ability to display the geographical data.

These systems were often server-based software and at the time were pushing the limits of the available technology. Consequently, these systems often slowed down as the number of users increased (e.g., during storms), which presented a secondary problem in that adding manpower only made things worse.

Over time, the functionality continued to evolve and the hardware capabilities eventually caught up with the demands of the software. However, these systems still had their limits with regard to the maximum number of concurrent graphical users and overall scalability.

Data, Data Everywhere (but not a drop to drink)

One of the challenges facing the second-generation outage management systems was the source of the network connectivity data. Up to this time, there were not many applications that required accurate connectivity data within the utility. Planning applications sometimes had a model, but they did not always include the individual customer connections, instead reducing each lateral to an aggregated load. Legacy OMS would possibly have the feeder trees described earlier, but the exact placement of customers along the feeder was not known. Phasing information was also typically unknown.

Thus a utility looking for an OMS would either be deterred by the high data quality requirements, or would initiate an OMS project that would later fail due to the lack of quality data. Often a would-be OMS buyer would opt for a GIS, since this was a prerequisite to running a useful OMS, a trend that in part led to the rise of GIS-based OMS.

The selling point for such systems was that there was no need to develop an interface between the GIS and the OMS, since everything utilized the same network model. In practice however, the GIS network model may need to be customized to suit other enterprise applications. Typically the OMS network model is a subset of the GIS model. The OMS model is also the as-operated model, and represents the current state of feeder configurations and temporary devices, such as line cuts and jumper lines. The GIS model is the as-built model and, increasingly, the as-designed model. Thus the GIS and OMS models have different needs and are usually not the same.

Over the years, utilities have cleaned up their GIS data. This effort was driven by the rise in the use of GIS connectivity data, particularly in OMS systems. Utilities recognized the power of the OMS and introduced work flows to ensure ongoing data quality. At the same time, OMS systems began to include built-in tools that could rapidly identify data errors and apply temporary corrections, prior to the data being corrected permanently in the GIS.

Distribution Management Systems

While the evolution from paper-based systems to connectivity-based and most recently spatial-based systems in the OMS arena was taking place, a similar progression was happening to the distribution company’s other IT staple—the distribution management system (DMS).

DMS systems have their roots in transmission SCADA systems. As automation has moved downwards and into distribution substations, there has been an increasing need to provide functionality for distribution applications. Distribution management systems originated as either extensions to the existing transmission SCADA—by adding additional points to cover the feeder breakers—or as a standalone system. Both types of DMS usually have remote terminal units (RTUs), a communication front-end, alarm systems and picture file-based displays.

What distinguishes these systems from their transmission level predecessors is the addition of distribution-specific functionality such as the ability to add temporary devices like line cuts and jumper lines. Since most distribution systems run in a radial configuration, it is often necessary to operate feeder tie switches to reconfigure feeders, either to restore outages or to adjust to different loading situations. This dictates a need to dynamically identify whether and from what direction a given line is being energized. In addition, such systems are nearly always unbalanced, meaning that each electrical phase is operated independently.
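The "whether and from what direction" question above reduces to a trace through the closed switches from each energized source. The sketch below shows the idea as a simple breadth-first search; the node and switch names are invented for the example, and a production DMS trace would additionally carry phase and loop-detection information.

```python
# Trace energization from substation sources through closed switches/lines.
from collections import deque

def trace_energization(sources, closed_edges):
    """Breadth-first trace from each energized source node through the
    closed edges; returns {node: feeding_node}, where the feeding node
    indicates the direction from which each node is energized
    (None for a source). Nodes absent from the result are de-energized."""
    adj = {}
    for a, b in closed_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    fed_from = {s: None for s in sources}
    queue = deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in fed_from:
                fed_from[nxt] = node   # energized from this direction
                queue.append(nxt)
    return fed_from
```

Closing a feeder tie switch simply adds an edge to `closed_edges`, and re-running the trace shows formerly de-energized nodes now fed from the other feeder's direction, which is exactly the reconfiguration scenario described above.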

Another characteristic of a distribution system is that change is the norm. New residential construction and routine maintenance mean that the distribution network model changes frequently. It is not uncommon for 10,000 or even 100,000 changes to occur to a distribution system in a single week. Such changes must be applied incrementally to the DMS while it is up and running.

Finally, the number of status points is very large—in the range of 100,000 to 600,000 for a larger utility. Many of these status points are, however, pseudo points, since there is no telemetry. One of the challenges faced by early distribution management systems was the ability to handle the large number of points and also adapt the picture file displays to the geographic world map displays that second-generation outage management systems supported. These early systems would often connect substation one-line diagrams together into a larger display in order to generate the world map. The maintenance of this display was typically performed within the proprietary display building environment of the DMS. This worked well for smaller utilities, and for utilities that did not already have an enterprise-wide GIS. However, the process could quickly become unwieldy.

To illustrate the magnitude of the challenge, a typical large transmission one-line diagram display would have between 5,000 and 10,000 picture elements. By contrast, a large world map display could have between 1,000,000 and 5,000,000 picture elements. Thus the ability to display world maps presented a technical challenge to the DMS vendors. The ability to maintain world maps represented a large and continuous effort on the part of the utility, particularly if they were using the DMS maintenance tools.

Early DMS displays would skirt around this problem by displaying one or more feeders at a time rather than the entire world map. But this made it difficult for the end user to get a good feel for the overall situation occurring with the distribution system, and they would continue to rely on their wallboard maps with pins to indicate switch positions and tags.

The world map display problem has since been solved by applying new algorithms in the display software and by making use of powerful modern PCs to perform much of the display processing. Other approaches use server-based technology, similar to that used by web sites to display geographic maps. The maintenance problem can be solved by integrating the DMS with a GIS, or by providing a GIS-like data engineering environment within the DMS. The GIS thus becomes the source of the distribution network model and is used to create the world map displays. However, DMS display software is typically not geared toward distribution operations, and is not currently as full-featured as the displays present in OMS-type systems.