November 21, 2024

A Brief History of OMS and DMS – part II

By Martin Bass and Bob Fesmire, ABB Inc.
This article is the second in a two-part series on the evolution of Outage and Distribution Management Systems. Part I covered the development of OMS and DMS from manual, paper-based processes to streamlined operations managed with highly specialized information systems. In part II, we examine the current status of these two key components, with a view toward their eventual convergence.

Third-Generation OMS
By the mid-1990s, Outage Management Systems had evolved from trouble ticket systems into sophisticated computer programs that provided intuitive graphical displays of the distribution system. Continuing advances in computing power and OMS capabilities have produced a third generation of systems that are capable of handling enormous call volumes. Indeed, utilities today are increasing the size of the pipe that feeds the OMS with staffed call centers, automated Interactive Voice Response (IVR) systems, and third-party high-call-volume services. The result is that OMS call handling capacity has skyrocketed.

These advances in call handling capacity are matched by the ability to analyze and group trouble calls. The grouped calls are then sent to the graphical user interface, which presents not only the location of the individual calls but, more importantly, the results of the analysis. The third-generation OMS can represent these large call volumes in a geographic display in real time and provide the same information to a large number of users simultaneously.
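As a rough illustration of that grouping logic, consider the short Python sketch below. It is not drawn from any particular product; the connectivity table, device names and voting rule are all invented. Each caller's service point is walked up a simplified connectivity model, and the deepest protective device common to every call is proposed as the probable open device.

from collections import Counter

# Hypothetical connectivity model: each node maps to its upstream parent.
# A real OMS would take this from the GIS/as-operated topology, not a dict.
UPSTREAM = {
    "meter_A": "xfmr_1", "meter_B": "xfmr_1", "meter_C": "xfmr_2",
    "xfmr_1": "fuse_7", "xfmr_2": "fuse_7",
    "fuse_7": "recloser_3", "recloser_3": "breaker_F12",
}
PROTECTIVE_DEVICES = {"fuse_7", "recloser_3", "breaker_F12"}

def upstream_devices(node):
    """Yield every protective device between a service point and the source."""
    while node in UPSTREAM:
        node = UPSTREAM[node]
        if node in PROTECTIVE_DEVICES:
            yield node

def group_calls(calling_meters):
    """Return the most specific protective device that explains all calls."""
    if not calling_meters:
        return None
    votes = Counter()
    for meter in calling_meters:
        for device in upstream_devices(meter):
            votes[device] += 1
    # Walk up from any one caller; the first device that every other call
    # also reaches is the deepest common explanation for the outage.
    for device in upstream_devices(calling_meters[0]):
        if votes[device] == len(calling_meters):
            return device
    return None

print(group_calls(["meter_A", "meter_B", "meter_C"]))   # -> fuse_7

A production outage engine would layer call timing, device weighting and crew feedback on top of this, but the underlying inference is similar: the calls collectively "vote" for the upstream device that best explains them.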

Fat Client or Thin?
To overcome performance and scaling issues in second-generation OMS systems, new client/server architectures were applied in developing the latest family of programs. The essential question here comes down to how much processing is performed by the server and how much is handled by the user’s local PC, or client. A bias toward local processing is known as a “fat client” approach, whereas systems that perform relatively little at the PC level are known as “thin client”.

With the advent of more powerful PCs and the emergence of 64-bit processing power, some vendors elected to implement a software architecture that would take advantage of these advances. In a fat client architecture, the network model is maintained locally in the client PC. This means that it can perform much of the processing that would otherwise be performed on the server.

Consider the example of dynamic line coloring in an OMS world map, where the energization status of a feeder is shown according to color. When the feeder is energized, it is colored red, for example. When the feeder circuit breaker is opened, the display will indicate that it is de-energized by changing the line color to white. This means that in the user’s display, every line segment downstream of the open breaker must have its color changed too.

With a fat client solution, the server tells each client that the feeder circuit breaker has opened, using very little communications bandwidth. From there, the client can color all of the downstream line segments on its own, because it has a complete copy of the network model stored in local memory. With a large number of clients and a large number of network operations, this approach considerably reduces the server load and the required communications bandwidth. It is therefore possible to add client workstations with minimal extra loading on the server, making the system highly scalable.
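The division of labor can be conveyed in a few lines of Python. The adjacency table and names below are invented; the point is simply that the only message from the server is "device X has opened," and the client does the rest from its local copy of the model.

# Local copy of the feeder connectivity held by a fat client (invented data).
DOWNSTREAM = {
    "breaker_F12": ["seg_1"],
    "seg_1": ["seg_2", "seg_3"],
    "seg_2": ["seg_4"],
    "seg_3": [],
    "seg_4": [],
}

segment_color = {element: "red" for element in DOWNSTREAM}   # red = energized

def on_device_open(device_id):
    """Handle the one message the server sends ("device X has opened") by
    recoloring every downstream element using only the local model."""
    stack = [device_id]
    while stack:
        element = stack.pop()
        segment_color[element] = "white"        # white = de-energized
        stack.extend(DOWNSTREAM.get(element, []))

on_device_open("breaker_F12")
print(segment_color)    # everything below the breaker is now white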

As the network model grows, so does the time needed to initialize each client's in-memory copy, particularly when many workstations are initializing at the same time. This issue has been addressed in a number of ways, typically by caching the data on a local disk on the client PC and by using data compression techniques.
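One plausible form of that caching, sketched in Python with an invented file layout: the client keeps a compressed snapshot of the model on its local disk, tagged with a model version, and only pulls a full copy from the server when the version has changed.

import gzip
import json
import os

CACHE_FILE = "network_model_cache.json.gz"      # invented local cache file

def load_model(server_version, fetch_from_server):
    """Return the network model, preferring the compressed on-disk cache and
    falling back to a full server download only when the cache is stale."""
    if os.path.exists(CACHE_FILE):
        with gzip.open(CACHE_FILE, "rt") as f:
            cached = json.load(f)
        if cached.get("version") == server_version:
            return cached["model"]              # warm start: no bulk transfer
    model = fetch_from_server()                 # slow path: full download
    with gzip.open(CACHE_FILE, "wt") as f:
        json.dump({"version": server_version, "model": model}, f)
    return model

# Example usage with a stand-in download function.
model = load_model("2024-11-01", lambda: {"feeders": ["F12", "F13"]})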

In a “thin client” approach, the server retains the network model and performs most of the calculations. In our line coloring example, the server does not tell the clients that the feeder circuit breaker has opened—it has to tell each client to recolor every line segment, which typically means greater communications traffic.

This approach is best suited to applications that do not require much local intelligence and where response time greater than one second is acceptable. Techniques for reducing the network traffic in a thin client system rely on the server only sending information to the client as needed. De-cluttering techniques are also used to limit the amount of information shown on a given display. As the user zooms in, more detailed information is sent to the client to cover the area of interest.
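A server-side de-cluttering rule might look like the Python sketch below, in which the feature classes, zoom thresholds and viewport test are all invented for illustration: only features that are visible at the requested zoom level, and that fall inside the current viewport, are sent to the client.

# Minimum zoom level at which each feature class is sent (invented values).
MIN_ZOOM = {"substation": 1, "feeder_backbone": 3, "lateral": 6, "transformer": 9}

def features_for_view(features, zoom, viewport):
    """Return only the features the thin client should draw at this zoom."""
    xmin, ymin, xmax, ymax = viewport
    return [
        f for f in features
        if zoom >= MIN_ZOOM.get(f["class"], 99)
        and xmin <= f["x"] <= xmax and ymin <= f["y"] <= ymax
    ]

sample = [
    {"class": "substation", "x": 5, "y": 5},
    {"class": "transformer", "x": 6, "y": 6},
]
print(features_for_view(sample, zoom=4, viewport=(0, 0, 10, 10)))
# Only the substation is sent until the user zooms in further.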

Thin client OMS is good for users interested in basic information, who do not need a real-time response and do not need to perform CPU-intensive functions that would tie up the server. Typical users in this category include utility executives, customer service representatives and possibly certain critical customers who need to know the current status of their outages. An additional benefit of the thin client approach is that special application software does not need to be installed on the PC, so maintenance of the application is easier. The end user only needs access to a web browser.

The local copy of the distribution network model used in fat client OMS offers an added benefit in that it enables advanced DMS functions to be performed locally on the client machine. Applications like load flows and short circuit analysis require substantial processing resources, given the size of typical distribution networks. In a fat client model, these applications can be run on individual client PCs, allowing processes to take place in parallel and greatly reducing communications traffic and processing time.
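The architectural point can be illustrated with a minimal Python sketch, assuming an invented per-feeder model and a stub standing in for a real load flow solver: the studies run in parallel on the client's own processor cores, with no additional load placed on the server.

from concurrent.futures import ProcessPoolExecutor

def load_flow(feeder_model):
    """Stand-in for a CPU-heavy load flow; a real solver would iterate over
    the feeder's nodes and branches (the model contents here are invented)."""
    return {"feeder": feeder_model["name"], "max_loading_pct": 87.5}

def study_all_feeders(local_model):
    """Run one load flow per feeder, in parallel, against the locally held
    copy of the network model; the server is never involved."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(load_flow, local_model["feeders"]))

if __name__ == "__main__":
    model = {"feeders": [{"name": "F12"}, {"name": "F13"}]}
    print(study_all_feeders(model))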

The Melding of DMS and OMS
DMS and OMS systems have continued to evolve on their own, but as the preceding example shows, there is also a cross-pollination taking place. Eventually, these two systems may converge in a single platform from which the distribution utility handles all its day-to-day operations. Following is a survey of several functional areas in which we explore this possibility.

Automation and Real-time Data Collection
While the DMS has a native data acquisition function, based on RTUs, the OMS has had to rely on data received via an EMS or distribution SCADA system.

However, the march down the feeder continues. As the benefits of telemetering at the substation level are proven with the DMS, efforts to add automated switches and telemetry further down the feeder have intensified among many utilities. This trend has accelerated with improvements in communications technologies, the availability of data concentrators at the substation, and the use of standardized protocols such as DNP3 and Modbus.

The benefits of remote controlled switches along the feeder are clear. Feeders may be reconfigured in real time to adjust to changes in loading throughout the day. The remote switches also report status and flow measurements to support switching decisions.

Automation is also improving OMS, which increasingly draws on data from automated meter reading (AMR) systems. The main purpose of AMR is, of course, to reduce the cost of reading individual meters. Though the cost of 100% coverage can be steep, many utilities have implemented pilot programs that can be leveraged to great effect in an OMS context.

With reasonable coverage of automated meters, the OMS can use these meters as addressable status monitors to detect outages and to confirm restoration. The ability to confirm restoration is especially valuable because there may be several outages at the same location: the crew believes it has fixed the problem, only to find that it was masking another (a so-called nested outage). The AMR system allows the OMS to query strategic meters downstream of the original outage and report to the crew if there are additional problems in the vicinity. The time saved can be significant, and shows up as reductions in both crew time on site and customer time without service.
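The restoration check might look something like the Python sketch below, in which the meter names and the ping interface are invented stand-ins for a real AMR head-end: once the crew reports the repair complete, strategic meters downstream of the original outage are pinged, and any that remain dark point to a nested outage.

def verify_restoration(downstream_meters, ping_meter):
    """Ping strategic AMR meters below the repaired device and report any
    that still do not respond, which is evidence of a nested outage nearby.
    `ping_meter` is an invented callable standing in for the AMR head-end."""
    still_out = [m for m in downstream_meters if not ping_meter(m)]
    return {"restored": not still_out, "investigate_near": still_out}

# Toy head-end: pretend every meter except 'meter_C' answers the ping.
print(verify_restoration(["meter_A", "meter_B", "meter_C"],
                         lambda m: m != "meter_C"))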

Display Evolution
As we covered in Part I, outage management systems took the initial lead in developing graphical user interfaces, but in recent years additional capabilities and functions have been added to DMS displays. Improvements in the world map graphics are evident and the gap is closing between the OMS and DMS user interfaces.

Now, some OMS systems are being delivered with the capability to generate schematic diagrams of multiple feeders on the fly. The idea is not new, and many systems can generate single-feeder diagrams that typically look like a long line with multiple short taps emanating from one side (often referred to as 'stick diagrams'). What is new is the ability to show 20-50 feeders around 4-5 substations in a single schematic diagram. A further challenge is to ensure that the diagram looks similar each time it is generated, even after changes in the network topology. Such a tool allows feeders to be reconfigured more effectively and switching plans to be written more readily. It also reduces or eliminates the need to use paper maps for switching purposes, further reducing costs.

Unbalanced Load Flow Calculations
An important tool in the decision making process for distribution utilities is the use of unbalanced load flow calculations. To date, the capability to perform these calculations has not been included in DMS or OMS, but utilities are demanding it now for a variety of reasons.

Often the distribution system was designed to have an even distribution of load across all three phases, but as the system grew, so did the level of imbalance. This could be because the construction crew did not build what was designed, or because the drawings for the phasing were not maintained accurately, sometimes making new design a matter of guesswork. Phase imbalance can also result from very high growth along a particular feeder.

Attempts to correct phase imbalance are expensive and can sometimes make matters worse unless the source data is accurate. However, the source data must be corrected in order to obtain accurate results from load flow calculations. The data requirements for unbalanced load flow calculations are higher than for balanced load flow calculations, since the size of conductors and, for overhead construction, their positioning on the pole are important inputs to the overall calculation.
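As a simple illustration of why per-phase data matters, the Python sketch below computes per-phase loading and a basic imbalance metric from invented service-point data. If the recorded phasing is wrong, the metric, and any rebalancing plan built on it, is wrong as well.

def phase_imbalance(loads_kw):
    """Return per-phase totals and the worst single-phase deviation from the
    three-phase average, expressed as a percentage of that average."""
    per_phase = {p: sum(loads_kw[p]) for p in ("A", "B", "C")}
    avg = sum(per_phase.values()) / 3
    worst = max(abs(kw - avg) for kw in per_phase.values())
    return per_phase, 100.0 * worst / avg

# Invented service-point loads grouped by the phase they are recorded on.
loads = {"A": [120, 95, 140], "B": [60, 75], "C": [230, 180, 150]}
print(phase_imbalance(loads))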

Unbalanced load flow has been available in distribution planning applications for many years and it is beginning to appear in OMS. The same function, adapted from transmission applications, is also finding its way into DMS.

Short Circuit Analysis and Fault Location/Restoration
Short circuit analysis applications are also being used in both the DMS and OMS. Here again the DMS is evolving, from a purely balanced calculation to an unbalanced model. The OMS, meanwhile, provides a user-friendly way to present short circuit analysis results in order to perform fault location. The idea is to use the fault current reported by an intelligent relay in conjunction with a connectivity model to locate the fault. The fault current and conductor impedances are used as inputs to a short circuit analysis. The large amounts of data produced by this process can be summarized and presented to the user as a number of "candidate" fault locations on the OMS world map.

Both DMS and OMS have combined the short circuit analysis function with the information provided by fault indicator devices to yield a more accurate fault location. The maximum benefit is gained from fault indicator devices when they are telemetered. In this case, fault indicators can provide a signal that shows whether a fault passed through the conductor that they are monitoring. Thus, if the short circuit analysis indicates several possible branches, all of which give the same fault current solution, then the fault indicators can narrow the solution down to a specific branch.
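A highly simplified version of this candidate-and-filter logic is sketched below in Python. The section names, impedance figures and fault indicator assignments are invented, and source impedance is ignored for brevity: the measured fault current implies an electrical distance to the fault, sections at that distance become candidates, and telemetered fault indicators then discard the branches the fault current cannot have passed through.

# Each section: cumulative impedance (ohms) from the source to its start and
# end, plus the fault indicators the fault would have passed to reach it.
SECTIONS = {
    "seg_1": {"z_start": 0.5, "z_end": 1.0, "fis_passed": []},
    "seg_2": {"z_start": 1.0, "z_end": 2.0, "fis_passed": ["FI_1"]},
    "seg_3": {"z_start": 1.0, "z_end": 2.1, "fis_passed": ["FI_2"]},
}

def candidate_sections(v_source, i_fault, fi_tripped):
    """Return sections whose electrical distance matches the measured fault
    current, keeping only those consistent with the tripped fault indicators."""
    z_fault = v_source / i_fault            # impedance implied by relay data
    candidates = [
        name for name, s in SECTIONS.items()
        if s["z_start"] <= z_fault <= s["z_end"]
    ]
    # A candidate is plausible only if every FI on the path to it tripped.
    return [c for c in candidates
            if all(fi in fi_tripped for fi in SECTIONS[c]["fis_passed"])]

# 7.2 kV source, 4 kA measured fault current -> roughly 1.8 ohms to the fault.
print(candidate_sections(7200.0, 4000.0, fi_tripped={"FI_2"}))
# seg_2 and seg_3 both sit at the right electrical distance; the fact that
# FI_2 tripped and FI_1 did not narrows the answer to seg_3.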

Used in conjunction with fault location, the isolation and restoration analysis functions can automatically determine the best way to reconfigure the feeder in order to first isolate the fault and then back feed and restore the largest number of customers. The function will automatically look at every combination of switches that can be operated, and perform a load flow analysis for each combination. The results are presented to the user, who can then request that a switching plan be generated from the selected solution. This function has existed in DMS for some time, and is now available in some OMS systems.
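The brute-force nature of that search can be conveyed with the Python sketch below, in which the switch names and the toy load flow are invented: every open/close combination of the candidate switches is evaluated, combinations that overload the back-feed are discarded, and the remainder are ranked by the number of customers restored.

from itertools import product

def restoration_options(candidate_switches, run_load_flow):
    """Enumerate every open/close combination of the candidate switches, run
    a load flow on each, and rank the feasible ones by customers restored.
    `run_load_flow` is an invented stub for the real solver."""
    options = []
    for states in product(("open", "closed"), repeat=len(candidate_switches)):
        plan = dict(zip(candidate_switches, states))
        result = run_load_flow(plan)
        if result["max_loading_pct"] <= 100:     # discard overloaded back-feeds
            options.append((result["customers_restored"], plan))
    return sorted(options, key=lambda opt: opt[0], reverse=True)

# Toy load flow: closing the tie restores 850 customers; each closed switch
# adds 12% loading on top of an 80% base.
def toy_load_flow(plan):
    closed = sum(1 for state in plan.values() if state == "closed")
    return {"max_loading_pct": 80 + 12 * closed,
            "customers_restored": 850 if plan.get("tie_T4") == "closed" else 0}

print(restoration_options(["tie_T4", "switch_S9"], toy_load_flow)[0])
# -> (850, {'tie_T4': 'closed', 'switch_S9': 'open'})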

Study Mode and Simulation Mode
Both OMS and DMS currently have the capability to create a snapshot of the distribution system for performing what-if analysis. These typically are limited in the number of study cases that each user is allowed to have and can take several minutes to initialize. However, some OMS systems have the ability to run a real-time simulation mode. This allows the user to select a subset of the network model and create a memory copy of it in real time. The user can then perform any what-if studies—creating switching plans, running load flows, etc.—before executing them in the actual system. This allows the user to make more accurate decisions in a shorter time.
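In essence, a simulation mode of this kind deep-copies just the selected portion of the as-operated model into a private study context, as in the Python sketch below (the model contents are invented), so that trial switching and load flow runs cannot disturb the real-time picture.

import copy

def create_study_case(realtime_model, selected_feeders):
    """Deep-copy only the selected feeders out of the as-operated model so a
    what-if study (trial switching, load flows) cannot affect real time."""
    subset = {name: realtime_model[name] for name in selected_feeders}
    return copy.deepcopy(subset)

realtime = {"F12": {"breaker": "closed"}, "F13": {"breaker": "closed"}}
study = create_study_case(realtime, ["F12"])
study["F12"]["breaker"] = "open"         # trial operation in the study copy
print(realtime["F12"]["breaker"])        # still 'closed' in the live model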

Looking Ahead
Both DMS and OMS are changing very rapidly, as utilities strive to make efficient use of their distribution resources. As we have seen in the foregoing examples, both systems are expanding their influence in distribution operations and utilities are relying on them more and more for decision support. An aging population within the distribution utility control room—and the lack of experienced people to replace them—will further increase reliance on such tools. As software design and the underlying technologies advance to meet the needs of distribution operations, it will be interesting to see what the next generation of OMS/DMS will bring. If the convergence currently going on in other fields is any indication, a unified system for the distribution utility may not be far off.