December 22, 2024

Validating Field Performance of AMI Systems

by Gareth Thompson, Senior Project Engineer, Enspiria Solutions, Inc.

Accurately assessing the performance of Advanced Metering Infrastructure (AMI) and Meter Data Management Systems (MDMS), both during a field acceptance test (FAT) and during mass deployment, is a critical need for today’s Smart Grid projects. Both phases require mechanisms for validating that the deployed AMI technology performs as expected. Key reasons for field-testing include risk mitigation, business case validation and planning for business process change.

Field Acceptance Testing
Field Acceptance Testing should play an important role in AMI and MDMS deployment. Utilities should use FAT as a gate for continued deployment. Moreover, failure to resolve issues discovered during FAT should be grounds for halting or terminating continued deployment of an AMI solution.

The FAT phase of a Smart Grid/AMI project consists of deploying a limited set of meters and communication modules (typically 500 – 20,000) across a selected cross section of customer types (e.g., Residential, C&I, Electric, Gas, and Water, as applicable to the utility) and geographical areas representing communication challenges typical of the utility’s service territory.

Including a FAT phase in an AMI deployment project has a number of benefits:

  • Risk Mitigation
    Test the AMI head-end (core functionality and integration components), AMI network and AMI meters to find issues early and allow fixes to be made prior to mass deployment. Deploying AMI meters for a field trial also allows installation issues to be identified and resolved.
  • Contract Enforcement
    Testing verifies that the AMI vendor can fulfill its contractual obligations and service level agreements (SLAs).
  • Regulatory Reporting
    Real data from FAT makes it easier for the utility to demonstrate to regulators that the AMI system is delivering the planned functionality, performance and savings.
  • System Selection
    At the end of FAT, the results should support the utility’s decision to select the trialed AMI technology. FAT results are a key factor in demonstrating whether the technology meets the contractual requirements.
  • Validation of Business Case Benefits and Costs
    Utilities can use the FAT as a proof of concept to validate the benefits and costs projected in the original business case.
  • Business Process Change Management and Planning
    During and after rollout of AMI, numerous business processes within the utility will be affected and changed. Some processes will be enhanced with AMI data, and others will disappear.

During FAT, some tests will inevitably fail. This allows issues to be identified and resolved on a smaller scale before they become large-scale problems. Finding these issues after many AMI meters have been deployed can present huge logistical and financial issues, damage the reputation of those involved and potentially derail the project indefinitely.

Field Performance Metrics
Accurately assessing AMI and MDMS performance requires testing and reporting on three key metrics: Availability, Accuracy, and Events/Alarms.

Availability
Utilities expect an AMI solution to provide a high percentage of data on a daily basis – this is a key decision factor when selecting an AMI solution. However, simply trusting that the solution delivers what is in the contract is not adequate; delivery should be verified and validated. Availability testing measures and reports the availability of the AMI data, ensuring that the data expected from the AMI system – and subsequently from any downstream MDMS and/or customer information system for billing – is delivered according to the SLA levels agreed to by the utility and the AMI vendor.

Types of availability reports include the following:
•    Register read availability
•    Interval read availability
•    Time-of-use (TOU) availability
•    Demand and coincident demand read availability

Each of these reports should verify that the expected data for its data type is delivered for a 24-hour period by a given time of day, such that it can be used appropriately in other systems. Additionally, reports showing meters that repeatedly fail to communicate data, or meter types that have a high rate of communication failure, are useful in identifying communications, hardware and software issues across the AMI system. Differentiation by customer type is important because a small percentage of commercial and industrial meters accounts for a disproportionately large share of utility revenue.

While an AMI deployment as a whole might meet availability requirements, a more granular review of data by customer type may identify less widespread but high-impact issues. AMI solution performance can also vary by commodity type. While electric interval data, for example, may be available at an expected level, gas interval data may lag. Understanding this differentiation allows utilities and AMI solution providers to troubleshoot functionality issues.

External systems such as Outage Management Systems (OMS) and Customer Information Systems (CIS) depend on timely data from the AMI system for power status verification during storms. Automated tests should report on round-trip times from the initial request to the final response. This ensures that the AMI network and technology can support the time-based needs of other systems when OMS operators and Customer Service Representatives are fielding customer issues.
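
As a concrete illustration, the Python sketch below times a single request/response cycle. The request_fn parameter is a hypothetical stand-in for the head-end’s power-status (or on-demand read) API call, which varies by AMI vendor; this is a minimal sketch, not a vendor implementation.

    import time

    def measure_round_trip(meter_id, request_fn, timeout_s=30.0):
        """Time one request from initial call to final response.
        request_fn is a hypothetical stand-in for the AMI head-end's
        power-status (or on-demand read) API call."""
        start = time.monotonic()
        try:
            response = request_fn(meter_id, timeout=timeout_s)
        except TimeoutError:
            return {"meter_id": meter_id, "elapsed_s": timeout_s, "ok": False}
        elapsed = time.monotonic() - start
        return {"meter_id": meter_id, "elapsed_s": elapsed,
                "ok": response is not None}

Run across a representative sample of meters, the collected round-trip times can then be compared against the SLA thresholds agreed with the vendor.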

The same concept can be applied to On-Demand Read calls to the AMI meters to ensure timely reads all the way to the meter and back to the calling system. An example of an interval availability report, produced by Enspiria Solutions’ Metrics Tool, is shown in Figure 1. The report calculates the percentage of electric AMI meters that reported all expected intervals for the previous 24-hour period (midnight-to-midnight): the count of electric meters that correctly reported all expected intervals (i.e., 24 intervals for hourly configured meters; 96 intervals for 15-minute configured meters), divided by the total number of electric meters configured in the AMI network, multiplied by 100.
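
The following Python sketch mirrors the Figure 1 calculation, under the assumption that each meter record carries its configured interval length and the count of intervals actually received for the day; the data structure is illustrative, not the Metrics Tool’s actual schema.

    from dataclasses import dataclass

    # Expected daily interval counts: hourly meters report 24 intervals,
    # 15-minute meters report 96.
    EXPECTED_INTERVALS = {60: 24, 15: 96}

    @dataclass
    class MeterDay:
        meter_id: str
        interval_minutes: int    # configured interval length (60 or 15)
        intervals_received: int  # intervals delivered for the 24-hour period

    def interval_availability(meters):
        """Percentage of meters that reported all expected intervals
        for the previous midnight-to-midnight period."""
        if not meters:
            return 0.0
        complete = sum(
            1 for m in meters
            if m.intervals_received == EXPECTED_INTERVALS[m.interval_minutes])
        return 100.0 * complete / len(meters)

    # Example: one complete hourly meter, one that dropped two intervals.
    sample = [MeterDay("E-001", 60, 24), MeterDay("E-002", 60, 22)]
    print(f"{interval_availability(sample):.1f}%")  # 50.0%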

Accuracy
Available data that is not accurate is useless to a utility. Accuracy reporting measures the accuracy of the AMI usage data against the utility’s existing automated and manual meter reading systems to ensure that the AMI meters are recording data accurately and that the AMI system is accurately storing and passing that data to external systems (such as MDMS and then to billing). This becomes an additional data check on the AMI system to support the availability reports. Even if the AMI system meets the required availability percentage levels, the data must also be accurate to ensure that customers can be correctly billed for their usage.

Accuracy reporting compares time-based AMI interval data with time-stamped manual meter readings (utilities should continue manual reading until AMI data has proved to be accurate). AMI data is considered accurate if the manual reading is greater than the AMI interval reading immediately prior and less than the AMI interval reading immediately following. The report process creates upper and lower bounding values from register and interval reads from the AMI system and checks that manually recorded reads lie between those bounds. Accuracy reporting can also be applied to AMI data passing through the MDMS, verifying that it remains accurate relative to the utility’s manual reads and the original reads from the AMI system.
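
A minimal Python sketch of that bounding check follows, assuming the AMI system provides timestamped cumulative register values sorted by time. Allowing equality at the bounds (to cover zero-usage periods) is an assumption of this sketch, not part of the method described above.

    from bisect import bisect_left

    def manual_read_within_bounds(ami_reads, manual_time, manual_value):
        """Check a timestamped manual register reading against the AMI
        reads immediately before and after it.

        ami_reads: list of (timestamp, cumulative register value),
        sorted by timestamp. Returns True if the manual value falls
        between the bounding AMI values."""
        times = [t for t, _ in ami_reads]
        i = bisect_left(times, manual_time)
        if i == 0 or i == len(ami_reads):
            return False  # no AMI read on one side; cannot validate
        lower = ami_reads[i - 1][1]
        upper = ami_reads[i][1]
        return lower <= manual_value <= upper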

Events/Alarms
This metric assesses meter-level alarms and events to ensure that AMI meters installed in the field are functioning correctly and that real-time events such as power outages can be correlated to real-world events. This assessment is important, as many business processes will be affected by a full deployment of Smart Metering. For example, these alarms assist in the validation of the business plan, which typically includes items such as voltage management, outage management and theft detection.

Automated Testing and Reporting
Testing and reporting of these metrics can be handled via manual processes using the AMI head-end system tools. However, it is difficult to do this on more than a small set of meters. Using the interval availability example from above, it is easy to see how manually checking thousands of meters to ensure they all return the expected number of intervals each day would be very time-consuming.

The use of automated tools for these tests allows the utility to perform the tests daily (or even more often, if required) in a controlled and repeatable fashion across a large set of meters. Data can be collected from the head-end, analyzed, and compiled into performance metrics reports for review by the AMI team. Automated tools can also be used to verify that the integrity of the data flowing from AMI to MDMS is being maintained, and to validate that the data being passed from MDMS to the billing system is intact.
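
As an illustration of such an integrity check, the sketch below compares a day’s reads extracted from the AMI head-end with the corresponding reads in the MDMS; the dictionary-keyed-by-meter-ID layout is an assumption made for brevity.

    def integrity_gaps(ami_reads, mdms_reads, tolerance=0.0):
        """Compare per-meter reads from the AMI head-end extract against
        the MDMS extract for the same day. Both arguments map
        meter_id -> value. Returns meter_id -> discrepancy description."""
        issues = {}
        for meter_id, ami_value in ami_reads.items():
            if meter_id not in mdms_reads:
                issues[meter_id] = "missing from MDMS"
            elif abs(mdms_reads[meter_id] - ami_value) > tolerance:
                issues[meter_id] = (f"mismatch: AMI={ami_value}, "
                                    f"MDMS={mdms_reads[meter_id]}")
        for meter_id in mdms_reads.keys() - ami_reads.keys():
            issues[meter_id] = "in MDMS but not in AMI extract"
        return issues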

Dashboards may be used to report daily results, with more detailed reports distributed automatically via email to the AMI team. The metrics reports can be used by the AMI team and AMI vendor to pinpoint sets and/or types of meters that consistently fail the performance criteria. The analysis allows the combined teams to drive out issues with meter, network and software technologies and their configuration, as well as physical and geographical issues in the AMI deployment area(s).

An example of this analysis is identifying whether specific meter types, or meter types with a certain firmware version, repeatedly fail the tests – which could point to issues with the meter software. Another example is meters in a specific geographical area failing the tests due to poor network coverage, indicating that more AMI network infrastructure should be deployed in that area.
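
A sketch of that kind of aggregation in Python, assuming each daily test result records the meter type, firmware version and pass/fail outcome (the field names are illustrative):

    from collections import Counter

    def failure_rates(results):
        """Failure rate per (meter_type, firmware_version) combination.
        Each result is a dict like:
            {"meter_type": "X100", "firmware": "2.1.4", "passed": False}
        """
        totals, failures = Counter(), Counter()
        for r in results:
            key = (r["meter_type"], r["firmware"])
            totals[key] += 1
            if not r["passed"]:
                failures[key] += 1
        return {key: 100.0 * failures[key] / totals[key] for key in totals}

    # Combinations with unusually high failure rates suggest meter
    # software issues; the same aggregation keyed by geographic area
    # highlights network coverage gaps instead.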

An automated solution that evaluates AMI data should provide audit-quality reports of system performance and contractual metrics, in addition to being quick and easy to use. Utilities benefit from a solution that provides both executive-level dashboards for monitoring progress and detailed report information in a mainstream format that can be used for analysis and troubleshooting. There are several options to consider when looking for automated metrics tools:

  • In House Tools
    These may require ground-up construction to support reporting around the AMI and MDMS technologies and databases.
  • AMI Systems
    The AMI head-end itself may provide some tools and reporting functionality; however, the utility often requires an independent assessment of the AMI technology.
  • MDMS
    The MDMS provides reporting functionality, but MDMS implementation often runs in parallel with the initial AMI FAT, so its reporting may not be available in time.
  • Third-party Tools
    Analysis and reporting tools can enable the utility team to produce objective reports on the AMI data while also ensuring that the tool can be implemented in a rapid and straightforward manner without requiring excessive customization or configuration.

Test Playbook
A test ‘playbook’ should be developed in the initial phases of the FAT, prior to actual testing. The playbook lists all of the tests – automated and manual, often broken down by meter type – to take place during FAT. The playbook needs to be based on contractual/SLA requirements, for example, ensuring that 99.5% of interval reads are delivered to the head-end system within eight hours of the end of the day. The playbook needs to be developed jointly with the AMI vendor, and the utility and AMI vendor should both agree to be bound by the results.
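
Using the 99.5% example, a playbook test of this kind reduces to a simple threshold check. The sketch below assumes each meter’s record carries the time at which its last interval read arrived at the head-end – an illustrative assumption about the available data, not a statement of any particular head-end’s schema.

    from datetime import timedelta

    SLA_PERCENT = 99.5
    DELIVERY_WINDOW = timedelta(hours=8)  # after the end of the read day

    def interval_delivery_sla_met(day_end, delivery_times):
        """True if the required percentage of meters had all interval
        reads delivered to the head-end within the contractual window.

        day_end: end of the 24-hour read period (midnight).
        delivery_times: per-meter timestamps of the last interval's
        arrival at the head-end."""
        if not delivery_times:
            return False
        deadline = day_end + DELIVERY_WINDOW
        on_time = sum(1 for t in delivery_times if t <= deadline)
        return 100.0 * on_time / len(delivery_times) >= SLA_PERCENT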

The FAT data can then be utilized to facilitate system acceptance – the utility shouldn’t formally accept an AMI deployment without validating key criteria. The end of FAT can be used as a contract gate for continuing with mass AMI deployment.

Meter Shop and Trailer Testing
Field acceptance testing allows for repeatable availability and accuracy testing across the entire meter population; however, other meter and AMI system functions should be tested in a controlled environment on a more limited basis. These tests can be performed in the meter shop and in a trailer located in the field.

A ‘trailer’ test involves placing a trailer containing AMI meters in a field location with AMI network coverage to simulate a set of AMI meters deployed in the field. The trailer can be placed in a location typical of the utility’s service territory or even moved around to atypical locations. The meter shop and trailer tests are constructed around a bank or bench of meters that are subject to manually triggered tests to confirm that meter functionality is working as expected.

The testing playbook is used by the AMI team to execute the tests. During the test process, the meters are subjected to load/flow, events and other conditions while an AMI engineer and the AMI team monitor network traffic, the AMI system and the head-end. The AMI team checks that the resulting data in the AMI head-end is as expected for each individual test. If a test does not function as expected, monitoring and recording the meter and network data aids in the triage of any issues.

An example of this type of test would be to reverse the meter in the socket to ensure that a reverse rotation event is sent back to the AMI head-end. If the event is received at the AMI head-end, the meter passes the test; if not, the test fails and analysis is performed to determine the cause of the failure. Based on the data output, the testing playbook can be used to record the results of the tests. It is important to involve the utility employees, AMI engineers and any relevant integration engineers in this phase to validate the required functionality and output of the AMI system.

Performance Management for Full Deployment
While automated testing is important in the initial testing and FAT phases of an AMI implementation, it is also useful in the full AMI meter deployment phase. Testing should continue throughout full deployment to verify system health and performance. Additional analysis and reporting tools should be added to expand on the information provided by the basic FAT reporting results to support contract enforcement and regulatory reporting.

During full deployment, testing can be performed on the full population of deployed meters to continue monitoring performance against the SLAs. Typically this phase will leverage other Business Intelligence (BI) tools to allow more complex analyses to be performed on the AMI dataset and with other datasets. (See the BI & Spatial Analysis section, below.)

As part of deployment, testing can also be performed on a subset of the AMI meters (i.e., a different subset than that used in the FAT phase) to ensure that the FAT metrics achieved earlier are consistently repeatable. This confirms that any tuning performed for the FAT meter population and AMI infrastructure is repeated for newly deployed areas, rather than applied solely to pass SLA levels for FAT.

Using the existing testing tools and reports in this phase increases the return on investment in the tools built for the FAT phase. The FAT reporting tools can provide a blueprint for the full deployment phase and for normal operational reporting once full deployment is achieved.

BI & Spatial Analysis
Leveraging spatial data sources with AMI data in the various phases of Smart Grid planning and deployment can provide a foundation for powerful data analyses, including for FAT planning, deployment planning, and deployment and operational analysis.

  • FAT Planning
    Potential areas for FAT deployment of AMI meters can be identified using spatial data sources to ‘score’ zip codes, or other areas, based on socio-economic and physical attributes such as meter density, topography, high customer turnover, theft occurrences and rate class. Different attributes may be assigned different weights in the scoring, based on their level of importance to the utility (a minimal scoring sketch appears after this list). For example, zip codes with high scores may be more likely to be selected for FAT deployment, as they are more likely to provide a diverse set of conditions to test and validate the AMI technology.
  • Deployment Planning
    This is similar to the FAT planning but the objective in this case is full deployment planning. Zip codes or other areas can be ranked to plan deployment sequences for AMI. Additional scoring factors may be used during the deployment planning; some examples are high cost to read meters and new customer locations. Certain factors may be assigned heavier weighting since it may be more important to deploy AMI meters to specific areas to get a faster return on AMI investment.
  • Deployment and Operational Analysis
    Tabular results from the automated AMI deployment tests can be combined with spatial data such as meter reading routes, operational divisions and AMI network assets (i.e., towers, collectors and routers) to show availability of meter reads at a geographical level. The spatial data can be thematically displayed to show availability percentages using map-based tools such as Google Earth or Microsoft Virtual Earth.
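
As a sketch of the weighted scoring described under FAT Planning above, the Python below ranks areas by a weighted sum of normalized attributes. The attribute names, weights and values are purely illustrative; a utility would substitute its own factors and priorities.

    # Illustrative attributes and weights; a utility would choose its
    # own based on the importance of each factor.
    WEIGHTS = {
        "meter_density": 0.30,
        "terrain_difficulty": 0.20,
        "customer_turnover": 0.15,
        "theft_occurrences": 0.20,
        "rate_class_diversity": 0.15,
    }

    def score_area(attributes):
        """Weighted score for one zip code or other area. Attribute
        values are assumed pre-normalized to a 0-1 scale."""
        return sum(WEIGHTS[name] * attributes.get(name, 0.0)
                   for name in WEIGHTS)

    # Hypothetical areas, ranked; higher scores imply more diverse FAT
    # test conditions (or, for deployment planning, a faster return).
    areas = {
        "Area A": {"meter_density": 0.9, "terrain_difficulty": 0.2,
                   "customer_turnover": 0.7, "theft_occurrences": 0.4,
                   "rate_class_diversity": 0.8},
        "Area B": {"meter_density": 0.3, "terrain_difficulty": 0.9,
                   "customer_turnover": 0.2, "theft_occurrences": 0.1,
                   "rate_class_diversity": 0.4},
    }
    ranked = sorted(areas, key=lambda a: score_area(areas[a]), reverse=True)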

Figure 2 shows an example dashboard from Enspiria Solutions’ Metrics Tool, charting meter power outages over a 7-day period. The map view shows the outages plotted against meter reading polygons. The meter reading routes are thematically colored, based on the counts of meter outages within each area. The same routes can be displayed thematically for both availability and accuracy reports.

Assessing the performance of AMI and MDMS technology is critical both during field acceptance testing and during mass deployment. These needs can be met through automated testing and reporting tools that can also leverage spatial data sources to provide powerful data analysis, including for FAT planning, deployment planning, and deployment and operational analysis. Effective performance testing and reporting helps utilities to maximize their significant AMI/MDMS investments and reap tangible benefits over the life of the deployment.

Conclusion
Before a utility embarks on the deployment of new networks and the associated thousands of meters per day, it is best to follow a carefully developed process to ensure a successful, risk-mitigated project. A Field Acceptance Test (FAT) should be factored into the timeline and budget of the AMI project lifecycle, with careful consideration given to the processes and goals of the FAT. Notably, the utility should be prepared to delay or stop the project if these goals are not met.

About the Author
Gareth Thompson is a Senior Project Engineer at Enspiria Solutions, with 15 years of experience in utility consulting and software engineering/integration. He specializes in assisting utilities with the field-testing of AMI/MDMS technology. Mr. Thompson helps utilities to define and execute detailed plans for AMI benefits validation and risk mitigation, and deploys metering metrics tools and web-based dashboards.