When a power system disturbance occurs, several questions come to mind: What caused the event to happen? When did the event happen? Where did the event happen? How big was the impact? In the past, these questions were difficult to answer because of limited monitoring resources, analytical resources and time.

For many years, the analysis of power system disturbances has been a critical part of understanding phenomena that can impact the power system and how the power system responds. However, this analysis traditionally has occurred after the fact and involved significant effort in organizing and correlating limited monitoring and equipment performance information.

More recently, the ubiquitous availability of monitoring information from many different sources, as well as advancements in communication infrastructure, has made it possible to analyze a much larger percentage of system disturbances and has even opened the door to real-time analysis.

Traditional Data Handling

While intelligent substation devices have become commonplace across the utility, there has never been a concentrated effort to analyze all of the data from these disparate devices — digital fault recorders (DFRs), power-quality monitors, revenue meters and relays — in a common platform. With no common analytical tool, it is difficult to specify a standard way to visualize the different data streams. And with no standard way to visualize data, there is no easy way to aggregate data from hundreds of instruments in such a way that information is easily communicated.

Traditionally, the data from these intelligent devices is purposely separated. Names like “real time,” “operational data” and “nonoperational data” are assigned to further isolate the data into organizational silos. Even the metrics generated from these data sources are typically categorized into groups defining either system reliability or end-use power quality. However, events on the system transcend these familiar boundaries. Consequently, a new paradigm must be employed in the monitoring system of the future.

Handling data in the traditional manner has many natural consequences. Because the data exists outside of a common platform, each data source requires its own analysis tool, and analysis is relegated to intensive manual effort. Where automation is possible at all, it typically takes the form of point-to-point, system-to-system integration. This form of integration is expensive, insecure and does not scale.

Scalability is perhaps the most important factor in the monitoring system of the future. For instance, as smart meters are installed at the customer interface to the utility system, utilities must prepare for what some are calling a “data tsunami.”

A Data Future

To illustrate the matter of scale, consider a typical substation; there may be hundreds of devices to completely monitor the health of the assets within the station. Next, consider the thousands of customers fed from a single substation. Finally, consider the millions of potential smart devices in the ordinary home: thermostats, breaker panels, outlets and appliances. In the monitoring system of the future, these devices will all exchange information through high-speed, two-way communication infrastructures. The exchange of this data will occur in near real time, thus blurring the lines between operational data and nonoperational data.

At this point, one might be thinking this is just more of that smart grid hyperbole. To some extent, that is correct. At times, it seems like the smart grid is represented as an item that can be purchased at a store. Often, the implication is that if a utility chooses not to buy the item, then it somehow has a dumb grid. However, smart grid is about employing modern computing design philosophy and technology to enable more efficient, safe and reliable operations.

Managing and taking advantage of the tremendous amount of information available about conditions on the power system is definitely a critical aspect of the smart grid, and it is a real opportunity.

An Integrated System

The monitoring system of the future relies on the notion that the data collection and data management infrastructure operates as a cohesive system. The idea is that the whole is greater than the sum of the parts. The data must move from many individual tools to an integration platform. The platform enables the data sources to automatically work together, providing actionable information, situational awareness or expert system analysis. In other words, through the platform the data becomes information.

So, how do all these pieces fit together? First of all, the device or the collection software for the device must support a standard data format in order to facilitate sharing information between applications. In the case of power-quality monitors, there is the IEEE 1159.3 (PQDIF) file format. Similarly, most DFRs support the IEEE C37.111 (COMTRADE) file format. These published formats allow manufacturers to interchange trend and disturbance information in a standard way. Though necessary, simply being able to exchange data does not meet all of the requirements of an integrated system.
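
To make this concrete, the sketch below reads the header of a COMTRADE configuration (.cfg) file, the companion to the waveform data a DFR records. The field layout follows the published IEEE C37.111 format, but the parser is deliberately minimal and the file name is hypothetical.

```python
# Minimal sketch: read the header of a COMTRADE configuration (.cfg) file
# (IEEE C37.111, 1999 revision). Field positions follow the published format;
# the file name "event_1234.cfg" is purely illustrative.

def read_comtrade_header(cfg_path):
    with open(cfg_path, "r") as f:
        lines = [line.strip() for line in f]

    # First line: station name, recording device id, optional revision year
    station, device_id, rev_year = (lines[0].split(",") + ["1991"])[:3]

    # Second line: total channel count, analog count ("...A"), digital count ("...D")
    _, analog_field, digital_field = lines[1].split(",")
    n_analog = int(analog_field.rstrip("Aa"))
    n_digital = int(digital_field.rstrip("Dd"))

    # Analog channel definitions immediately follow the channel-count line
    analog_channels = []
    for raw in lines[2:2 + n_analog]:
        fields = raw.split(",")
        analog_channels.append({
            "index": int(fields[0]),
            "name": fields[1],
            "phase": fields[2],
            "units": fields[4],   # e.g. "V" or "A"
        })

    # The line frequency appears after the analog and digital channel definitions
    line_frequency = float(lines[2 + n_analog + n_digital])

    return {
        "station": station,
        "device_id": device_id,
        "revision": rev_year,
        "analog_channels": analog_channels,
        "digital_channel_count": n_digital,
        "line_frequency_hz": line_frequency,
    }


if __name__ == "__main__":
    header = read_comtrade_header("event_1234.cfg")
    print(header["station"], header["line_frequency_hz"])
```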

Once all of the data is collected, standards like the IEC 61970 common information model (CIM) enable the data to be described in a consistent fashion. For example, each substation device uses a different method to describe voltage. However, CIM provides the vocabulary to ensure voltage is described completely and consistently for each device on the power system. CIM not only describes devices on the power system, but it also describes other business systems like geographical information systems (GIS).
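
As an illustration of the idea, the sketch below describes a substation bus voltage measurement using classes loosely modeled on the CIM Analog and AnalogValue classes. The attribute names are simplified for readability and are assumptions for this example, not a complete IEC 61970 profile.

```python
# A simplified sketch of describing a voltage measurement in CIM terms.
# Class and attribute names loosely mirror the CIM Analog / AnalogValue
# classes; the identifiers and values are illustrative only.

from dataclasses import dataclass

@dataclass
class PowerSystemResource:
    mrid: str                      # CIM master resource identifier
    name: str

@dataclass
class Analog:
    mrid: str
    name: str
    measurement_type: str          # e.g. "LineToLineVoltage"
    unit_symbol: str               # e.g. "V"
    unit_multiplier: str           # e.g. "k" for kilovolts
    power_system_resource: PowerSystemResource

@dataclass
class AnalogValue:
    value: float
    analog: Analog                 # the measurement this value belongs to


# Two devices that describe voltage differently can be mapped onto the same
# vocabulary, so downstream applications see one consistent description.
bus = PowerSystemResource(mrid="_bus-161-01", name="Substation A 161-kV Bus")
voltage = Analog(
    mrid="_meas-0007",
    name="Bus voltage, phase A-B",
    measurement_type="LineToLineVoltage",
    unit_symbol="V",
    unit_multiplier="k",
    power_system_resource=bus,
)
sample = AnalogValue(value=161.2, analog=voltage)
```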

Having all of the business systems mapped onto a common enterprise bus should enable analysis that crosscuts many functional areas. Consequently, applications ranging from fault location to relay performance to wind farm interface issues could be developed. These applications should result in improvements to the reliability and security of the power system. They also should benefit everyone by providing actionable information with respect to asset life cycle, operations, work force, planning and even consumers.

Automatic and Actionable Information

In each piece of the hypothetical scenario described in “How the Monitoring System of the Future Works,” the modules use components from across the organization to provide situational awareness to decision makers. The information is actionable, meaning the people receiving tasks get the specific information they need to do their jobs. Finally, the system is automatic; no one has to perform a manual calculation to get a fault location or an equipment condition assessment. In addition, no one has to retrieve data manually from field devices.

Fortunately, many of the components for such a system are available today. Data from power-quality monitors and other disturbance recorders can be aggregated into a system event database using tools like the Electric Power Research Institute's PQView. Other tools like ESRI's ArcGIS Server can provide automated GIS analysis using service-oriented architecture. Still other tools exist to integrate work and asset management data in the enterprise.

There are numerous tangible benefits to this sort of monitoring system: improved system performance, reduced cost, optimized work force, improved system health and improved customer satisfaction, to name a few.

There are also some less tangible benefits such as maximizing the value of the data in the system. These benefits are only realized through data integration. Data integration is best achieved by effectively using standards like CIM so vendors can build to a common platform. Look for more to come as these standards are further refined — recent National Institute of Standards and Technology initiatives are helping to move this forward — and interoperability tests provide the means of verifying the compatibility of different components of the overall system.

Companies mentioned in this article:

Electric Power Research Institute www.epri.com

ESRI www.esri.com

Tennessee Valley Authority www.tva.gov

Theo Laughner (tllaughner@tva.gov) is a power-quality specialist for the Transmission, Operations and Maintenance department with the Tennessee Valley Authority in Chattanooga, Tennessee, U.S. He is responsible for developing and administering the power-quality monitoring systems installed throughout the TVA power system.

Bruce E. Rogers (berogers@tva.gov) is a program manager for the Environmental Policy, Science and Technology Group with the Tennessee Valley Authority in Chattanooga, Tennessee, U.S. He is responsible for the research, development and deployment of innovative technologies that improve the operating efficiency and reliability of TVA's power delivery system.

Fred Elmendorf (flelmend@hotmail.com) retired from Tennessee Valley Authority in November 2009 after 30 years. He served as power-quality manager for Power System Operations, responsible for all long-term power-quality monitoring projects within TVA, lightning data systems and the integration of other data sources. He has been an active PQ advisor for EPRI research projects, is a member of IEEE and has a BS degree in computer science from the University of Tennessee at Chattanooga.

Mark McGranaghan (MMcGranaghan@epri.com) is a director in the EPRI Power Delivery and Utilization sector. His research areas include overhead and underground distribution, advanced distribution automation, intelligrid and power quality. Research priorities include developing the technologies, application guidelines, interoperability approaches and standards for implementing the smart grid infrastructure that will be the basis of automation, higher efficiency, improved reliability, and integration of distributed resources and demand response.

How the Monitoring System of the Future Works

A storm comes through the service territory. A breaker operation locks out a transmission line to a distribution substation. Later, the backup transmission line also locks out. The story ends in one of two ways: either numerous customers are in the dark or only the electric utility knows the disturbance ever occurred.

In pursuing the optimistic second ending, the sequence of events looks something like this: The breaker publishes a message to the bus indicating it has opened. The digital fault recorder (DFR) logs a disturbance record and also publishes a message to the bus. Numerous power-quality monitors log the voltage sag along with associated current and voltage waveforms. A weather database publishes recent lightning activity to the bus.

An advanced fault-location algorithm that subscribes to breaker messages, the system model, the DFR disturbance records and the lightning database then calculates the event location. Finding no lightning near the line at the time of the event, the algorithm publishes its results to the bus.
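
The publish/subscribe pattern behind this scenario can be sketched in a few lines. The topic names, message fields and fault-location placeholder below are purely illustrative assumptions; a production system would use an enterprise service bus rather than an in-memory Python object.

```python
# A minimal in-memory sketch of the publish/subscribe pattern described in
# this scenario. Topic names and message contents are hypothetical.

from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)


class FaultLocationModule:
    """Subscribes to breaker and DFR messages; publishes an estimated location."""

    def __init__(self, bus):
        self.bus = bus
        self.last_breaker_event = None
        bus.subscribe("breaker/status", self.on_breaker)
        bus.subscribe("dfr/disturbance", self.on_disturbance)

    def on_breaker(self, msg):
        self.last_breaker_event = msg

    def on_disturbance(self, msg):
        # Placeholder for the actual fault-location algorithm, which would
        # also draw on the system model and lightning data.
        estimate = {"line": msg["line"], "distance_miles": 12.4,
                    "breaker": self.last_breaker_event}
        self.bus.publish("analysis/fault_location", estimate)


bus = MessageBus()
FaultLocationModule(bus)
bus.subscribe("analysis/fault_location", print)   # e.g. the operator interface

bus.publish("breaker/status", {"breaker": "CB-1", "state": "open"})
bus.publish("dfr/disturbance", {"line": "Line 7", "record": "rec-0042"})
```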

A system operator interface module subscribes to the fault-location algorithm output along with information about crew locations and the work management system. The information is presented in a coordinated manner with the geographic information system (GIS) so it can be used in conjunction with information from outage management systems (primarily for distribution-related events, though it also could be important for transmission events). The operator uses this information to identify the crew closest to the failed line that is not already assigned a work order. The module creates a work order and sends a text message to the crew foreman.

Similarly, a breaker health module that subscribes to the breaker messages, the asset management system and the work management system uses an expert system to determine that the breaker is in need of maintenance. The analysis is based on the number of operations, timing of pole closing and opening operations, fault current characteristics and other factors. Automatically, a work order is sent to the breaker maintenance personnel requesting preventive maintenance.
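
The sort of rule-based check such a module might apply can be sketched as follows. The thresholds and field names are illustrative assumptions only, not values drawn from the article or any manufacturer.

```python
# A hedged sketch of rule-based breaker condition checks like those a breaker
# health module might apply. All thresholds and field names are hypothetical.

def assess_breaker(record):
    """Return a list of maintenance flags for one breaker operation record."""
    flags = []
    if record["operations_since_maintenance"] > 2000:
        flags.append("operation count exceeds maintenance interval")
    if record["pole_spread_ms"] > 5.0:            # poles not operating together
        flags.append("pole timing out of tolerance")
    if record["interrupted_current_ka"] > record["rated_interrupting_ka"]:
        flags.append("interrupted current above rating")
    return flags


work_needed = assess_breaker({
    "operations_since_maintenance": 2150,
    "pole_spread_ms": 6.2,
    "interrupted_current_ka": 18.0,
    "rated_interrupting_ka": 40.0,
})
if work_needed:
    print("Create preventive maintenance work order:", work_needed)
```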

Meanwhile, a customer alert module that subscribes to the power-quality information, the customer information system, the work management system and the GIS calculates the area of impact by aggregating the voltage information from the power-quality monitors. Next, the module determines the customers impacted by the event using the area of impact and the customer information data. Finally, the module notifies the customer service manager for the impacted customers with the estimated time to repair.