The prime objective for protection engineers is to achieve system reliability by maintaining the physical integrity of installed equipment. In this respect, the desired relay settings allow uninterrupted service during episodes of remote faults but trip breakers for primary faults.
Each relay is capable of providing backup protection for remote faults when primary breakers fail to trip, allowing fault currents to feed from a remote station into the primary fault area. After the fact, the protection engineer reconstructs the system parameters, using symmetrical components and fault-current magnitudes traced by graphical recorders, to determine the location of the fault and to assess the adequacy of the relay settings. It is a laborious and time-consuming job.
With the advent of microprocessor relays, sophisticated electronics built into the relays provide immediate data on potential- and current-transformer (PT/CT) fault magnitudes, fault-location and performance measurements, dc-system characteristics and circuit-breaker status. In addition, communications protocols transmit real-time data to operating and maintenance engineers for immediate rectification of the problem.
The vast amount of power-system data available requires new techniques for extracting relevant information for study. The data are archived in event files for operational analysis.
Pacific Gas & Electric (PG&E, San Francisco, California, U.S.) established event files for transmission and distribution circuits ranging from 12 kV through 500 kV. These files contain data for more than 17,000 events collected over the past 13 years.
The process of data mining (extracting the pertinent information for system protection from the files) provides data for assessing relay settings and applies to any text-based data file, including those in COMTRADE format. Custom-developed software sorts the files on the computer, while a spreadsheet program such as Microsoft Excel performs the analysis. This approach allows an engineer to change the algorithms to fit his or her requirements without asking the software vendor to make changes.
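As a minimal sketch of this kind of text-based mining, the fragment below pulls a few fields out of a relay event report with regular expressions. The field labels (`CTR`, `PTR`, `FAULT TYPE`) and the report layout are hypothetical; actual event reports and COMTRADE configuration files vary by relay vendor.

```python
import re

def parse_event_file(text):
    """Extract key fields from a text-based relay event report.

    The labels and layout here are hypothetical; real reports
    differ from vendor to vendor.
    """
    patterns = {
        "ct_ratio": r"CTR\s*=\s*(\d+)",
        "pt_ratio": r"PTR\s*=\s*(\d+)",
        "fault_type": r"FAULT TYPE\s*:\s*(\S+)",
    }
    fields = {}
    for key, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            fields[key] = match.group(1)
    return fields

sample = "CTR = 240\nPTR = 4500\nFAULT TYPE : AG"
print(parse_event_file(sample))
# {'ct_ratio': '240', 'pt_ratio': '4500', 'fault_type': 'AG'}
```

Because the parsing rules live in an ordinary table of patterns, an engineer can adjust them to a new relay model without vendor involvement, which is the point of keeping the analysis in open, modifiable tools.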
Initial relay settings follow established guidelines. Now by applying the data mining results, the engineer can reaffirm or modify the initial relay settings.
This analysis process is useful when applied to historical data as well as for post-fault analysis immediately after an event. Typically, stored data include:
PT and CT ratios
Relay threshold settings
Symmetrical components (positive, negative and zero sequence impedances)
Fault type (for example, line-to-ground, line-to-line or three-phase)
Apparent fault resistance.
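The stored quantities above can be collected into one record per event. The sketch below uses a Python dataclass with illustrative field names; none of the names or values come from a specific relay or from the PG&E files.

```python
from dataclasses import dataclass

@dataclass
class EventRecord:
    """One mined relay event; field names are illustrative."""
    pt_ratio: float              # PT ratio
    ct_ratio: float              # CT ratio
    pickup_a: float              # relay threshold setting, secondary amps
    z1: complex                  # positive-sequence impedance, ohms
    z2: complex                  # negative-sequence impedance, ohms
    z0: complex                  # zero-sequence impedance, ohms
    fault_type: str              # e.g. "AG", "BC", "3P"
    fault_resistance_ohm: float  # apparent fault resistance

    def primary_amps(self, secondary_a):
        """Scale a secondary-side current measurement by the CT ratio."""
        return secondary_a * self.ct_ratio

rec = EventRecord(pt_ratio=4500.0, ct_ratio=240.0, pickup_a=0.5,
                  z1=0.8 + 6.2j, z2=0.8 + 6.2j, z0=2.4 + 18.6j,
                  fault_type="AG", fault_resistance_ohm=12.0)
print(rec.primary_amps(2.0))  # 480.0
```

A uniform record like this is what makes the later statistical passes (polarizing-method counts, pickup-margin histograms) straightforward to automate.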
Analyzing Historical Data
The information available from the archived data provides an opportunity to address questions involving the preferred polarizing method, breaker operating times for tracking breaker performance, and statistical analysis of ground faults that fell within the ground instantaneous settings. Unless the method of data retrieval and analysis changes, the increasing volume of data collected places an almost insurmountable burden on the protection engineer. Some kind of automated method is required, involving either a polling technique or an event-notification process.
Of the two, polling is the simpler method, as it is performed on a time basis. This method is most effective for a small system or for a large system broken into many polling subsystems. The interval between the time of the event and the data availability depends on the polling interval, which must be at least as long as the longest time it would take to poll every device and download the maximum number of events from each. Thus, the polling interval could be longer than is acceptable for providing timely data.
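A polling scheme along these lines can be sketched as a timed loop. Here `download_events` is a placeholder for the vendor-specific retrieval call, and the timing logic shows why the interval must cover the worst-case pass over all devices.

```python
import time

def run_polling(devices, download_events, interval_s, cycles):
    """Timed polling loop; `download_events` is a placeholder for the
    vendor-specific call that queries one device for new event files."""
    all_events = []
    for _ in range(cycles):
        start = time.monotonic()
        for device in devices:
            all_events.extend(download_events(device))
        # The interval must be at least the worst-case pass: polling
        # every device and downloading its maximum number of events.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, interval_s - elapsed))
    return all_events

# Demo with a stubbed download call.
events = run_polling(["relay-1", "relay-2"],
                     lambda d: [f"{d}-event"],
                     interval_s=0.0, cycles=2)
print(events)
# ['relay-1-event', 'relay-2-event', 'relay-1-event', 'relay-2-event']
```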
The event-notification process provides for the fastest time between an event and its data download. Most relays generate an automatic message or logic bit when an event is triggered; therefore, the downloading process occurs immediately following the event. Implementing this system is more complex than it sounds, because the system must continue to monitor or track system operations even during the downloading of events. It must identify multiple events triggered between monitoring intervals and handle a large volume of trigger notifications.
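One common way to implement such an event-notification pipeline is a queue feeding a download worker, so triggers that arrive while a download is in progress are buffered rather than lost. The sketch below is illustrative only; `download` stands in for the vendor-specific retrieval call.

```python
import queue
import threading

def run_notification_listener(download):
    """Consume relay trigger notifications and download each event.

    The queue buffers triggers that arrive back-to-back, so multiple
    events between monitoring intervals are not lost.
    """
    notifications = queue.Queue()
    downloaded = []

    def worker():
        while True:
            relay_id = notifications.get()
            if relay_id is None:      # sentinel: stop listening
                break
            downloaded.append(download(relay_id))

    thread = threading.Thread(target=worker)
    thread.start()
    return notifications, downloaded, thread

# Example: three triggers arrive in quick succession.
q, events, t = run_notification_listener(lambda rid: f"event-from-{rid}")
for rid in ("relay-12", "relay-07", "relay-12"):
    q.put(rid)
q.put(None)
t.join()
print(events)
# ['event-from-relay-12', 'event-from-relay-07', 'event-from-relay-12']
```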
In addition to polling and event-notification, high-speed communications connections to the relays provide the infrastructure to efficiently retrieve large amounts of data. Although not required for data retrieval, these high-speed connections can improve the process.
After downloading the events, analysis focuses on predetermined goals, such as finding the fault contribution at various nodes in the system and determining whether the relays operated correctly for the given conditions. No matter how the utility accomplishes the downloading, it can perform the analysis manually using a calculator, or it can use computer-aided techniques with programs such as MathCAD, Microsoft Excel, or software developed by the manufacturer or utility.
Ultimately, a user desires a fully automated system. Because of the large volume of data, the PG&E analysis worked toward that goal in automated segments. On a small scale, the computer-aided method starts with a single event: the event is downloaded, manually “pasted” into an Excel template and the results examined. Each template is built before the analysis and designed for a particular type of relay event. After the templates are developed, the automation process is set up; it is simply a program that pastes the event data into the template and consolidates the event files, using Excel links to place them into a common file.
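The consolidation step can be sketched outside Excel as well. The fragment below merges per-event field dictionaries into one common table, playing the role of the template-and-links step; the column names are illustrative.

```python
import csv
import io

def consolidate(events, columns):
    """Merge per-event field dictionaries into one common CSV table --
    the role played by the Excel templates and links. Missing fields
    are left blank so every row has the same shape."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=columns)
    writer.writeheader()
    for event in events:
        writer.writerow({c: event.get(c, "") for c in columns})
    return buffer.getvalue()

table = consolidate([{"fault_type": "AG", "pickup": 0.5},
                     {"fault_type": "BC"}],
                    ["fault_type", "pickup"])
print(table)
```

Writing each relay type's parser to emit the same dictionary keys is the programmatic equivalent of designing one template per relay event type.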
This first attempt at mining relevant data from historical microprocessor-based relay event files reaffirmed some of PG&E's practices and ideas. It also revealed that, even with the overwhelming amount of data now available, the process of storing, mining and analyzing these data improves protection quality. In most cases, negative-sequence voltage polarizing was the preferred solution for ground faults. The interrupting times, and the difference between the seal-in operating times and the current dropout times, were unexpected findings. The existing policy on minimum ground settings was deemed adequate, even though almost 7% of the ground faults occurred between 1 PU and 2 PU of the ground pickup; this is an area of concern requiring future study.
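The ground-pickup statistic quoted above is straightforward to recompute from mined records. The sketch below counts faults whose current falls between 1 PU and 2 PU of the ground pickup; the sample currents and pickup value are made up for illustration.

```python
def fraction_near_pickup(fault_currents_a, pickup_a):
    """Fraction of ground-fault currents falling between 1 PU and 2 PU
    of the ground pickup setting (the statistic quoted as almost 7%)."""
    near = [i for i in fault_currents_a if pickup_a <= i < 2 * pickup_a]
    return len(near) / len(fault_currents_a)

# Illustrative sample: 400-A primary pickup, five mined fault currents.
print(fraction_near_pickup([300, 450, 900, 1200, 520], 400))  # 0.4
```

Run over thousands of archived events instead of five, the same one-liner yields the margin statistic used to judge whether the ground-minimum setting policy is adequate.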
An important discovery with regard to older-style relays was that fault identification and location were often incorrect, especially when based on the trip-initiation event for a time-delay trip. Therefore, a problem still exists in obtaining fault locations and fault resistance for high-impedance ground faults.
As more relay data are available, new methods of automating the download and analysis processes will be needed to mine only the relevant data. The issue of discriminating among the data available becomes even more important as high-speed communication schemes proliferate and microprocessor-based relay installations increase. Managing the data will become even more burdensome, necessitating a well-planned system that automates data retrieval and analysis for improving protection quality and for improving maintenance practices. Ultimately, service reliability will improve for PG&E customers.
In the course of the study for efficient mining of data, several topics are worth further consideration. These include using statistical data for automated checking programs to flag settings that are outside normal ranges; analyzing archived digital fault recorder data from 500-kV faults using these same statistical methods; tracking individual breaker interrupting times to signal required maintenance; and experimenting with high-speed connection to relays to address security and data-management policies for data available by an Internet connection.
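The first of these ideas, statistically flagging settings outside normal ranges, can be sketched as a simple z-score screen. The relay names, setting values and the three-standard-deviation threshold below are all illustrative.

```python
import statistics

def flag_outliers(settings, k=3.0):
    """Flag settings more than k population standard deviations from
    the mean -- a simple version of the automated checking suggested
    above. Names, values and threshold are illustrative."""
    values = list(settings.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [name for name, v in settings.items()
            if stdev and abs(v - mean) > k * stdev]

# Ten typical ground-pickup settings plus one suspicious entry.
pickups = {f"relay{i}": 5.0 for i in range(10)}
pickups["relay10"] = 50.0
print(flag_outliers(pickups))  # ['relay10']
```

A production version would group settings by line class and voltage level before screening, since a 12-kV pickup that is normal would be an outlier among 500-kV settings.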
The author would like to acknowledge the assistance of Lawrence C. Gross Jr. in the preparation of this article. Prior to his founding of Relay Application Innovation Inc. (Pullman, Washington, U.S.), Gross worked for PG&E as a transmission system protection engineer and then as an application engineer for Schweitzer Engineering Laboratories Inc.
Scott L. Hayes joined PG&E in 1986 and has held the positions of protection engineer, distribution engineer, operations engineer and supervising electrical technician. Presently, he is the supervising protection engineer in the Sacramento office. Hayes received the BS degree in electrical and electronic engineering from California State University, Sacramento, and is a registered professional engineer in California. He is a member of the IEEE Power Engineering Society and is past chairman of the Sacramento Section of the PES.