The regulatory purview in the United States has shifted from stranded assets and generation to the power grid and distribution reliability. Regulators are becoming increasingly concerned with every issue relating to the delivery of reliable power to customers. One concern is that utility parent or holding companies headquartered outside the operating state are siphoning off funds that could be reinvested in T&D systems to improve reliability. Considering the Enron debacle, as well as the large number of mergers and acquisitions, it's no wonder most regulatory bodies are concerned about utilities' attention to customer service and reliability performance.

The increased regulatory concern is contributing to a confounding challenge for utilities: enhance reliability reporting while maintaining or improving reliability levels. Recent technology advancements have enabled utilities to report disturbances on their systems more comprehensively, which has created an artificial decline in reported reliability performance. These technologies have improved data gathering beyond the paper methods of the past, leading to higher reported frequencies and durations of customer interruptions.

Regulators, seeing an apparent decline in reliability performance, are reacting with mandates intended to ensure customer satisfaction and a utility focus on reliability. Most regulators want to hold distribution reliability performance constant at pre-reregulation levels or to improve it. They are concerned about the perceived reduction in overall customer service reliability and the apparent lack of investment in delivery system infrastructure. Unfortunately, some of the regulatory schemes that have been enacted may use inappropriate metrics to pursue the ultimate goal of ensuring reliability at a fair cost to the ratepayer. Each mandate has a cost associated with its implementation. Setting targets too stringently can force utilities to spend on less-than-optimal programs; setting them too loosely may allow a utility to shirk its customer service reliability responsibilities.

Each state has the right to mandate distribution reliability standards and targets, creating potentially 50 different regulations throughout the United States. Some states have no reliability regulation at all. Today, regulators may participate in the National Association of Regulatory Utility Commissioners (NARUC) where they can share ideas, but no requirement exists to adopt the same approach on any issue. Looking 20 years down the road, it is conceivable that a federal standard could be enacted as has been done in other countries. The U.S. Department of Energy (DOE) has expressed a desire to begin regulating distribution reliability at the federal level. However, it will take years to change the status quo because most states are unwilling to forgo this right, and who can blame them?

State of the Union

The most common metrics used by state regulators are the system-level indices SAIFI, SAIDI and CAIDI; to a lesser extent, CAIFI and MAIFI are used. These indices, as well as the factors that affect them, are defined in the IEEE Guide for Electric Power Distribution Reliability Indices (IEEE 1366-2001). In short, they are engineering metrics that track the frequency and duration of customer and system interruptions. The indices are applied at the system, circuit and customer levels for planning and regulatory reporting purposes. While they provide a reasonable method for utilities to plan their operations, they lack a connection with customer satisfaction. Consistent customer satisfaction data are often difficult to obtain, and when they are available, they must be considered at a different confidence level than accounting or reliability data, which makes them difficult to use for decision-making purposes.

Many states have enacted some form of reliability reporting requirements (Fig. 1). The states shown in yellow have regulation for one company within the state, which arose from merger activity, but do not generally track reliability. Some states have no formal reporting requirements, while others require minimal annual reporting. Still others assess penalties through performance-based rate (PBR) mechanisms. PBR appeared to be the definitive solution for reregulation of the electric delivery system in the mid to late 1990s, when mergers and acquisitions happened almost monthly. However, merger and acquisition activity has slowed over the past two years, largely as a result of Sept. 11, 2001, the Enron scandal and the downturn in the economy, which in turn has slowed some of the reliability-based regulatory activities.

Components of Regulatory Plans

Many state regulatory plans share common elements, including the indices SAIFI, SAIDI and CAIDI; defined event exclusions for storms and other disasters; and lists of poor-performing circuits. In some cases, there are also inspection programs, tree-trimming mandates and PBR initiatives.

Indices. The most commonly used indices are SAIFI, SAIDI and CAIDI, with most states choosing to use two of the three for reporting and PBR purposes. As stated in IEEE 1366-2001:

CAIDI = SAIDI / SAIFI (1)

Thus, using two of the three indices provides meaningful information about all three. Most states choose SAIFI and either CAIDI or SAIDI. SAIFI and SAIDI tend to provide information on system design issues, while CAIDI provides a view of operational performance during restoration.
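To make the relationship concrete, the following minimal sketch computes all three indices from a handful of hypothetical interruption records. The record layout and sample values are illustrative only, not drawn from any utility's actual data.

```python
# Each record: (customers_interrupted, minutes_of_interruption)
interruptions = [
    (1200, 90),   # 1200 customers out for 90 minutes
    (300, 45),
    (5000, 180),
]
customers_served = 50_000  # total customers on the system

# SAIFI: total customer interruptions / total customers served
saifi = sum(n for n, _ in interruptions) / customers_served

# SAIDI: total customer-minutes of interruption / total customers served
saidi = sum(n * mins for n, mins in interruptions) / customers_served

# CAIDI: average restoration time per interrupted customer,
# which is identically SAIDI / SAIFI (Equation 1)
caidi = saidi / saifi

print(f"SAIFI = {saifi:.3f} interruptions per customer")
print(f"SAIDI = {saidi:.1f} minutes per customer")
print(f"CAIDI = {caidi:.1f} minutes per interruption")
```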

A few states have added MAIFI to the reporting requirements in an effort to get a handle on how many momentary interruptions customers are experiencing. Unfortunately, this measure does not provide the information most parties, especially customers, are seeking. Momentary interruptions occur when protective devices operate to remove a fault from the system, and protective devices are found at all levels of the system. Today, most utilities are only able to gather circuit-breaker operations at the substation level, and in some cases they do not have an automated way to collect even that information. Additionally, momentary interruptions are most pertinent to industrial customers, which typically make up less than 0.1% of all customers on a system. It is advisable to develop ways to measure those customers' specific performance instead of trying to measure at the system level, where collected data do not reflect localized industrial customer performance.

Defined Exclusions. Most regulatory bodies have defined “exclusions” for reporting purposes. These exclusions typically include transmission interruptions, planned interruptions, major events or storms, momentary interruptions and, in at least one case, transformer, secondary and service-related interruptions. Today, definitions vary from state to state, which makes comparison of information virtually impossible. Even if the states used exactly the same definitions, there would still be comparison problems because of differences in interruption collection systems.

The IEEE Working Group on System Design (as of press time) is in the midst of the ballot process for a new version of the Guide for Electric Power Distribution Reliability Indices (P1366/D14) that describes a new methodology for defining major event days. The guide also clarifies some of the other supporting definitions. Some states are adopting IEEE 1366 as the basis for their regulation to remove the definition variability that often makes comparisons difficult.

Poor Performing Circuits. Poor-performing circuits are defined by regulatory bodies in many different ways. Depending on the state, utilities must review their “worst” performing 1%, 3%, 4%, 5% or 10% of circuits. In addition, each commission has chosen a slightly different method for defining worst-performing circuits: some use one index, others a combination of indices, some with exclusions, others without. Most plans require that circuits not be repeat offenders two years in a row and that corrective action plans be submitted.

Most utilities have some circuits that will perform poorly year after year. These are typically long rural circuits with high exposure and few source options. The cost to change reliability performance of such circuits can be high, particularly if a second source is required.
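As an illustration of how such a screen might work, the sketch below ranks circuits by SAIFI, flags the worst 5%, and checks for two-year repeat offenders. The cutoff percentage, choice of index, circuit names and data values are all hypothetical, since each commission defines the screen differently.

```python
def worst_circuits(saifi_by_circuit: dict[str, float], fraction: float = 0.05) -> set[str]:
    """Return the worst `fraction` of circuits ranked by SAIFI (highest first)."""
    ranked = sorted(saifi_by_circuit, key=saifi_by_circuit.get, reverse=True)
    count = max(1, round(len(ranked) * fraction))  # always flag at least one circuit
    return set(ranked[:count])

# Hypothetical circuit-level SAIFI values for two consecutive years
year1 = {"CKT-101": 4.2, "CKT-102": 0.8, "CKT-103": 2.9, "CKT-104": 1.1}
year2 = {"CKT-101": 3.8, "CKT-102": 2.5, "CKT-103": 0.9, "CKT-104": 1.0}

# Circuits on the list two years in a row would trigger a corrective
# action plan under many state schemes.
repeat_offenders = worst_circuits(year1) & worst_circuits(year2)
print(repeat_offenders)  # {'CKT-101'} with these sample numbers
```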

Inspections. Facility inspections are included in regulatory plans infrequently; where they are included, however, they are often prescriptive. Most notably, California instituted an intensive inspection requirement in its General Order 165. Inspection intervals range from one to five years for most items and up to 20 years for specific types of poles. Table 1 shows the intervals.

The plans that aren't prescriptive often refer to the National Electrical Safety Code (NESC) as the basis for inspections. The NESC only requires that inspections occur at such intervals as experience has shown necessary.

Tree Trimming. Tree trimming is an issue in many states, largely because trees are one of the top causes of customer interruptions. Regulators often view utilities' tree-trimming expenditures as discretionary O&M spending and believe it is one of the first items to be cut in economic downturns. In addition, it can be difficult to obtain permission to trim in certain historically significant or picturesque established areas. The resulting lack of trimming can lead to poor performance during both minor and major storms.

Often, regulations refer to the NESC and ANSI A300 (the standard for tree care). The NESC only requires the trimming or removal of trees that may interfere with ungrounded conductors. The rules in Oregon are detailed and describe minimum required clearances for transmission, distribution and secondary conductors. Virginia also established a detailed guideline that carries no penalties but allows the commission staff to contact a utility if complaints are made. Following the January 2000 ice storm in Virginia, the Virginia State Corporation Commission “suggested that Virginia Power had placed too much emphasis on aesthetics and wishes of the property owners at the expense of reliability, and recommended that Virginia Power intensify its tree trimming in order to improve reliability.”

California also has mandated extensive tree trimming initiatives in I.94-06-012, which can be found on its Web site at www.cpuc.ca.gov.

Performance-Based Rates

Most states that have enacted PBR and service quality indices (SQI) with penalties and incentives have done so on a company-by-company basis. Some states give the collected penalties to customers; others put them in escrow to offset performance in subsequent years; still others return them to the utilities to be spent on reliability programs.

One example of PBR comes from Massachusetts, where the Department of Telecommunications and Energy (DTE; Boston) put a penalty/incentive-based SQI program in place following an open docket on service quality that resulted from mergers and acquisitions within the state. Four electric investor-owned utilities operate under this SQI program. While the other Massachusetts utilities have a dead-band/penalty SQI, one utility has the additional opportunity to earn incentives. For that utility, the DTE assesses performance based on the metrics given in Table 2.
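The mechanics of a dead-band scheme can be sketched simply: no revenue adjustment applies while measured performance stays within a band around the benchmark, and penalties (or incentives, where allowed) scale up to a cap outside it. The benchmark, band width and dollar amounts below are hypothetical; the actual Massachusetts formulas are set company by company in each utility's plan.

```python
def sqi_adjustment(actual: float, benchmark: float, band: float,
                   max_dollars: float, incentives_allowed: bool = False) -> float:
    """Return a revenue adjustment: negative = penalty, positive = incentive."""
    deviation = actual - benchmark
    if abs(deviation) <= band:
        return 0.0                      # inside the dead band: no action
    overshoot = abs(deviation) - band
    scaled = min(1.0, overshoot / band) * max_dollars  # linear ramp to the cap
    if deviation > 0:                   # worse than benchmark (e.g., higher SAIDI)
        return -scaled                  # penalty
    return scaled if incentives_allowed else 0.0

# SAIDI of 140 min against a 120-min benchmark with a 10-min dead band:
print(sqi_adjustment(actual=140, benchmark=120, band=10, max_dollars=2_000_000))
# -> -2000000.0 (full capped penalty with these hypothetical numbers)
```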

In the early 1990s, most plans called only for reliability metrics. Today, many PBRs include both reliability metrics and customer-service metrics, such as those in Table 2. Texas was one of the first states to take this approach, in 1998; the regulation resulted from poor storm performance following a layoff and merger at one of the state's utilities.

Major Events Defined

In recent years, many regulators, customers and utilities have wanted to compare utility performance. Two of the biggest obstacles to performance comparison are the lack of a common major event definition and the lack of common data-capture systems. IEEE 1366 contains an appendix of storm definitions compiled by the Edison Electric Institute (EEI; Washington, D.C.) in 1999. A common theme is 10% of the customers interrupted for 24 hours. Some commissions apply this approach on an operating-area basis, while others apply it to the whole company. One regulator uses 15% of the company's customers interrupted during the event. Others cite a weather service “named storm.” As you can see, most of these definitions are subjective. With this variation, it is easy to see why comparability is an issue. IEEE 1366 offers a chance to address the major event definition, but it does not answer common data collection issues.

The IEEE WG on System Design decided to develop a method to allow universal comparability. It worked for two years, tested data from 37 utilities ranging in size from 1400 to 5 million customers, and gained consensus on a new approach that removes subjectivity from identification of major event days.

The methodology identifies days on which the utility's operating capability and system design are exceeded. One of the keys to the approach is that no interruption data are excluded (for example, transmission, planned and all other interruptions are included in the dataset). Unlike existing state approaches, this method uses all of the data to characterize both day-to-day operating conditions and crisis-mode operation on major event days. Using the day-to-day performance helps utilities identify areas that truly require attention and thus make better business decisions. It also can be used for goal setting and trending for regulatory purposes. Remember, there are no exclusions, so the major event days should be reviewed separately to assess performance during that very different operating condition.
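The balloted draft's classification rests on a log-normal threshold (the so-called “2.5 beta” method): daily SAIDI values are log-transformed, and any day whose SAIDI exceeds exp(alpha + 2.5 × beta) is flagged as a major event day, where alpha and beta are the mean and standard deviation of the logged history. The sketch below shows that calculation under simplifying assumptions; details such as the treatment of zero-SAIDI days are glossed over, the sample values are invented, and P1366 itself should be consulted for the full procedure.

```python
import math
import statistics

def major_event_threshold(daily_saidi_history: list[float]) -> float:
    """Compute T_MED from (typically five years of) daily SAIDI values."""
    logs = [math.log(s) for s in daily_saidi_history if s > 0]  # drop zero days (simplified)
    alpha = statistics.mean(logs)   # mean of ln(daily SAIDI)
    beta = statistics.stdev(logs)   # std dev of ln(daily SAIDI)
    return math.exp(alpha + 2.5 * beta)

# Hypothetical daily SAIDI values (minutes per customer per day)
history = [0.8, 1.2, 0.5, 2.0, 1.1, 0.9, 15.0, 1.3, 0.7, 1.0]
t_med = major_event_threshold(history)
major_event_days = [s for s in history if s > t_med]
print(f"T_MED = {t_med:.2f}; flagged days: {major_event_days}")  # flags the 15.0 day
```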

Regulators in British Columbia have already used the methodology to assess performance for one of their utilities, despite having a different PBR mechanism in place. Other states are anxiously awaiting final IEEE approval of the document, which is expected in December 2003, before they adopt the approach.

Many regulators are appropriately concerned over distribution reliability performance. A sound approach to lessening this concern is for utilities to work closely with regulators on reliability issues. In the United States, we will have potentially 50 different approaches to reliability regulation for the foreseeable future. Over time, it would be ideal if regulators could develop common metrics and measures that assess performance uniformly and fairly, and attach appropriate penalties/incentives for performance. Absent one approach, utilities that serve multiple states will be forced to make different spending decisions, depending on the regulation for each jurisdiction.

The IEEE has developed a methodology to help remove subjectivity from reliability reporting. Adopting the newly developed approach should assist commissions, customers and utilities in making valid comparisons, and it will certainly assist utilities with their spending decisions. Customers can feel confident that utilities are making the best possible business decisions regarding reliability. Regulators will be better positioned to set reasonable targets instead of setting them too stringently, thereby forcing utilities to spend on less-than-optimal programs, or too loosely, thereby allowing a utility to shirk its customer service and reliability responsibilities.

Cheryl A. Warren received the BSEE (1987) and MSEE (1990) degrees from Union College in Schenectady, New York. She has been employed by Central Hudson Gas and Electric Co. (Poughkeepsie, New York); Power Technologies Inc./Stone and Webster (Schenectady); and Navigant Consulting Inc. (Albany, New York). She now works for National Grid USA Service Co. Inc. (Albany) as manager of T&D systems engineering. Her areas of expertise are distribution reliability analysis, power quality, GIS/OMS and enterprisewide IT systems integration. Warren has written or co-written 19 technical papers. She is active in the IEEE and chairs the IEEE Working Group on System Design that wrote the Guide for Electric Power Distribution Reliability Indices (IEEE 1366-2001) and P1366/D14.
cheryl.warren@us.ngrid.com

Table 1. California Inspection Requirements

Electric Company System Inspection Cycles (Maximum Intervals in Years)

                                                     Patrol        Detailed      Intrusive
                                                     Urban  Rural  Urban  Rural  Urban  Rural
Transformers
  Overhead                                             1      2      5      5      -      -
  Underground                                          1      2      3      3      -      -
  Padmounted                                           1      2      5      5      -      -
Switching/protective devices
  Overhead                                             1      2      5      5      -      -
  Underground                                          1      2      3      3      -      -
  Padmounted                                           1      2      5      5      -      -
Regulators/capacitors
  Overhead                                             1      2      5      5      -      -
  Underground                                          1      2      3      3      -      -
  Padmounted                                           1      2      5      5      -      -
Overhead conductors and cable                          1      2      5      5      -      -
Streetlighting                                         1      2      x      x      -      -
Wood poles under 15 years                              1      2      x      x      x      x
Wood poles over 15 years, no intrusive inspection      1      2      x      x     10     10
Wood poles that have passed intrusive inspection       -      -      -      -     20     20

Interruption Indices

SAIFI (System Average Interruption Frequency Index): How often the average customer's lights are out, measured in times per year.

SAIDI (System Average Interruption Duration Index): The total time without power for the average customer per year, measured in minutes.

CAIDI (Customer Average Interruption Duration Index): How long it takes to restore power on average for the customers interrupted, measured in minutes.

CAIFI (Customer Average Interruption Frequency Index): How often customers who experience at least one interruption are interrupted, measured in times per year; unlike SAIFI, it averages over only the customers actually affected.

MAIFI (Momentary Average Interruption Frequency Index): The number of momentary interruptions experienced by the average customer per year. Unfortunately, few utilities have the technology in place to measure this index accurately. Additionally, momentaries are most pertinent to industrial customers, which typically make up less than 0.1% of all customers on a system. Thus, measuring at the system level is not helpful.

Table 2. SQI Metrics

Performance Measure                        Contribution
Safety and Reliability
  SAIDI                                       22.5%
  SAIFI                                       22.5%
  Lost work time accidents                    10%
Consumer Service and Billing
  Telephone answering rate                    12.5%
  Service appointment rate                    12.5%
  On-cycle meter readings                     10%
Consumer Division Statistics
  Consumer division cases                     5%
  Billing adjustments                         5%