Any utility veteran can attest to complaints from certain key accounts claiming that reliability problems are costing them millions of dollars per year. Yet those same customers balk at paying for a solution that would cost a fraction of that amount. These situations are symptomatic of a larger problem: reliability targets. On one hand, customer complaints seem to indicate that reliability is too low. On the other hand, customers' unwillingness to pay for improvements implies that reliability is too high. How, then, should reliability targets be set to ensure appropriate levels?
For now, distribution systems are under the jurisdiction of state regulatory agencies, giving commissioners the ultimate power to set reliability targets, although many have not. In the future, retail wheeling and new technologies may allow the marketplace to dictate reliability. Meanwhile, every utility has adopted its own reliability goals, which raises the question: are these reliability targets too high or too low?
Reliability indices are a measure of average customer reliability. Nearly every utility computes them on an annual basis and many are starting to set reliability targets based on benchmark data. Table 1 gives an example of benchmark data, which is based on a 1995 IEEE survey.
Benchmark targets, such as “achieving top-quartile reliability,” often are too high. The implicit assumption is that top-quartile utilities are doing the best job, though this may not be the case. Utilities with high load density, underground systems, sparse vegetation or mild weather often achieve top-quartile performance. Utilities reporting high levels of reliability may be using manual outage reporting that does not capture interruption data as comprehensively as an automated outage management system. Some utilities also exclude scheduled outages and bulk power events. Even when these differences are accounted for, top-quartile benchmarks probably represent more reliability than most customers would be willing to pay for if given the choice.
Dan Kowalewski, ComEd's director of reliability and power quality, recognizes the difficulties associated with benchmark targets. Although ComEd has set internal SAIFI and CAIDI targets based on Theodore Barry & Associates benchmark data, Kowalewski states, “Customers are not interested in reliability indices, they are interested in specific problems within their own neighborhoods. These pockets of reliability problems tend to be associated with multiple interruptions and/or lengthy interruptions. Attacking these problems has proven to be an effective way of improving customer satisfaction. As such, ComEd supplements reliability index targets with a formal program targeting reliability improvements for customers experiencing four or more interruptions per year and/or customers experiencing interruptions lasting more than four hours.”
Table 1. Benchmark reliability data (based on a 1995 IEEE survey)

| | SAIFI¹ (interruptions per year) | SAIDI² (minutes per year) | CAIDI³ (minutes per interruption) |
| --- | --- | --- | --- |
| Average of top 25% | 0.90 | 54 | 55 |
| Average of 50%-75% | 1.10 | 90 | 76 |
| Average of 25%-50% | 1.45 | 138 | 108 |
| Average of bottom 25% | 3.90 | 423 | 197 |

¹ SAIFI (System Average Interruption Frequency Index) — the average number of interruptions experienced by customers per year.
² SAIDI (System Average Interruption Duration Index) — the average number of interruption minutes experienced by customers per year.
³ CAIDI (Customer Average Interruption Duration Index) — the average duration of an interruption, equal to SAIDI divided by SAIFI.
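The three indices are simple to compute from outage records. The following Python sketch illustrates the arithmetic using a hypothetical system and invented outage events:

```python
# Sketch: computing SAIFI, SAIDI and CAIDI from outage records.
# The customer count and outage list below are hypothetical.

total_customers = 10_000

# Each outage: (customers interrupted, duration in minutes)
outages = [
    (500, 30),
    (1200, 90),
    (300, 15),
]

customer_interruptions = sum(n for n, _ in outages)
customer_minutes = sum(n * d for n, d in outages)

saifi = customer_interruptions / total_customers  # interruptions per customer per year
saidi = customer_minutes / total_customers        # interruption minutes per customer per year
caidi = saidi / saifi                             # average minutes per interruption

print(f"SAIFI = {saifi:.2f}, SAIDI = {saidi:.2f}, CAIDI = {caidi:.2f}")
```

Note that CAIDI is derived entirely from the other two indices, which is why benchmark tables such as Table 1 report all three even though only two are independent.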
Value-based reliability targets attempt to minimize the total societal cost of reliability, including a utility's cost to provide reliability and its customers' cost because of poor reliability (Fig. 1). This concept is appropriate for publicly owned utilities such as municipals and cooperatives. Bob Fletcher, principal engineer with Snohomish County Public Utility District #1 in Washington state, explains, “Snohomish attempts to minimize the total societal cost of reliability by balancing our cost to improve reliability and our customers' cost of service interruptions. For us, this corresponds to a system-wide SAIDI of 80 minutes per year.” As a planner of more than 30 years, Fletcher recognizes that value-based planning can lead to poor reliability for areas with a low population density and/or a high cost to serve. To address this problem, he explains, “To protect smaller areas from being neglected, we do not let any substation SAIDI exceed 120 minutes, any feeder SAIDI exceed 240 minutes or any customer SAIDI exceed 480 minutes.”
Generally, though, value-based targets are too high. They are based on cost surveys that typically overestimate customer willingness to pay, as well as on averages that do not address the needs of specific customers. To minimize societal cost, distribution systems must be capable of providing different levels of reliability to customers with different reliability needs.
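The value-based concept of Fig. 1 can be sketched in a few lines: choose the reliability level that minimizes the sum of the utility's cost to provide reliability and the customers' cost of interruptions. Both cost curves below are purely illustrative assumptions, not data from any utility:

```python
# Sketch of value-based target setting: pick the SAIDI level that
# minimizes utility cost plus customer interruption cost.
# Both cost functions are invented for illustration only.

def utility_cost(saidi):
    # Providing better reliability (lower SAIDI) costs the utility more.
    return 64_000_000 / saidi      # $/year, hypothetical curve

def customer_cost(saidi):
    # Customers incur more interruption cost as SAIDI rises.
    return 10_000 * saidi          # $/year, hypothetical curve

candidates = range(40, 201, 10)    # candidate SAIDI targets, minutes/year
best = min(candidates, key=lambda s: utility_cost(s) + customer_cost(s))
print(f"Value-based SAIDI target: {best} minutes/year")
```

The minimum of the total-cost curve falls where the marginal cost of improvement equals the marginal benefit to customers; with these invented curves, that happens near the middle of the candidate range, echoing the sort of system-wide figure Snohomish arrived at.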
Private utilities, which serve a majority of U.S. customers, are more interested in maximizing profits than societal welfare. Historically, return on assets has been guaranteed, encouraging utilities to adopt conservative and expensive design standards, and to aggressively tackle reliability problems knowing that these costs can be recovered. In other words, reliability targets, though implicit, historically have been too high.
Deregulation changes everything. Since the Energy Policy Act was passed in 1992, virtually every major utility has undergone massive downsizing and has drastically reduced spending. Utilities are reducing costs by deferring capital projects, loading equipment to higher levels, reducing in-house expertise and increasing maintenance intervals. As a direct consequence, reliability on these systems is starting to deteriorate.
Many state utility commissions do not view lower reliability as an option and are turning to performance-based rates (PBRs) to keep reliability targets high. In their most general form, PBRs are regulatory statutes that penalize utilities for poor reliability and reward them for good reliability.
Table 2. Products available to customers for increasing equipment availability

| Device | Power rating | Ride-through time | Approximate cost |
| --- | --- | --- | --- |
| Uninterruptible Power Supply (UPS) | 0.3 kW | 15 min | $50 |
| Gas Generator | 5.0 kW | n/a | $700 |
| Manual Transfer Switch | 500 kW | n/a | $8000 |
| Automatic Transfer Switch | 500 kW | n/a | $10,000 |
| Diesel Generator | 500 kW | n/a | $100,000 |
| Ultra Capacitor | 500 kW | 5 sec | $175,000 |
| Lead Acid Battery UPS | 500 kW | 30 sec | $200,000 |
| Superconducting Magnetic Energy Storage | 500 kW | 3 sec | $300,000 |
| Static Transfer Switch | 10,000 kW | n/a | $500,000 |
When subject to a PBR, a utility will attempt to minimize the sum of reliability costs and PBR costs. Reliability is increased if one dollar in improvements saves more than one dollar in penalties. Conversely, reliability is decreased if one dollar in reduced spending results in less than one dollar in additional penalties. Like value-based planning, PBRs will encourage utilities to make decisions based on average system reliability, which delivers inappropriate reliability to most customers. Unlike value-based planning, most existing PBRs set reliability targets that are too low. Because the penalties are so small, they will virtually never encourage reliability improvement (in these cases, other factors such as political pressure will tend to drive reliability targets).
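This marginal-dollar logic can be sketched directly. The penalty schedule, starting SAIDI and candidate improvement projects below are all hypothetical:

```python
# Sketch of a utility's decision under a PBR: fund a reliability project
# only if a dollar spent saves more than a dollar of penalty.
# All figures are invented for illustration.

saidi_target = 120           # minutes/year before penalties apply
penalty_per_minute = 50_000  # $ penalty per SAIDI minute above target

def pbr_penalty(saidi):
    return max(0, saidi - saidi_target) * penalty_per_minute

# Candidate projects: (cost in $, SAIDI reduction in minutes)
projects = [(400_000, 10), (600_000, 8), (900_000, 5)]

saidi = 150
for cost, reduction in projects:
    penalty_saved = pbr_penalty(saidi) - pbr_penalty(saidi - reduction)
    if penalty_saved > cost:   # a dollar spent saves more than a dollar
        saidi -= reduction
print(f"Resulting SAIDI: {saidi} minutes/year")
```

With small penalties, `penalty_saved` rarely exceeds project cost, so the utility funds little or nothing; this is exactly why weak PBR penalty schedules set effective reliability targets that are too low.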
Several other major drawbacks to PBRs exist. First, they subject utilities to financial risk. Because reliability varies naturally from year to year, penalties and rewards will vary as well. This reduces investors' ability to forecast cash flows and tends to depress market capitalization. Second, PBRs can encourage short-term behavior with long-term consequences. For example, a utility may be able to meet or exceed reliability targets for a period of time while still significantly reducing its reliability spending. If its system is in overall good condition, it may be several years before the effects of the reduced spending begin to show up in the reliability indices. Once system reliability begins to degrade, however, “catching up” is no simple short-term task, and the utility may find itself incurring penalties for years to come.
The Reliability Marketplace
In a free market, the laws of supply and demand would dictate reliability targets. If reliability demand exceeded reliability supply, the shortage of reliability would drive up price and give utilities the incentive to produce more reliability. In reality, distribution systems are a natural monopoly and free market behavior does not generally apply — or does it? While a utility may define reliability based on service interruptions, customers define reliability based on the ability of their electrical equipment to function when needed, and may look to the marketplace for their reliability needs.
There is a thriving free market for equipment reliability. Although customers may not have a choice regarding their service reliability, there are vast product lines designed to increase equipment availability (Table 2). Anyone wishing to provide higher reliability for his or her computer can purchase a 300-W uninterruptible power supply at a local electronics store for US$50. Anyone wishing to immunize his or her house against extended interruptions can purchase a 5000-W gasoline generator for US$700 at a local home improvement store. These are the products utilities are competing against in the reliability arena, and the market for these products is presently the best measure of customer demand.
While many industrial and commercial customers are purchasing reliability equipment, most residential customers are not. From this, one can infer that reliability targets are too high for most but too low for many. In any event, to adequately address the needs of different customer groups, utilities will need to plan and design distribution systems based on differentiated reliability targets for customers with different reliability requirements.
One approach to customer choice is differentiated service. Under this paradigm, utilities offer customers a menu of service connection options with various reliability and price attributes. Examples include primary selective service and spot network service. Customers willing to pay for higher reliability are supplied by multiple feeders so that service is maintained after one or more feeders become de-energized. Figure 2 shows typical service connection options.
It is not cost effective for utilities to route multiple feeders near most of their customers. Spot networks and selective services are limited to high-density areas and are not available to residential or small commercial customers. Reliability for these customers can be improved with primary loops and feeder automation, but full customer choice becomes difficult because all customers on a switchable feeder section will experience essentially the same reliability.
It is possible to provide small customers choice by equipping distribution transformers with power-quality devices (Fig. 3). Customers selecting basic service are connected to the left bus and experience an interruption whenever the primary feeder is de-energized. Customers selecting advanced service are connected to the center bus. An energy-storage device is able to supply these customers during interruptions shorter than one minute, eliminating momentary interruptions and voltage sags. Customers selecting premium service are connected to the right bus. A distributed generator is able to start within one minute and provide these customers with uninterruptible power. This type of technology allows customers to choose their own level of reliability. Unfortunately, the vast majority of customers are not willing to pay for higher reliability because current reliability targets are generally too high. Differentiated service may set reliability targets in the future, but not until the cost of power-quality devices goes down and/or the demand for reliability goes up.
Reliability guarantees are the simplest method of allowing customer choice to set reliability targets. Each customer chooses a reliability plan: expensive plans guarantee high reliability, basic plans guarantee modest reliability and the cheapest plans provide no guarantees. Customers experiencing reliability below guaranteed levels receive rebate checks or credits on their energy bills.
Reliability guarantees allow neighboring customers to select different plans. If low reliability is experienced in an area where many customers are signed up for high reliability, rebate costs will be high, and the utility will spend money to improve reliability. If high reliability is experienced in an area where many customers are signed up for low reliability, the utility will reduce spending in this area.
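A rebate scheme of this kind might be sketched as follows; the plan names, guarantee levels and rebate rates are invented for illustration:

```python
# Sketch of reliability-guarantee rebates: each customer picks a plan
# and receives a rebate when delivered reliability misses the guarantee.
# All plans, thresholds and rates below are hypothetical.

plans = {
    # plan: (guaranteed max interruption hours/year, rebate $ per excess hour)
    "premium": (1, 100),
    "standard": (4, 25),
    "economy": (None, 0),   # no guarantee, no rebate
}

def rebate(plan, hours_interrupted):
    guaranteed, rate = plans[plan]
    if guaranteed is None or hours_interrupted <= guaranteed:
        return 0
    return (hours_interrupted - guaranteed) * rate

# A year with 6 hours of interruptions costs the utility different
# rebates depending on which plans its customers chose.
print(rebate("premium", 6))
print(rebate("standard", 6))
print(rebate("economy", 6))
```

Summing expected rebates over the customers on a feeder is the price signal: where many premium customers sit, poor reliability becomes expensive for the utility, and reliability spending follows.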
On the surface, reliability guarantees seem to create a near-free market, with customer choice sending price signals to utilities. Many utilities, in fact, are beginning to offer reliability guarantees to major accounts. Widespread implementation for all retail customers is problematic because of the free-rider effect. Because customers are connected to the same wires as their neighbors, many will sign up for inexpensive reliability in the hope of benefiting from neighbors who sign up for expensive reliability. Most consumers will resent paying for their neighbors' reliability and will choose less-expensive plans. The free-rider effect, therefore, leads to reliability targets lower than efficient market levels.
Setting appropriate distribution-reliability targets is a complex process with technical, economic and political implications. In the past, regulated utilities provided reliability implicitly through design and maintenance standards, which led to what many refer to as gold-plated systems. These practices were not based on reliability targets, and benchmark targets based on them will generally be too high. Alternative methods of setting reliability targets such as PBRs and reliability guarantees may lead to reliability targets that are too low.
At present, no silver bullet exists to kill reliability targets that are too low or too high. Benchmarking can be like comparing apples to oranges. Value-based targets are based on reliability measures that may not be strongly correlated with customer satisfaction. PBRs tend to use poor metrics and subject a utility to financial risk. Differentiated service allows for customer choice but is not practicable for most customers because existing reliability is already too high. Performance guarantees allow for customer choice but suffer from free-rider problems. There are no easy answers, but moving toward customer choice and differentiated reliability is the best way to ensure that reliability targets are neither too low nor too high.
Richard E. Brown received the Ph.D. in electrical engineering from the University of Washington in 1996. He is currently the director of Consulting IT for ABB Consulting and specializes in distribution systems, reliability assessment, design optimization and computer applications to power system analysis. He is a senior member of IEEE and a registered professional engineer.
Michael W. Marshall received the BSEE degree from the University of Missouri-Rolla in 1982. He is presently a principal consultant for ABB Consulting and has more than 20 years of experience in power generation, transmission, distribution and reliability assessment. He is a member of IEEE and a registered professional engineer.