A recurring theme that has been set in motion by the gradual deregulation of the electric utility industry is the importance of system reliability. Interviews with utility executives around the United States have produced almost identical responses when the subject is discussed. Richard Sonstelie, chairman and CEO of Puget Sound Energy, observed that the key to the success of Puget's gas and electric operations would be the ability to provide reliable delivery. "More than ever in our history," said Sonstelie, "reliability of service is going to be critical to our ability to succeed in this business. The corollary to that realization is that maintenance is high on our agenda."

In contrast to assurances from utility executives regarding their commitment to ensuring reliable electric service, Jim Burke of ABB described another view in his paper for the Reliability in a Deregulated Market International Conference and Summit Meeting, organized by T&D World magazine in September 1998. Burke's observation was that distribution reliability was fast becoming a victim of budget reductions. He expected that these reductions would severely affect residential customers, since utilities did not consider them to be as profitable as industrial and commercial customers. "As overall system reliability is reduced," said Burke, "utilities will find themselves in the unenviable position of facing increased demands for reliability from their largest customers who have sensitive equipment." He postulated that utilities would be forced to devise performance-based rates and policies to satisfy these customers. That is, customers who wish to may purchase a higher level of reliability by paying higher rates.

Defining Reliability

In the lexicon of the power-delivery engineer, reliability and continuity of service have been synonymous. In the past several years, the concept of reliability has expanded to treat power quality as a distinct characteristic. Power quality and continuity of service are now considered the two elements that define system reliability. Within this definition, power quality implies an unvarying stability of voltage and frequency in power delivery. Continuity of service refers to the dependability of the infrastructure, which implies that lines and structures are stout enough to withstand the rigors of natural forces. Power quality, therefore, is considered a function of the operation of interactive equipment in the form of intelligent electronic devices (IEDs), while continuity of service is viewed as a function of static utility plant in the form of towers, poles and conductors. The rapid development and deployment of IEDs has demonstrated their usefulness in maintaining high levels of power quality, where momentary voltage dips can interrupt manufacturing processes that depend on electronic controls. Computer operations of all kinds are subject to misoperation when power quality is not maintained. Indeed, the advent of IEDs may hold the answer to Burke's dire predictions, since these devices can monitor and report on wave shape, frequency, harmonic distortion and variations in voltage.

As an example, consider the voltage-sag problem: an advanced detector is installed in a bypass mode in the circuit. At the moment of disturbance, when voltage begins to sag, the detector transfers to a sag-correct mode in less than half a cycle. From that point on, the device delivers a regulated sinusoidal output to the load, and when normal voltage returns, the load transfers back to the utility line supply. This device provides much the same correction as superconducting magnetic energy storage systems, which have been applied to large factory loads.
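To illustrate the detection side of such a device, the sketch below (hypothetical thresholds and sampling rate, not any vendor's actual algorithm) evaluates RMS voltage one half-cycle at a time, the granularity that makes a transfer decision "in less than half a cycle" possible.

```python
import math

# Hypothetical half-cycle RMS sag detector (illustrative sketch only).
# On a 60-Hz system a half-cycle is about 8.3 ms, so evaluating RMS
# voltage every half-cycle supports a sub-half-cycle transfer decision.

NOMINAL_V = 120.0     # nominal RMS voltage (assumed)
SAG_THRESHOLD = 0.90  # transfer when RMS drops below 90% (assumed)

def half_cycle_rms(samples):
    """RMS of one half-cycle of instantaneous voltage samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

def detect_sag(samples):
    """True if this half-cycle's RMS is below the sag threshold."""
    return half_cycle_rms(samples) < SAG_THRESHOLD * NOMINAL_V

# Simulate one half-cycle at nominal voltage and one sagged to 70%.
n = 64  # samples per half-cycle (assumed sampling rate)
normal = [NOMINAL_V * math.sqrt(2) * math.sin(math.pi * i / n)
          for i in range(1, n + 1)]
sagged = [0.70 * v for v in normal]

print(detect_sag(normal))  # False: voltage healthy, stay in bypass
print(detect_sag(sagged))  # True: transfer to sag-correct mode
```

A real device would, of course, compute this continuously in hardware and also manage the transfer back to the line supply once voltage recovers.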

Smart Systems

By predicting the probability of trouble before it happens, IEDs have become an important part of the maintenance system. This ability to pinpoint the need for maintenance has become integral to reliability-centered maintenance, in which on-time maintenance is replacing routine scheduled maintenance.

The increasing pace of activity involving leading-edge technology has produced smart systems that provide remote access to data, self-diagnostics of control equipment and integration of functions among relays, meters, remote terminal units and load controllers. Bypassing the human link in dispatch centers, the smart devices can communicate with each other to perform system operations within pre-set limits that achieve high power quality within the overall framework of reliability. It appears that system reliability, with respect to power quality, is achievable at a level beyond the expectations of a few years ago.

Evaluating Performance

To properly evaluate system reliability, statistical indices have been developed to quantify performance. In a paper at the T&D World-sponsored Reliability Conference and Summit Meeting, Mark F. McGranaghan of Electrotek Concepts Inc. discussed the necessity of calculating statistics that characterize power quality levels. The most basic index for voltage sags is SARFI(X), the System Average RMS Frequency Index for a voltage threshold X. This index represents the average number of specified short-duration RMS variation events that occurred over the monitoring period per customer served. The specified disturbances are those variations with a voltage magnitude less than X for a voltage sag or greater than X for a voltage swell. It should be noted that SARFI(X) is a different statistic from SAIFI, the System Average Interruption Frequency Index, which counts sustained interruptions rather than RMS variations.
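As a sketch of how SARFI(X) might be computed from monitoring data, the example below uses invented event records; the event list, customer counts and function name are assumptions for illustration, not Electrotek's actual procedure.

```python
# Illustrative SARFI(X) calculation on made-up monitoring data.
# SARFI(X) = (sum over events of customers experiencing an RMS
# variation beyond threshold X) / (total customers served).
# For thresholds below 100% of nominal, qualifying events are sags.

TOTAL_CUSTOMERS = 10_000  # assumed system size

# Each event: (residual voltage as % of nominal, customers affected)
events = [
    (65, 1200),   # sag to 65% of nominal seen by 1200 customers
    (85, 3000),   # sag to 85% seen by 3000 customers
    (40, 500),    # deep sag to 40% seen by 500 customers
]

def sarfi(x, events, total_customers):
    """Average number of sag events below threshold x (% of nominal)
    per customer served over the monitoring period."""
    affected = sum(n for v, n in events if v < x)
    return affected / total_customers

print(sarfi(90, events, TOTAL_CUSTOMERS))  # all three events: 0.47
print(sarfi(70, events, TOTAL_CUSTOMERS))  # only the deeper sags: 0.17
```

Note how lowering the threshold narrows the index to severe events, which is what lets a utility report quality at the levels its sensitive customers actually care about.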

Monitoring systems must be installed to track performance and collect data that can be used to calculate the required indices of quality. While many utilities have already installed monitoring systems to check on system performance on a continuous basis, some have installed these systems at the customer's premises for those customers who have contracted for premium service. These are the customers who require high levels of power quality and who are paying higher rates than those customers who can tolerate occasional voltage dips or even interruptions. Utilities in this category include Detroit Edison, Consumers Power and First Energy in Michigan.

The Infrastructure Component

In addition to the issue of maintaining power quality is the issue of continuity of service, which has been an historical hallmark of power delivery. Special attention has always been paid to interruptions due to line faults caused by contact of transmission lines with trees growing in the right-of-way or by storms that have toppled structures. While system operations have been improving dramatically due to the advent of interactive devices installed at both customer locations and the utility's substations, the dependability of the static elements involving towers, poles and conductors is largely beyond the control of the utility.

Whether or not we believe that the burgeoning growth of the world's population is the major cause of climatic changes characterized by global warming and El Niño storms, recent natural disasters are taking a heavy toll on electric power systems. A sampling of events from just the past couple of years includes ice storms in eastern Washington state, the northern tier of New England and southeastern Canada; mudslides and firestorms in California; flooding in Texas and North Dakota; and hurricanes in Florida, Louisiana and Central America. All of these events damaged the overhead plant of local utilities, creating outages that lasted for weeks. In New Zealand, unseasonably hot weather created such massive demand for power that the cables supplying Auckland failed in cascade, plunging the city into darkness for several weeks. In terms of reliability, these events are in a special class because utilities are powerless to render their systems immune to the forces of nature.

A typical example of how a storm can devastate a whole area is the sequence of events that occurred in Spokane, Washington, in the early morning of Nov. 19, 1996. With temperatures of 27°F to 33°F, a steady rain deposited ice on trees, utility wires, cars and fences, while the streets and sidewalks remained clear because of heat retained in the ground. Ice sheaths ranging from 1/4 inch to 1-1/4 inches added as much as five times the normal weight to tree limbs, which broke and fell on roofs and overhead lines. The storm lasted for eight days, during which time 2 to 5 inches of snow also fell. Every part of the Washington Water Power delivery system experienced significant damage.

In January 1998, a five-day ice storm in Canada deposited 4 inches of ice on electric lines. The same storm destroyed 200 transmission structures and 8000 poles in the Niagara Mohawk service area in northern New York state, and Niagara Mohawk had to replace 2000 transformers as a result. At the same time, rain fell in Vermont, New Hampshire and Maine while temperatures hovered around freezing, causing thousands of trees and poles to break, more than in all storms of the past 30 years combined.

With power interruptions of these magnitudes, utilities must devise a strategy to expedite the resumption of service. Typically, utilities have relied on emergency crews from other utilities to help rebuild damaged lines and restore power, but in the future crews from other utilities may not be available because of downsizing. A new approach to this problem involves partnering with contractors and suppliers. These alliances could go a long way toward minimizing outage time while keeping costs low.

An even more innovative approach is to institute an emergency and disaster preparedness plan well before the next emergency strikes.

The Transmission System

Dejan Sobajic, manager for grid operations and planning at the Electric Power Research Institute, has observed that "marketplace competition and the growing demand for transmission services" have put grid operators in the position of wanting to know just how much load their lines can handle. Because conductor load must be held below the level at which conductor temperature would cause excessive sag and impaired clearances, traditional ratings have been based on conservative values for ambient air temperature and crosswind speed. It has been acknowledged for many years that the concurrence of high ambient temperatures and low wind speeds is rare. This fact has encouraged system operators to take calculated risks by loading their lines to higher levels than in the past.

Instead of increasing line ratings based on hunches and educated guesses, utilities are looking to new technologies to provide greater transmission capacity. Bonneville Power Administration (BPA), which operates 80% of the high-voltage transmission in the Northwest United States, provides an example of how engineering can solve the problem. Sharon Blair of BPA described how the agency has used "innovative technologies and techniques to operate and maintain the system." Investment in fiber-optic communications grew from zero in 1992 to nearly US$20 million in 1998. Together with computer-controlled devices, the fiber-optic links are capable of monitoring and managing the system. In addition, engineers designed equipment not only to regulate voltage but also to boost existing capacity rather than building new lines and substations. Static VAR compensators and series and shunt capacitor banks have been installed to accomplish these goals. "To monitor and correct outages," said Blair, "automatic controls were installed to stabilize the system following a disturbance." The dc intertie to California is an example of how capacity can be increased using engineering design. When it was completed in 1970, the line carried 1400 MW. Today, with state-of-the-art equipment added at the terminals, the line can carry 3100 MW.

A technique that has been in use for several years, but which deserves greater attention than it has received, involves the use of load cells at conductor suspension points to provide data for determining real-time ratings. The load cell measures line tension and, together with a special sensor that measures ambient air temperature, wind speed and solar radiation, provides data that are downloaded to the energy management system. A computer program then calculates the permissible ampere loading on the conductor for the existing real-time conditions. The loading value is displayed for the operator, who can then adjust load flow to match actual line conditions without resorting to the traditional conservative estimates for line ratings.
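The rating calculation itself can be sketched as a steady-state heat balance: the allowable current is the value whose I²R heating, plus solar gain, just equals convective and radiative cooling at the maximum permitted conductor temperature. The toy model below is a rough illustration, not the full IEEE Std 738 method; the conductor constants and convection coefficient are assumed values. It nonetheless shows why cool, breezy real-time conditions yield a much higher ampacity than the traditional conservative assumptions.

```python
import math

# Simplified steady-state heat-balance rating (illustrative sketch).
# The permissible current I satisfies  I^2 * R = q_c + q_r - q_s,
# where q_c is convective cooling, q_r radiative cooling and q_s
# solar heating, each in watts per metre of conductor.

R = 8.7e-5          # AC resistance at max operating temp, ohm/m (assumed)
DIAMETER = 0.028    # conductor diameter, m (assumed)
T_MAX = 75.0        # max allowable conductor temperature, deg C (assumed)
EMISSIVITY = 0.8    # (assumed)
ABSORPTIVITY = 0.8  # (assumed)
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)

def rating_amps(t_ambient, wind_speed, solar):
    """Ampacity for measured weather (toy convection coefficient)."""
    dt = T_MAX - t_ambient
    q_c = 10.0 * math.sqrt(wind_speed * DIAMETER) * dt        # convective
    q_r = EMISSIVITY * SIGMA * math.pi * DIAMETER * (
        (T_MAX + 273.15) ** 4 - (t_ambient + 273.15) ** 4)    # radiative
    q_s = ABSORPTIVITY * solar * DIAMETER                     # solar gain
    return math.sqrt(max(q_c + q_r - q_s, 0.0) / R)

# Traditional conservative rating assumptions: hot, still and sunny.
static = rating_amps(t_ambient=40.0, wind_speed=0.6, solar=1000.0)
# Real-time conditions reported by the line sensors: cool and breezy.
dynamic = rating_amps(t_ambient=10.0, wind_speed=3.0, solar=300.0)

print(round(static), "A static rating")
print(round(dynamic), "A real-time rating")  # substantially higher
```

In an actual installation the measured line tension would also feed back the conductor's real sag, closing the loop between the thermal model and the clearance limit that the rating protects.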

The Distribution System

Since no part of the electric system is more prone to storm damage than the distribution system, with its proliferation of primary and secondary circuits, utilities pay special attention to protecting these lines from storm damage. After a hurricane with 119-mph winds tore through western Oregon, downing trees and leaving more than 400,000 people without power, attendees at a conference on vegetation management brainstormed solutions to the downed-line problem. Diane Cowan, executive director of the Oregon People's Utility District Association (OPUDA), pointed out that, despite individual utilities spending close to US$1 million annually on tree trimming, "there seemed to be a lack of understanding that these utilities ... are up against incredible odds." To address the problem, the Federal Emergency Management Agency (FEMA) approved a hazard-mitigation grant to OPUDA to identify existing resources and best practices for protecting rights-of-way. One of the best practices recommended by West Oregon Electric, for example, would be to replace cross arms with a "trim line" construction that clusters lines on a steel pin insulator.

In a similar vein, the Electric Power Board of Chattanooga has enlarged its tree-trimming program by using "lateral" or "natural" tree trimming where limbs are removed at their nearest main branch or close to the trunk of the tree. Trees along the edge of the right-of-way that are severely leaning, dead or decayed are removed. The utility pays special attention to line clearances to ensure that trees do not grow back into the lines before crews can trim them again in the next tree-trimming cycle.

The Future

On-line statistical controls have long been used by industrial plants to assess quality trends of manufactured products. In the same way, electric utilities can devise statistically based reliability standards to simplify planning and to achieve high levels of system reliability. In this context, system design and the use of IEDs and power-conditioning equipment can provide the high-tech tools required to improve power quality. Additionally, continuity of service can be enhanced by paying close attention to line-loading limits in the transmission system and by using on-time maintenance principles to ensure that distribution systems remain secure.
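As an example of the kind of statistically based standard described above, the continuity-of-service indices of the IEEE 1366 family can be computed directly from outage records. The data below are invented for illustration.

```python
# Continuity-of-service indices (IEEE 1366 family) on made-up data.
# SAIFI = total customer interruptions / total customers served
# SAIDI = total customer interruption-minutes / total customers served
# CAIDI = SAIDI / SAIFI (average duration per interrupted customer)

TOTAL_CUSTOMERS = 50_000  # assumed system size

# Each sustained outage: (customers interrupted, duration in minutes)
outages = [
    (2_000, 90),
    (500, 240),
    (10_000, 30),
]

saifi = sum(n for n, _ in outages) / TOTAL_CUSTOMERS
saidi = sum(n * d for n, d in outages) / TOTAL_CUSTOMERS
caidi = saidi / saifi

print(saifi)            # 0.25 interruptions per customer
print(saidi)            # 12.0 minutes per customer
print(round(caidi, 1))  # 48.0 minutes per interrupted customer
```

Tracked year over year, such indices give planners exactly the trend data that industrial statistical controls give a factory: a quantitative target against which maintenance spending can be justified.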

Demonstrating the Value of Real-Time Ratings

In a monograph, Tapani O. Seppa, president of the Valley Group, Inc., of Ridgefield, Connecticut, U.S., warns of the danger of unnecessarily implementing contingency actions. He notes that transfer limits are based on first-contingency transfer capabilities. These contingencies may result from voltage, stability or thermal limitations. While voltage collapse or a loss of stability can cause an outage independent of weather conditions, low winds and high ambient temperatures coinciding with high ampere loads may drive conductor temperatures beyond permissible limits. Before the advent of real-time thermal monitoring systems, operators had to base decisions on assumed worst-case cooling conditions for the line, even though those conditions occur only rarely. Because actual conditions are usually far milder than assumed, operator actions taken during contingencies can needlessly aggravate network conditions.

As an example, in March 1996, operators of an unspecified group of utilities noticed that an unscheduled maintenance activity on a 230-kV line caused another line to load up to 120% of its established thermal limit. Because generation rescheduling did not reduce the load sufficiently, substation buses in three areas were sectionalized, resulting in an overload of another 230-kV line, which sagged into the underbuild and tripped out. At this point the original line, now loaded to 160% of its static rating, sagged into a tree, causing a cascading outage of seven other circuits. The outage affected several hundred thousand customers for several hours.

As shown by data from a CAT-1 Transmission Line Monitoring System, which had been installed on the line for engineering trials, the original line had never approached its thermal limit because of relatively high wind and low ambient temperature; the remedial action was therefore unnecessary. By the time the line was overloaded as a result of the sectionalizing, its conductor temperature had risen significantly above its emergency rating, causing it to sag into the tree in its right-of-way. With respect to system reliability, then, real-time capability monitoring allows operators to avoid unnecessary contingency actions, some of which can be counterproductive.

It Is Never Too Late or Too Early to Prepare for Disaster

"At least 60% to 70% of employees don't believe that a disaster can happen to them. The prevailing attitude is that 'if we don't talk about it, it won't happen,'" said Laura Kaplan, formerly an engineer at Florida Power & Light Co. and now president of LGK Associates, Inc., Sunrise, Florida, U.S., a firm that specializes in disaster preparedness. Kaplan's advice is to create a disaster plan that encompasses several requirements:

- Establish a team with a team leader who is authorized to make decisions based on a buy-in from top management.
- Maintain contact with other companies that have survived a disaster to learn what went right and what went wrong during their recovery period.
- Set up contingency contracts with vendors for restocking key inventory in case of emergency. If inventory is not available, the vendor should be able to acquire materials from other suppliers. Backup plans are important to ensure that replacement materials will be available.
- Build in flexibility to avoid limiting the decisions of key coordinators.

While the plan should outline all details of activity during the event, including emergency phone numbers, response equipment, employee assignments, and headquarters and staging-area layouts, crews should be able to make on-the-spot decisions without getting approval from a higher authority.

As important as it is to establish a plan, corporate commitment is a must:

- Allocate money to pay for equipment, training, disaster drills and consultants. Staff members from each department must also be allocated.
- Train and educate key coordinators, who will, in turn, be responsible for training the members of their individual units.
- Run disaster drills to provide an indication of what will work during the actual event. These drills should be held at least twice a year.
- Employ an outside observer, from a consulting firm or from a company that has been through its own disaster, to check on the plan's viability.

Above all, Kaplan stresses that information is vital for all participants. In particular, information shared among companies can help a utility avoid mistakes that others have already made.