In a Dec. 2 article in the Los Angeles Times titled “Power Struggle,” Evan Halper thoughtfully and correctly highlighted many of the challenges of integrating intermittent power sources on the grid. However, a theme of the article was that the United States has "a grid designed for the previous century," which is a "massive patchwork of wires, substations and algorithms." Halper’s prose is almost poetic, but it furthers a common misconception: that just because something as large and venerable as a continent-spanning power grid is complex, it is necessarily unstable and fragile. In fact, judged by the 99.98% reliability with which power is delivered to U.S. citizens from coast to coast, the grid is a marvel of reliability.
Something as large and venerable as a national power grid comprises millions of long-lived components spanning generations of evolving technology. At every stage of its past and future life, newer and older technology work together in a state of perpetual recapitalization. Bringing the entire system “up to date” all at once is no more realistic a proposition than tearing up and repaving all the nation’s roads at once. As long as technological innovation keeps marching on, there will always be a mix of technologies incorporated into something of such a scale that it can only be recapitalized over decades. The question is how skillfully and wisely recapitalization is being performed. Rick Bush of Transmission & Distribution World magazine wrote a carefully researched piece on "Maximizing Our Aging T&D Assets" on Nov. 18. In it he cites a Brattle Group study for the Edison Foundation that predicts a need to spend up to $2 trillion on the U.S. grid before 2030. Both of these articles, by Halper and Bush, motivated me to do some further research.
According to the Rocky Mountain Institute, the U.S. electric grid currently has about $2 trillion in present value, of which $900 billion is generation, $100 billion is transmission, and $1 trillion is distribution. According to ASCE, annual U.S. capital investment across the first decade of this millennium averaged $35 billion for generation, $8 billion for transmission, and $20 billion for distribution. With a little arithmetic (and ignoring the fraction for growth), we see that we have been recapitalizing generation on a roughly 26-year pace, transmission on a 13-year pace, and distribution on a 50-year pace. Taken as a whole, the $63 billion in combined annual spending against $2 trillion in assets corresponds to a roughly 32-year pace. While transmission has been the sexy place to sink money of late, particularly in support of adding new "renewable" generation capacity, distribution has been relatively neglected.
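The pace arithmetic above can be checked in a few lines. The asset values and annual investment figures are the RMI and ASCE numbers just quoted; this is a back-of-envelope sketch, not an authoritative model:

```python
# Back-of-envelope check of the recapitalization-pace arithmetic.
# Asset present values (Rocky Mountain Institute) and average annual
# capital investment (ASCE), both in billions of dollars.
value = {"generation": 900, "transmission": 100, "distribution": 1000}
invest = {"generation": 35, "transmission": 8, "distribution": 20}

# Pace = years to fully recapitalize each segment at current spending.
for segment in value:
    pace = value[segment] / invest[segment]
    print(f"{segment}: ~{pace:.0f}-year recapitalization pace")

# Blended pace across the whole grid.
overall = sum(value.values()) / sum(invest.values())
print(f"overall: ~{overall:.0f}-year pace")
```

The blended figure works out to roughly 32 years, which makes the 50-year distribution pace stand out as the laggard.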
In military logistics, distribution corresponds to what is known as "the last tactical mile," which is where the break in the supply chain most often occurs. Fleets of sealift ships and airlift aircraft reliably get supplies to the ports, and fleets of trucks get materiel to inland depots, but the plan from that point forward is generally a bit fuzzy. There are no large platforms or procurement projects to address this perennial distribution challenge, and no unified constituencies to lobby on behalf of the end users. The guys in the depots stereotypically live like kings "in the rear with the gear" while those on the front lines beg for fuel, ammo, and food.
Likewise, transmission has been the beneficiary of much largesse in recent years while distribution has languished. Costly efforts like the $6.9 billion Competitive Renewable Energy Zone (CREZ) in Texas (3,600 miles of line @ $1.9 million/mile) and the $1.9 billion Sunrise Powerlink in California (117 miles @ $16 million/mile) have been possible only because of federal and state subsidies, tax breaks, and public service commission-guaranteed ROI for the utilities to be collected from rate payers as premiums on their bills. The bulk of this huge CAPEX spending has been to support adding more commercial-scale wind and solar power to the grid. These resources tend to be sited far from load centers, away from existing lines, and in low power-density configurations spread across thousands of acres. It is economically troubling that most of this spending represents pure capacity growth instead of recapitalization, because existing generation and transmission must still remain in place to carry the power that backs up and buffers the new intermittent capacity. Even more troubling is that the capacity factor for much of this new transmission will necessarily mirror the low national capacity factors of commercial-scale wind (0.31) and solar (0.19).
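The per-mile figures quoted in parentheses above follow directly from the project totals:

```python
# Sanity check of the per-mile transmission costs quoted in the text.
crez_cost, crez_miles = 6.9e9, 3600        # CREZ, Texas
sunrise_cost, sunrise_miles = 1.9e9, 117   # Sunrise Powerlink, California

print(f"CREZ:    ${crez_cost / crez_miles / 1e6:.1f}M per mile")
print(f"Sunrise: ${sunrise_cost / sunrise_miles / 1e6:.1f}M per mile")
```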
Of the $20 billion that was spent on distribution in the past decade, $4.9 billion was federal stimulus grant money for “smart grid” demonstration and implementation under the American Recovery and Reinvestment Act (ARRA). The value of the supervisory control and data acquisition (SCADA) portions of smart grid technology has long been established, and electric utility operators chose to use some of these funds to buy new smart components like phasor measurement units and automated feeder switches. But the lion’s share of ARRA funds has gone to customer systems, specifically advanced metering infrastructure (AMI). The “shovel-ready” urgency of ARRA spending resulted in a flurry of AMI deployments before the technologies were mature and before the adoption of standards for interoperability. $4.1 billion in ARRA money was spent installing 15 million smart meters (about $270 per meter), representing just under 11% national coverage. With other funding, the utilities have pressed forward to achieve more than 50% coverage by the end of 2013. Yet many utilities still have partial implementations of AMI and are struggling to overcome barriers such as differing communications and data protocols, unreliable connectivity, and customer wariness. Recent Black & Veatch and Tantalus surveys indicate that many in the industry are having difficulty communicating the specific benefits of AMI to their customers and to themselves, and are still trying to turn the technology into a good business case. The consumer perspective is also problematic.
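A quick check of the smart-meter arithmetic. The implied national meter count is a derived estimate from the quoted 11% coverage, not a sourced figure:

```python
# Checking the smart-meter figures quoted above.
arra_spend = 4.1e9       # ARRA dollars spent on AMI installations
meters_installed = 15e6  # smart meters installed with that money
coverage = 0.11          # stated share of national meters covered

cost_per_meter = arra_spend / meters_installed
# Derived estimate: if 15M meters is ~11% of the total, the national
# meter population implied by these figures is:
implied_total_meters = meters_installed / coverage

print(f"cost per meter: ~${cost_per_meter:.0f}")
print(f"implied national meter count: ~{implied_total_meters / 1e6:.0f} million")
```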
First and foremost, it is hard to convince customers that something is cost-effective and for their benefit when the model for implementing it is to fine them $10-$40 a month for non-adoption. Customers were already paying for manual meter reading and other inherent analog inefficiencies in their old rates, so logic would dictate that if the new digital technology actually does deliver cost savings, these should be passed on to customers in the form of discounts on their rates or incentives for adoption rather than additional charges for refusing. It is also worth considering that one person’s “smarter” is another person’s “invasive.” AMI can be perceived as another privacy-stomping scheme in which the power company gets to learn intimate details of all its customers’ behavior and aggregate that retail data for its own purposes, while the customers only get their own data back, filtered and packaged as the utility sees fit. Consider how a utility would feel if its customers were reciprocally allowed to see its real-time reserve margins, fuel-switching decisions, wholesale power purchases, and reliability and economic performance metrics compared to those of its competitors. What if residential customers were able to watch in real time how close the generation margins were during the recent cold spikes, and to see that much of the generation capacity keeping the heat and lights on through the 6 a.m. morning peaks, when renewables were yet to wake up, was coal plants scheduled to be retired in coming months due to new EPA regulations? Such information would certainly be empowering to customers, making them smarter consumers and helping them raise the right issues with their utility regulators and elected representatives.
Customers may soon be able to reconstruct such data without their utility’s cooperation by collaboratively sharing their own smart meter data with each other or with a third party like Genscape that specializes in collecting and synthesizing many inputs into real-time energy intelligence. The intimacy and transparency of the smart grid may well become more mutual in the near future.
For now, local power companies handling distribution are at the front lines of interacting with Americans in the intimate environment of their homes and offices. Some benefits of AMI clearly serve the interests of both consumer and provider (e.g., improved outage detection, remote meter reading, reduced truck rolls, improved customer service, pre-paid metering options), but others can appear intimidating rather than liberating to the consumer (e.g., remote disconnection/reconnection, consumer load profiling, consumer load shaping with TOU rates, etc.). Making AMI a good business case for both the utility and the customer remains as much a social problem as a technical one. In my opinion, the jury is still out on whether the billions spent on smart meters would have been better spent recapitalizing meat and potatoes components like breakers and regulators and transformers that have also gotten smarter and contribute more directly to grid performance and resilience.
So, if recapitalization funds have not been optimally spent in recent years, where does that leave the grid at this moment? The ultimate criterion for judging something is performance. The US grid currently comprises 9,200 generating units and 300,000 miles of transmission lines serving an average demand of 500 GW and peak summer loads of 800 GW. The average American consumes electricity at an annualized rate of 1,400 watts (500 watts to their home and 900 watts to their workplace and leisure activities) on a 24-7-365 basis, rain or shine, summer or winter, whether living in the sparsely populated southern deserts or concentrated in the high-rises of Manhattan. American power consumers furthermore feel entitled to run their space heaters and air conditioners and clothes dryers and water heaters and electric car chargers all at the same time, regardless of whether their neighbors are doing the same, thus creating huge peak loads during certain hours of the day. To top it off, they expect and receive this power at 1/3 the cost that much of Europe pays for each kilowatt-hour. Rather than being fragile or blackout-prone, today’s grid meets all these demands with the previously mentioned 99.98% reliability, which corresponds to roughly 1.75 hours of outage per year -- and such outages are predominantly weather-related rather than system faults. This is not the performance of a system “designed for the last century.”
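The reliability and per-capita figures above translate into concrete terms as follows:

```python
# Converting the quoted reliability and consumption figures.
HOURS_PER_YEAR = 8760

# 99.98% availability leaves 0.02% of the year without power.
reliability = 0.9998
outage_hours = (1 - reliability) * HOURS_PER_YEAR
print(f"99.98% reliability -> ~{outage_hours:.2f} outage hours per year")

# An annualized draw of 1,400 W, sustained around the clock, in kWh/year.
per_capita_watts = 1400
annual_kwh = per_capita_watts * HOURS_PER_YEAR / 1000
print(f"per-capita consumption: ~{annual_kwh:,.0f} kWh per year")
```

That 1,400 W figure works out to more than 12,000 kWh per person per year, delivered with under two hours of interruption.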
Today's U.S. power grid has evolved along a sensible and economic path in tandem with the needs of the nation to reliably and economically serve a 21st century post-industrial population of consumers with very high expectations. The optimal path to meet the twin criteria of extreme reliability and universal coverage has been to build large, ever more efficient and clean power plants, generally sited close to load centers, with dispatchable output to follow fluctuating demand, and with many days of on-site fuel storage to enable continuous operations during periods of severe infrastructure disruption. From the very beginning, multiple generators and multiple loads have been synchronously linked together for improved stability. In the 1930s it was recognized that the grid was the most efficient way to extend the unbeatable economies of scale and concentration of large power plants out to the low-density customer base of rural America, which would otherwise have gone unserved except for the rich who could provide their own generators. That logic and economics still apply today. These local grids have naturally grown and fused into the three huge synchronous grids that today embrace nearly all of the USA and Canada. Through these three regional grids and their nine asynchronous interconnections, power flows from peaks of over-generation to valleys of unmet demand to balance out instabilities in minutes, seconds, and even fractions of a second. Yet suddenly there are many people fomenting panic about this marvel of engineering and claiming it will imminently collapse from obsolescence and age without the investment of trillions of dollars to bring it up to date.
Rather than witnessing the decaying performance of an aging grid, a better characterization of what is happening today is that a reasonably effective and efficient bulk power system is being forced to give up much of its lowest-cost and optimally-located coal generation while simultaneously being forced to accommodate a new class of power generators that produce less valuable and less compatible power. Both of these phenomena have channeled investment away from true recapitalization and into spending on new generation and transmission that are largely duplicative of existing capacity, yet cannot replace existing capacity.
The outputs of solar panels/modules and wind turbines vary uncontrollably by season, by hour, and by second across up to 100% of their output range. Each of these individual generators produces non-load-following, low capacity factor, asynchronous power diffused across a large geographic area that must be collected and conditioned. These energy resources tend to be located far from large power customers and existing grid infrastructure, requiring new transmission lines. In most states the grid operators are forced to take all the power wind and solar produce regardless of concurrent demand under policies that undercut any pretense of economic dispatch, and at prices grossly distorted by subsidies that exceed 1.8 cents/kWh for wind and 3.0 cents/kWh for solar in federal money, in addition to state assistance. The levelized cost of electricity (LCOE) for solar and wind is forecast to remain higher than that of conventional alternatives, and this is without consideration of the energy and monetary costs of curbing their intermittency with sufficient storage to achieve minimum standards of grid-compatibility. In practice, it is found that these intermittent generators cannot replace conventional generators, but instead actually require that additional thermal generation capacity be added to the grid, and this new thermal capacity must also have the characteristic of very rapid ramp rates.
The irony is that it is the newest infrastructure -- renewable generation -- that is the most compromising of grid stability and reliability and which is imposing costs that siphon recapitalization funds away from the greatest need. There is a lot of creative "Enron accounting" going on to try to hide the full cost of renewables, principally by socializing them to the rest of the grid. But the effects are inescapable from a macro view:
- It cannot be ignored that we are building new thermal generating units and adding net capacity when overall load has dropped from 2008 peaks and is nationally not forecast to return to those levels for years;
- The location and low capacity factor of new transmission lines reveal they are being added largely to solve congestion created by these new “renewable” generators and by retiring other plants before the end of their service lives for regulatory reasons;
- The RTOs and ISOs responsible for balancing load, interchange, and generation cannot miss the fact that they are running more spinning reserve for regulation and disturbance recovery;
- All of the above argue that we are not spending our precious recapitalization dollars where they buy us the most economic utility.
So what is a better way forward? First, we need to properly count and allocate all costs to their true sources. LCOE for any generator should include the grid integration costs and burdens necessary for it to achieve a minimum threshold level of dispatchability, so as not to drag down the stability of the grid as a whole. Dispatchability and intermittency are opposite sides of the same coin, and an increase in one demands an increase in the other to maintain the same degree of grid resilience. One way to calculate the cost of intermittency is to price the storage needed to compensate for it. When that is done, even using the most cost-effective storage option today – pumped-hydroelectric – wind and PV solar are revealed to be far from competitive economically, or in energy return on investment (EROI).
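To make that storage-pricing argument concrete, here is a minimal sketch of a storage-adjusted LCOE. Every input below is a hypothetical placeholder chosen for illustration only, not a sourced figure; the point is the structure of the calculation, not the specific numbers:

```python
# Illustrative sketch: folding the cost of firming storage into an
# intermittent generator's LCOE. ALL inputs are hypothetical placeholders.
wind_lcoe = 0.05                # $/kWh, assumed bare LCOE of a wind plant
storage_cost_per_kwh = 0.10     # $/kWh cycled, assumed pumped-hydro lifetime cost
round_trip_efficiency = 0.80    # assumed pumped-hydro round-trip efficiency
fraction_through_storage = 0.5  # assumed share of output that must be time-shifted

# Energy routed through storage pays the storage cost and suffers
# round-trip losses; energy delivered directly does not.
firmed_lcoe = (
    (1 - fraction_through_storage) * wind_lcoe
    + fraction_through_storage
    * (wind_lcoe / round_trip_efficiency + storage_cost_per_kwh)
)

print(f"bare LCOE:   ${wind_lcoe:.3f}/kWh")
print(f"firmed LCOE: ${firmed_lcoe:.3f}/kWh")
```

Even with these placeholder inputs, the firmed cost comes out roughly double the bare LCOE, which is the shape of the penalty the text describes.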
Secondly, we need to move away from a pure cost model to also embrace the concept of value (aka “utility”). For example, a consumer will pay more for energy that is reliably available 24-7-365, for power that robustly matches the peaks and valleys of their demand, and for consistent quality in voltage, frequency, and waveform. Conversely, power is less valuable if it is intermittently available, is delivered with non-coincident peaks and valleys, and varies in other metrics of quality. The latter case requires customers to make accommodation with demand-response, time-shifting, or interruptibility, as well as to maintain local assets for power generation and conditioning if necessary. Similarly, outgoing power placed on the grid at a residential meter does not have the same value as incoming power delivered to a residential meter; this is the difference between wholesale and retail – the cost burden of collecting power from many diffuse sources on an intermittent, must-take basis and delivering it to a different diffuse set of customers on an on-demand basis. All kilowatt-hours are not fungible. The lower inherent value of power from intermittent generation should command a lower price (or at least it would in a free market).
Pricing that recovers costs without considering the value of the product considers only the interests of the producer and neglects the interests of the consumer. This one-sided practice is unfortunately the status quo and can be seen in how rates are currently designed by the power producers and deliverers and negotiated with public utilities commissions on the basis of cost-recovery, rather than being designed by the consumers and justified on the basis of value of energy service received. This pattern of considering only cost and ignoring value, combined with market-distorting subsidies and policies that count all kWh as perfectly fungible regardless of their quality or delivery timeliness, has done damage to grid efficiency and is predictably and measurably leading the industry to provide lower-value power at higher prices.
The electric power industry needs to stand up and make the business case for sound recapitalization spending to regulators and policy-makers. The hard data show that energy efficiency and EROI (the energy efficiency of energy production) are the principal metrics linked to primary costs (capital, fuel, O&M, G&A) as well as secondary costs (natural resources, environment, climate, health, safety). The poor EROI and dismal power density of wind and solar terribly undercut their claim of full lifecycle savings in either primary or secondary costs. The devices that capture wind and solar energy each begin operation with a huge debt of embedded energy and pollution and GHG emissions for all their steel and glass and concrete and rare earths, and this debt is difficult to pay back because of their very low lifetime power production. Such regulatory measures as carbon taxes and other "social costs," if logically and equitably applied to the full cap-to-recap lifecycles of competing generation options instead of just the operational phase, and if normalized to the amount and quality of energy delivered, yield results contrary to current political tides of opinion. The sun and wind are free, but the devices that capture their energy are expensive and are themselves not truly renewable, because their construction depends upon high power-density fossil fuels for the necessary mining, manufacturing, and emplacing of components in a cycle that must be repeated every 20 or 30 years.
The bottom line is that, if we are going to provide a grid that delivers high-value, 21st century power while protecting the planet we all love and depend upon, we need to operate with facts and not feelings, and do what actually works rather than what sounds good.
Todd "Ike" Kiefer is Contracts Administrator at East Mississippi Electric Power Assoc., [email protected].