Optimizing the electric grid requires integration with changing generation paradigms.
For more than 100 years, it has been an axiom of the power industry that bigger generating plants produce cheaper power, as measured in dollars per kilowatt of capacity. In fact, the economies-of-scale principle has driven most of the power industry’s technical development.
Utility engineers and planners of several decades ago would have been overcome with shock and awe had they glimpsed what was coming. Rooftop solar, backyard wind turbines, even small modular nuclear reactors (SMRs)... sure, some had visions of a distributed utility paradigm, but not to this extent.
The very idea that utilities would be encouraging private ownership of generation attached to the utility-owned grid would make some utility retirees spin in their lounge chairs.
Ironically, this vision of the future would probably have delighted the guy who founded the first electric power system. Thomas Edison’s low-voltage DC system could only distribute power within about a mile radius from the generator, so he built many small generation units.
Today we call that distributed generation and think of it as a modern concept!
But this paradigm of small, close-to-the-customer generation units soon disappeared. Technical ingenuity, growing electrical demand, and economics pointed to the direction the industry would follow for almost a century.
Maybe it all started with Charles Parsons of England.
The first electric generators, developed in the late 1800s, were powered by reciprocating steam engines, mechanically similar to today’s reciprocating combustion engines. The pistons, driven by expanding steam, turned a crankshaft, producing rotary motion that was smoothed out by a huge flywheel and eventually turned the electric generator. The horsepower, and thus the available electrical power, was roughly proportional to the physical size of the engine. That means if you double the size, you double the power. But you also double the noise and more than double the weight and manufacturing cost.
Then, in 1884, Parsons invented the steam turbine. Use any fuel or heat source to produce steam, let it blast through the turbine vanes and you produce direct rotary motion for the generator. This is a far simpler and more reliable means of converting thermal energy to kilowatt-hours compared to steam reciprocating engines.
The steam turbine has another characteristic that shaped the direction of twentieth-century electrification. Simple laws of physics dictate that the power output capability of a turbine is roughly proportional to the square of its size – double the size and you quadruple the output capacity.
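The scaling argument above can be sketched numerically. The figures below are purely hypothetical, chosen only to show the shape of the economics the article describes: if a turbine's capacity grows with the square of its size while build cost grows only roughly linearly, then dollars per kilowatt fall steadily as plants get bigger.

```python
# Illustrative sketch of the two scaling laws described in the text.
# All constants are hypothetical placeholders, not real plant data.

def reciprocating_capacity_kw(size):
    """Reciprocating engine: output roughly proportional to size."""
    return 100.0 * size

def turbine_capacity_kw(size):
    """Steam turbine: output roughly proportional to the square of size."""
    return 100.0 * size ** 2

def build_cost_dollars(size):
    """Assume (hypothetically) that build cost grows roughly linearly."""
    return 50_000.0 * size

for size in (1, 2, 4):
    cap = turbine_capacity_kw(size)
    cost_per_kw = build_cost_dollars(size) / cap
    print(f"size {size}: {cap:,.0f} kW at ${cost_per_kw:,.2f}/kW")
# → size 1:   100 kW at $500.00/kW
# → size 2:   400 kW at $250.00/kW
# → size 4: 1,600 kW at $125.00/kW
```

Doubling the size quadruples the turbine's output but (under this assumption) only doubles its cost, so the cost per kilowatt halves — exactly the incentive to keep building bigger.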
In the U.S., the Chicago Edison Company started the big power plant trend with a 5 megawatt GE turbine generator in 1903, followed by several 12 megawatt turbines in 1911. Electricity became inexpensive, attracting more consumers which in turn provided capital to build even more and bigger generators.
As power plants got bigger and noisier, they needed to be located further from cities. This led to another industry milestone, the universal use of alternating current (AC) which could be transformed to higher voltages in order to pump electrons over long lines with high efficiency.
One of the first high-voltage transmission lines was built in 1896, and it had nothing to do with steam turbines. It was born of the need to connect Buffalo, NY to hydroelectric generators at Niagara Falls, 20 miles away. Following the famous “War of the Currents,” which pitted Nikola Tesla and George Westinghouse against Edison, the line was constructed and performed as predicted, to everyone's (except perhaps Edison's) relief.
Propelled by the Niagara success, alternating current rapidly replaced Thomas Edison’s original low voltage, direct current distribution system design. Now it really made sense to build big plants connected to big transmission lines. Power lines and towers became an acceptable part of the skyline.
The economics dictated by utility regulation was also a major encouragement to build big. The Public Utility Holding Company Act of 1935 (PUHCA) more-or-less guaranteed utilities a rate-of-return on invested capital. That lowered the risk to investors and allowed the industry to raise funding to provide nationwide electrification.
PUHCA virtually erased the risk and guaranteed the rewards of building large capital projects.
The economies of scale of large central generation peaked and leveled off in the mid-1900s, leaving the U.S. with about 130 coal plants of over 1,000 megawatts capacity and 29 plants over 2,000 megawatts. Many of the 104 U.S. nuclear reactors have a capacity of at least 1,000 megawatts. All of these behemoths attach to big transmission lines that march off to population centers.
In 1978 the Public Utility Regulatory Policies Act (or PURPA) required utilities to purchase non-utility generated power. Almost by definition, non-utility generators (NUGs) aren’t built as large as utility-owned plants — too much financial risk. Since PURPA requires utilities to pay for the power on an avoided-cost basis, this is usually a good deal for the NUGs, which are sometimes referred to as “PURPA machines.”
Even the utilities got into the downsizing game with smaller utility-owned distributed generation, particularly aero-derivative gas turbines, which became practical and economic for peaking.
All this set the scene to utilize renewable, distributed resources as they became available. And, boy, are they becoming available.
Ratepayer- and tax-funded incentives have masked some of the downsides of distributed renewable generation. However, I have colleagues who argue that being overly optimistic is okay and that we need to go to extremes so we can return to an economical and technically feasible balance. They passionately believe that we’ve gone “big” for way too long.
In some folks’ vision the large base loaded generation plants will eventually disappear to be replaced by millions of small renewable generation units, all optimally controlled and coordinated through the magic of Smart Grid, or Optimal Grid technologies.
Maybe, but I don’t think that will quite happen in my lifetime. If it does, I’ll probably find myself twitching and spinning in my retirement lounger.
About the Editor
Paul earned his B.S. and an M.S. in electrical engineering from the University of California-Berkeley and is a registered professional engineer. He has worked in the energy industry for more than 25 years, developing and implementing advanced energy technologies. As research director for Pacific Gas and Electric Co. he pioneered methodologies used in the design, maintenance and control of energy delivery systems. As a consultant he has provided guidance to utilities and the vendor community, nationally and internationally. Email him with comments: Paul.Mauldin@penton.com