Resilience is finally taking center stage in the electric power industry. As the electric grid ages and faces increasingly extreme weather, which imposes significant costs on electric utilities and the customers they serve, society recognizes the need to significantly increase investment in resilience. The U.S. Infrastructure Investment and Jobs Act (IIJA) of 2021 confirms this push to strengthen electric grid resilience, allocating more than US$47 billion to resilience, including cybersecurity.
Using a nontechnical definition, resilience quantifies and qualifies the electric grid’s performance during and after an extreme event. The more customers remain with power during an event and the quicker power gets restored to those who lost it, the better the resilience performance.
Optimal system design delivers the best system performance at minimum cost, both during and after a series of extreme events. Such design goes far beyond traditional utility planning practices and engineering capabilities; it requires a data-driven approach and the use of advanced analytics. This represents a major shift in system design philosophy: simultaneous optimal design across the asset management, reliability planning, capacity planning, customer care and system operations functions, over a larger geographical area and in a coordinated fashion between transmission and distribution teams.
Performance Matters
Resilience builds are often costly and require thorough analysis to determine the most cost-effective solutions, ensuring the system is resilient and performs reliably and predictably. While scoping resilience improvement projects is relatively straightforward, quantifying the return on newly invested capital (RONIC), especially in terms of system performance during extreme events, is a complex, data-driven task. Without a clear understanding of resilience RONIC, utilities may allocate significant capital inefficiently, and regulators may struggle to approve capital expenditure proposals because the expected performance outcomes at different cost levels are unclear.
Advanced Analytics
Luckily, today’s technological advancements and optimization solvers can find optimal solutions to extremely complex problems faster, cheaper, and without approximation. In the case of resilience, the optimal solution, which balances minimal investment and optimal system performance, can be derived using a mathematical model that represents an entire study area while satisfying all constraints, such as voltage and thermal power flow limits. This applies not only during an event but also during system restoration when numerous system reconfigurations occur. The design decisions result from optimal solutions that take into account system responses, restoration activities, and costs that occur over the entire duration of an event, which can sometimes span multiple days.
Unbalanced optimal power flow is necessary to find solutions quickly with high solution accuracy. Searching for an optimal solution can be quite computationally intensive, depending on the number of options considered. Resilience strategies may include one or more of the following: hardening, sectionalizing, undergrounding, distributed generation, energy storage placement, or microgrid formation. For example, on a relatively small distribution system with three distribution feeders, there could be more than 8,000 binary decisions with dependencies, creating a vast number of decision-making combinations regarding which poles to harden and where to further sectionalize the system. This is a challenging task even without considering power flow constraints, but it becomes far more complex when utility staff must ensure that no voltage or thermal issues will occur as the system goes through the reconfiguration process. This is why the most challenging aspect of resilience design is optimizing the solution for system performance during an extreme event and the system restoration process.
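The flavor of this binary decision problem can be sketched in miniature. The example below is purely illustrative: four hypothetical pole groups, made-up costs and failure probabilities, and an exhaustive search standing in for the mathematical solvers that handle thousands of coupled decisions and power flow constraints in a real study.

```python
from itertools import product

# Hypothetical example: choose which of four pole groups to harden so that
# hardening spend plus expected outage cost is minimized. All figures are
# illustrative placeholders, not utility data.
# (group, hardening cost $k, failure probability if NOT hardened, customer-hours lost)
pole_groups = [
    ("G1", 120, 0.30, 4000),
    ("G2", 200, 0.25, 20000),
    ("G3", 80,  0.40, 5000),
    ("G4", 150, 0.20, 6000),
]
COST_PER_CUSTOMER_HOUR = 0.05  # $k per customer-hour, assumed societal cost rate

def total_cost(decisions):
    """Hardening spend plus expected outage cost for one decision vector."""
    cost = 0.0
    for harden, (_, capex, p_fail, cust_hours) in zip(decisions, pole_groups):
        if harden:
            cost += capex  # hardened groups assumed not to fail in this sketch
        else:
            cost += p_fail * cust_hours * COST_PER_CUSTOMER_HOUR
    return cost

# Exhaustive search over all 2^4 decision vectors; at realistic scale,
# mixed-integer optimization solvers replace this enumeration.
best = min(product([0, 1], repeat=len(pole_groups)), key=total_cost)
print(best, round(total_cost(best), 1))  # (0, 1, 1, 0) 400.0
```

Even here, hardening only pays for the groups whose expected outage cost exceeds the capital outlay; dependencies between decisions and network constraints are what push the real problem beyond enumeration.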
Clearly, this is a task where machines outperform humans. With the use of augmented intelligence and advanced analytics, it is possible to find optimal solutions for such complex problems while addressing several event scenarios in a relatively short time, at a lower cost, and with fewer human resources.
In a three-feeder example, it may take only several hours of computing time to find an optimal solution that satisfies power flow constraints during and after an extreme event using augmented intelligence and advanced analytics. This is in stark contrast to what would otherwise be weeks of delivery time, significant labor costs, and unavoidable process simplifications that could be detrimental. Problem simplifications related to the interplay between investment decisions and restoration/societal costs can have serious consequences.
Optimizing for upfront investment in system upgrades, utility restoration costs, and societal costs due to prolonged power loss can significantly reduce the overall cost in the long run. In some scenarios, optimal upfront investment solutions can result in a two- to three-fold reduction in overall costs for extreme weather events, such as a Category 4 hurricane. However, the main bounding constraint in the optimization process and overall investment decision-making should be societal willingness to pay year over year for such system performance. In business terms, society should only seek new investments in solutions that provide the best resilience performance RONIC while ensuring that long-term rates remain affordable for ratepayers.
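The tradeoff described above can be made concrete with a toy comparison. All dollar figures below are hypothetical, chosen only to illustrate how a larger upfront investment can roughly halve the overall cost of a severe event once restoration and societal costs are counted.

```python
# Illustrative comparison of two investment plans for one hurricane scenario.
# All values are hypothetical, in $ millions.

def lifecycle_cost(upfront, restoration, outage_hours, societal_rate):
    """Upfront capital + utility restoration cost + societal cost of lost load."""
    return upfront + restoration + outage_hours * societal_rate

# Plan A: minimal hardening -> long outages and expensive restoration.
plan_a = lifecycle_cost(upfront=20, restoration=60, outage_hours=120, societal_rate=1.5)
# Plan B: targeted hardening and sectionalizing -> faster restoration.
plan_b = lifecycle_cost(upfront=70, restoration=25, outage_hours=20, societal_rate=1.5)

print(plan_a, plan_b, round(plan_a / plan_b, 2))  # 260.0 -> 125.0, about 2x lower
```

In practice, the societal-willingness-to-pay constraint would cap how far Plan B's upfront spend can grow.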
The use of augmented intelligence and advanced analytics is inevitable. Alongside change management, data availability and quality remain the key challenges.
Data-Driven Design
Resilience design is also heavily data-driven because the planning process aims to identify idiosyncratic and systematic risks, and define solutions to hedge against those risks. These risks must be quantifiable, which requires a significant amount of quality historical data and reliable forecasts, including but not limited to the following:
- Geographical information system (GIS) data of individual T&D poles (for example, material, class, height and guy-wire attachment) and attached equipment (for example, transformers, capacitors and regulators) on each of the poles.
- Distribution circuit power system topology and characteristics, such as those used in CYME and Synergi Electric models.
- Asset condition and exposure to risk in the field, such as pole leaning angle and vegetation proximity.
- Load growth and mix forecasts, including electrification and corresponding categorization of customer sensitivity to power loss.
- Societal and geospatial costs for all customers.
- Geospatial network upgrade costs, such as undergrounding and pole upgrade/replacement costs.
- Restoration times, crew availability for given events and hourly rates.
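One way to picture the first two inputs above is as a joined per-pole record. The sketch below is a minimal, hypothetical schema; the field names are illustrative and do not reflect any utility's actual GIS or inspection data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PoleRecord:
    """One T&D pole after joining GIS and field inspection data.
    All field names are illustrative, not an actual utility schema."""
    pole_id: str
    lat: float
    lon: float
    material: str             # e.g., "wood", "steel", "concrete"
    pole_class: int           # structural class from GIS
    height_ft: float
    guyed: bool               # guy-wire attachment present
    attached_equipment: list = field(default_factory=list)  # transformers, capacitors, ...
    lean_angle_deg: Optional[float] = None      # asset condition, from inspection
    vegetation_nearby: Optional[bool] = None    # exposure to risk in the field

pole = PoleRecord("P-0001", 18.22, -66.59, "wood", 4, 45.0, True,
                  ["transformer"], lean_angle_deg=2.5)
print(pole.pole_id, pole.material, pole.attached_equipment)
```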
While utilities continuously collect and store data records, data is often incomplete or inaccurate, posing a major challenge for electric utilities in resilience planning. Often, the process requires data conditioning using advanced analytics. For example, GIS pole geo-locations may not align with distribution model line section geo-positioning, preventing proper pole-to-line section mapping. Using advanced analytics, the pole-to-power flow model mapping process can improve accuracy to over 99% in only one to three seconds per distribution circuit, depending on the distribution circuit’s size.
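The pole-to-line-section mapping step can be sketched as a nearest-segment assignment. This is a deliberately simplified version, using planar coordinates and brute-force distance checks, whereas production mapping handles geodetic coordinates, phasing and ambiguous matches.

```python
import math

def point_segment_dist(px, py, ax, ay, bx, by):
    """Euclidean distance from point (px, py) to segment (ax, ay)-(bx, by)."""
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    # Parameter t of the closest point on the segment, clamped to [0, 1].
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def map_poles(poles, sections):
    """Assign each pole to its nearest line section.
    poles: {pole_id: (x, y)}; sections: {section_id: (ax, ay, bx, by)}."""
    return {
        pid: min(sections, key=lambda sid: point_segment_dist(x, y, *sections[sid]))
        for pid, (x, y) in poles.items()
    }

# Toy data: two horizontal line sections and two poles.
sections = {"S1": (0, 0, 10, 0), "S2": (0, 5, 10, 5)}
poles = {"P1": (3, 1), "P2": (7, 4)}
print(map_poles(poles, sections))  # {'P1': 'S1', 'P2': 'S2'}
```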
Analytics depends on data. As extreme event simulations unfold, the probability of asset failure is established for each pole and circuit section independently. For instance, each distribution pole failure risk model accounts for wind and ice loading impacts, including attached equipment like capacitors, regulators and third-party conductors. Numerous extreme events are then simulated as time series, and advanced analytical methods search for the most cost-effective system design across geographical space and event time.
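A common way such per-pole failure probabilities are modeled is a fragility curve; one standard form is a lognormal CDF of wind speed. The sketch below uses that form with placeholder parameters; the median capacity and dispersion are illustrative assumptions, not calibrated values, and real models would also fold in loading from attached equipment and ice.

```python
import math

def failure_probability(wind_mph, median_capacity_mph=110.0, beta=0.25):
    """Lognormal fragility curve: P(fail | wind) = Phi(ln(v / median) / beta).
    median_capacity_mph and beta are illustrative placeholder parameters."""
    if wind_mph <= 0:
        return 0.0
    z = math.log(wind_mph / median_capacity_mph) / beta
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for v in (70, 110, 150):
    print(v, round(failure_probability(v), 3))
```

By construction, failure probability is 0.5 at the median capacity and rises steeply around it; a simulated storm's hourly wind field would be passed through curves like this, pole by pole.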
In addition to machine execution, the resilience process heavily relies on an effective and efficient process involving people from different parts of the organization.
Consensus Building
Utility department activities and priorities are highly dynamic. Slow and asynchronous activities across various departments can delay timely consensus building. Resilience design is even more complex and cross-department dependent. Therefore, time is critical, and solutions must involve all departments simultaneously for assessment and refinement. Otherwise, a partial and sequential resilience design will likely result in lower system performance and higher investment and operational costs, not to mention the impact on societal costs.
Work In Practice
LUMA Energy LLC of Puerto Rico piloted a resilience design framework developed by Quanta Technology, a Quanta Services Inc. company. As a tropical island, Puerto Rico is periodically exposed to storms and occasional hurricanes. Following Hurricane Maria in 2017, Puerto Rico’s utility at the time worked diligently to restore the electric grid, but it took months to bring some customers back online. The post-Maria system recovery was more of a Band-Aid solution rather than a comprehensive rebuild.
When LUMA, a joint venture between Quanta Services and ATCO Ltd., assumed operations and maintenance of the island’s power grid in 2021, it introduced systematic changes to how the system is designed, both for everyday conditions and extreme events. With the assistance of Quanta Technology’s integrated resilience planning platform, LUMA can not only enhance resilience activities, encompassing planning, operations, and emergency response, but also use the platform to estimate the proposed reliability and capacity planning network upgrades’ contribution to system resilience. These capabilities empower the utility to make informed decisions in the best interest of Puerto Rico’s ratepayers.
Continuing Work
For all utilities and regulatory bodies, the standard for decision-making has been elevated when it comes to determining where to invest in resilience efforts and prioritize upgrades to the electric grid. There is no doubt that billions will be spent on resilience over the next several years. Utilities will face capital spending challenges and rising expectations from regulators for superior electric grid resilience performance. While optimal capital spending analysis and justification for appropriate levels of resilience may not be mandatory, they will be unavoidable. Without these efforts, resilience will come at a cost that very few geographic areas can bear.
Andrija Sadikovic is a director at Quanta Technology. He is a distinguished professional with a track record of innovation and leadership in the field of electrical engineering and grid management. His expertise encompasses transmission and distribution planning, as well as smart grid technologies. Under his leadership, Quanta Technology has achieved milestones, notably the development of an augmented intelligence platform dedicated to system resilience and reliability planning, reducing labor costs and completion time while improving the decision-making process. Sadikovic holds an MBA from The Wharton School, University of Pennsylvania, and an MSEE from Northeastern University.
Shanshan Ma is a principal engineer at Quanta Technology. She holds a Ph.D. in electrical engineering from Iowa State University, specializing in electric power systems. With extensive research experience, Ma has focused on distribution system planning, resilience, renewable energy integration, and power system optimization, accumulating nearly 1,000 citations for her publications to date. She serves as the technical lead for the development of Quanta Technology’s reliability and resilience platform.