The last six VM Insights articles have focused on how regulators can support and improve utility VM. The next three articles, beginning with this one, examine the use of benchmarking to inform regulatory processes such as rate cases. While these comments are intended to discourage benchmarking for such purposes, that is not to say benchmarking has no value or place in improving UVM programs. Indeed, some detailed UVM benchmarking has been provided to the industry that recognizes and overcomes, or at a minimum attempts to compensate for, the pitfalls that will be outlined in this series of articles. Good benchmarking serves UVM practitioners and exposes new, emerging and/or common challenges and issues.

Electric utilities do not operate in markets where they are free to set the price at which they sell their product and service. Co-ops must justify rates to their members, municipal utilities receive oversight from elected civic officials, and investor-owned utilities must justify rates through a state or provincial regulatory process.

The commonality among these oversight bodies is that they serve to represent the interest of the ratepayer: to ensure utilities provide a reasonable level of service reliability at a reasonable price. Determining what constitutes reasonable service and a reasonable price is particularly challenging for VM programs.

It is not uncommon for utility regulators to request performance comparisons to other utilities. The assumption is that such comparisons will serve to monitor progress in efficiency or provide meaningful information to regulators, ratepayers and shareholders. However, in the field of VM, the information gathered generally fails to illuminate or inform decision-making. All too often, benchmarking studies are designed without any VM expertise. Consequently, such studies do not provide guidance on what the most efficient and effective utilities are doing; rather, they provide a template for becoming, at best, average. Why is that so? Is it possible to compare VM program results between utilities, and what would constitute a sound basis for such comparisons?

Answering these questions requires an understanding of what makes up the VM workload; the drivers of that workload; and which trees cause tree-related outages, how, and under what circumstances. This information is presented in detail in Vegetation Management Concepts and Principles and Managing Tree-Caused Electric Service Interruptions, and will be used here without further qualification or detailed reiteration.

Several general practices in utility benchmarking make the data provided unreliable. Typically, utilities are sent a survey to complete. Completing the survey is a cost to the participating utility. The benefit derived is that the firm undertaking the survey or benchmarking usually commits to providing all respondents with the results, so the utility gains comparisons to its peers. This process is rife with barriers to obtaining meaningful data, including:

  • The level of commitment to providing accurate, detailed data varies with the utility, the cost of providing the data, and other factors.
  • There is no control over who answers on behalf of the utility. Varying levels of commitment, urgency and competency produce variability in the accuracy of the data.
  • No audits are performed to verify the data. This allows utilities to report maintenance cycles that are theoretical, an operational fantasy, rather than the cycle actually achieved in the field. It also allows estimates or outright guesses to be supplied. The reader of the study has no way to distinguish such a response from an accurate, fact-based one.
  • In the field of VM there are very few industry-defined terms. A key gap is the lack of an industry-wide definition of a maintenance cycle. Consequently, two utilities reporting a three-year and a six-year pruning cycle may in fact be doing the same thing: pruning every tree on a circuit every six years and re-pruning 35% of them three years later. One utility might call this a 3-year cycle, while the other considers it a 6-year cycle with a mid-cycle "cycle buster" or hot spotting program.
  • Questions seeking to establish efficiency or productivity are denominated in dollars, yet no questions serve to make explicit the differences in local labor rates.
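The cycle-definition ambiguity above can be made concrete with a short worked sketch. The 35% mid-cycle figure comes from the example in the list; the rest is illustrative arithmetic, not survey data:

```python
# Two utilities perform identical field work but label it differently:
# every tree on a circuit is pruned in a full pass every 6 years, and
# 35% of trees are re-pruned 3 years after each full pass.

full_pass = 1.00   # fraction of trees pruned in the full-circuit pass
mid_cycle = 0.35   # fraction re-pruned 3 years later ("hot spotting")

# Total prunings per tree over one 6-year period, and per year:
prunings_per_period = full_pass + mid_cycle    # 1.35
annual_workload = prunings_per_period / 6      # 0.225 prunings/tree/year

# Utility A reports a "3-year cycle"; Utility B reports a "6-year cycle
# with hot spotting". A survey records different cycle lengths, yet the
# annual workload -- the quantity that actually drives cost -- is the same:
print(f"{annual_workload:.3f} prunings per tree per year")
```

A benchmarking study keyed on the reported cycle length alone would rank these two programs differently, even though their workloads, and therefore their costs, are identical.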

These general deficiencies in benchmarking VM are reason enough to reject inter-utility comparisons as a means of improving rate case decision-making. If, however, one wishes to explore whether VM benchmarking has any merit at all, there is a need to look in more detail, first at what does not work, so that what might work can emerge.