Reliability is measured in outage incidents, outage duration and customers affected. These records, plotted by year, provide an excellent relative measure of the success of a VM program. Historically, however, this data did not provide a sound foundation for comparing effectiveness against outside VM programs. Variability in outage reporting had always been a concern, even within a single utility. Hence, these measures could be used on a relative or historical basis, provided there was no reason to think that outage reporting had changed for better or worse.

Technological advancements have provided systems that automate the capture of outage data. While these systems have made outage data far more accurate and reliable, they do not facilitate inter-utility comparisons because the statistics in themselves do not provide the context. Utilities that have higher tree exposure (trees/mile) will have both a higher absolute number of outages and a higher ratio of tree-caused outages relative to all unplanned outages. Can you determine whether a New England utility where tree-related outages are 26% of all unplanned outages has a less effective VM program than an Arizona utility with 8% tree-related outages? As a basis for comparison, it is necessary to have an inventory of trees capable of growing into or falling onto the lines. Comparing utilities on the number of tree incidents per 1000 trees of exposure would constitute a rational, meaningful approach. However, even this metric would need to be carefully weighted to reflect differences in tree species, environmental conditions and the occurrence of pest infestations.
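The normalization described above can be sketched in a few lines. This is an illustrative calculation only; the utilities, outage counts and tree inventories below are hypothetical numbers, not data from the article.

```python
# Illustrative sketch (all figures hypothetical): comparing tree-caused
# outage performance only after normalizing by tree exposure.

def incidents_per_1000_trees(tree_outages: int, exposed_trees: int) -> float:
    """Tree-caused outage incidents per 1000 trees capable of
    growing into or falling onto the lines."""
    return 1000 * tree_outages / exposed_trees

# A heavily treed system can show a higher *share* of tree-caused
# outages yet perform better once exposure is accounted for.
heavy_exposure = incidents_per_1000_trees(tree_outages=2600, exposed_trees=1_200_000)
light_exposure = incidents_per_1000_trees(tree_outages=800, exposed_trees=150_000)

print(f"Heavy-exposure utility: {heavy_exposure:.2f} incidents per 1000 trees")
print(f"Light-exposure utility: {light_exposure:.2f} incidents per 1000 trees")
```

With these invented figures the heavy-exposure utility comes out ahead on the normalized metric despite a larger raw outage count, which is exactly why the raw percentages alone cannot settle the New England versus Arizona question.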

While some variables and methods for comparing utility VM programs have been offered here, they are data-intensive and require a higher level of statistical analysis. The criticisms of VM benchmarking cannot be easily overcome. If utility VM programs are to be compared, the following factors must be present or accounted for:

  • Very similar tree exposure
  • Similar clearance standards
  • Similar urban-rural mix
  • Similar customer density
  • Known and similar growth rates
  • Similar geographic area and environmental conditions
  • Defined, and thereby standardized and comparable, terms, e.g. hazard tree, danger tree, risk tree, maintenance cycle
  • Uniform measures of productivity, e.g. man-hours per unit, which removes the influence of labor rates
  • Similar units of measure for VM practices, e.g. acres, trees pruned, tree removals by similar size categories
  • Similar political and regulatory environment, e.g. no rules eliminating or severely limiting any integrated VM practice such as herbicide applications
Benchmarking that does not address these considerations cannot inform the decision-making process regarding the appropriate size, scale and cost of a VM program. While such benchmarking data, in the absence of anything else, may hold enormous appeal for regulators as an avenue for demonstrating due diligence, its limited worth must be recognized.

When the source and the expansion of the vegetation management workload are understood, a new approach for ensuring the effective use of ratepayer dollars emerges for the regulator. There is a specific amount of VM work that needs to be completed every year to achieve a least-cost, sustainable VM program. Failure to remove the annual workload volume increment results in exponentially expanding costs. The questions of relevance to both utility management and the regulator become:

  • How do we determine if the current utility VM program is a sustainable program?
  • How do we determine if the current utility VM program is the least-cost sustainable program?
  • How does one determine the annual workload volume increment?
  • How does one assess utility VM productivity?
  • What are unit costs?
  • Are there historical tracking metrics that will ensure the least-cost sustainable program and provide a snapshot of program status?
In contrast to inter-utility benchmarking, answering these questions will simultaneously provide a clear path to both an effective VM program and effective regulatory oversight of the utility VM program.
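The compounding effect of failing to remove the annual workload increment can be illustrated with a simple model. The 10% annual growth rate and work volumes below are hypothetical assumptions for illustration, not figures from the article; the point is only the shape of the curve, deferred work snowballing rather than accumulating linearly.

```python
# Illustrative sketch (growth rate and volumes hypothetical): if the
# annual workload volume increment is not fully removed, the carried-over
# backlog compounds, because unworked trees keep growing into clearance.

def backlog_over_time(annual_increment: float, removal: float,
                      years: int, growth: float = 0.10) -> list[float]:
    """Backlog of VM work units by year; unremoved work is assumed
    to expand by `growth` per year."""
    backlog = 0.0
    history = []
    for _ in range(years):
        backlog = max((backlog + annual_increment - removal) * (1 + growth), 0.0)
        history.append(backlog)
    return history

# Removing the full increment each year keeps the program sustainable;
# removing only 80% of it lets the workload snowball.
sustainable = backlog_over_time(annual_increment=100, removal=100, years=10)
deferred = backlog_over_time(annual_increment=100, removal=80, years=10)
print(sustainable[-1])  # 0.0
print(round(deferred[-1], 1))  # far more than the 200 units simply deferred
```

Under these assumptions, ten years of 20 deferred units produces a backlog well above the 200 units nominally skipped, which is the "exponentially expanding costs" dynamic in miniature.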

There is a need to educate regulators and utility executives about these realities. That education can only occur with the input of seasoned utility foresters, as these realities remain largely indiscernible to those without expertise in the domain of utility vegetation management.

This series of articles has focused on discouraging the use of benchmarking to inform regulatory decision making. However, the use of benchmarking by utilities to identify industry trends, practices and common or emerging issues for the purposes of continuous improvement is a valid application. When the benchmarking study has been designed by UVM professionals and the results are evaluated in the context of the potential pitfalls outlined here, it provides utility management with carefully considered guidance for VM program improvement.