Automated, data-driven methods can reliably identify comparison groups to determine the net impact of energy efficiency interventions for evaluation and procurement, according to a new peer-reviewed study conducted by OpenEE for the Energy Trust of Oregon.
By showing that comparison groups can be produced quickly and consistently, the study demonstrates an approach that can dramatically increase the speed and efficiency of billing analyses. Faster, more consistent feedback helps energy efficiency program managers and third-party implementers optimize interventions, and it supports private investment, risk management, and the procurement of efficiency as a resource.
The Need for Meter-Based Approaches to Net Impact
Meter-based approaches for determining energy savings have the potential to reinvent energy efficiency by allowing it to scale and making it more responsive to a rapidly changing grid. To function as a grid resource, however, the procurement of energy efficiency has to account for factors outside a program's influence that affect energy usage, such as economic cycles, natural adoption of new technology, or other population-level changes.
The net impact of energy efficiency for meter-based performance procurement must reflect the incremental effect of the known intervention above and beyond population trends of energy consumption. It is also essential that this analysis can run longitudinally and transparently as efficiency is deployed, to deliver the greatest value to the system by creating a feedback loop.
OpenEE’s Approach
OpenEE’s approach to net impact begins with the CalTRACK methods, which produce site-level savings that are normalized for weather and occupancy and reflect savings for customers at the meter. Automated site-level comparison group matching is then used to select non-participant groups that account for natural population-level consumption changes. When feasible (e.g., for retrospective analysis), it is also recommended to use multiple comparison group identification methods simultaneously, which yields more stable results.
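One common way to select matched non-participants is nearest-neighbor matching on consumption shape. The sketch below is a minimal, hypothetical illustration of that idea, not the study's actual matching algorithm: it assumes each site is described by a 12-month usage profile, normalizes each profile by its mean so matching reflects load shape rather than absolute consumption, and picks the closest non-participants by Euclidean distance.

```python
import numpy as np

def match_comparison_group(participants, nonparticipants, k=1):
    """For each participant's 12-month usage profile, return the indices
    of the k nearest non-participant profiles by Euclidean distance.
    (Hypothetical sketch of consumption-pattern matching, not the
    study's actual implementation.)"""
    P = np.asarray(participants, dtype=float)
    N = np.asarray(nonparticipants, dtype=float)
    # Normalize each profile to its own mean so the match is driven by
    # the shape of consumption over the year, not its absolute level.
    Pn = P / P.mean(axis=1, keepdims=True)
    Nn = N / N.mean(axis=1, keepdims=True)
    # Pairwise distance matrix: (n_participants, n_nonparticipants).
    d = np.linalg.norm(Pn[:, None, :] - Nn[None, :, :], axis=2)
    # Indices of the k closest non-participants for each participant.
    return np.argsort(d, axis=1)[:, :k]
```

In this toy setup, a non-participant whose seasonal pattern mirrors a participant's (even at a different overall consumption level) is selected over one with a flat profile, which is the behavior shape-based matching is meant to capture.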
Using a difference of differences with a comparison group allows for the longitudinal tracking of both site-level savings and the impact on load, net of population trends. These kinds of automated approaches are well suited to meter-based efficiency because they can be deployed up front and are replicable and accessible to all players. Combining them with auditable custom and non-routine adjustments maintains the consistency of meter-based calculations, a necessary prerequisite for creating the confidence needed to attract capital, manage risk, establish pay-for-performance markets, and ultimately scale energy efficiency investments.
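The difference-of-differences calculation itself is simple arithmetic: the participants' change in consumption minus the comparison group's change over the same period. The sketch below is illustrative only, using hypothetical average-consumption inputs; in practice the study's approach applies weather and occupancy normalization (via CalTRACK) before this step.

```python
def net_impact(part_pre, part_post, comp_pre, comp_post):
    """Difference of differences: the participants' consumption change
    minus the comparison group's change over the same period.
    Inputs are average consumption per group and period (e.g. kWh/home).
    Illustrative sketch only, not the study's full method."""
    gross_change = part_pre - part_post       # raw participant savings
    population_trend = comp_pre - comp_post   # change absent the program
    return gross_change - population_trend    # net program impact
```

For example, if participants drop from 1,000 to 880 kWh (a gross change of 120) while the comparison group drops from 1,000 to 960 (a population trend of 40), the net impact attributable to the program is 80 kWh.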
Comparison Group Identification
The Energy Trust of Oregon and Open Energy Efficiency study, informed by the DOE’s Uniform Methods Project chapter on whole-building analysis, tested a range of available comparison group matching approaches based on the consumption patterns of past participants and publicly available metadata to identify appropriate non-participant comparisons. Through out-of-sample testing, it was possible to quantify the relative accuracy of each approach at predicting future consumption patterns for treated buildings.
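Out-of-sample testing of this kind typically means predicting a holdout period's consumption and scoring the prediction with standard goodness-of-fit metrics. The sketch below computes two metrics commonly used in billing analysis, normalized mean bias error (NMBE) and CV(RMSE); it is a hypothetical illustration and not the study's exact metric set.

```python
import math

def out_of_sample_error(predicted, observed):
    """Score predictions against observed consumption for a holdout
    period. Returns (NMBE, CV(RMSE)), both normalized by the mean of
    the observed values. Hypothetical sketch, not the study's exact
    evaluation procedure."""
    n = len(observed)
    mean_obs = sum(observed) / n
    # Mean bias error: systematic over- or under-prediction.
    bias = sum(p - o for p, o in zip(predicted, observed)) / n
    # Root-mean-square error: overall magnitude of prediction error.
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    return bias / mean_obs, rmse / mean_obs
```

Lower values on both metrics indicate a comparison group whose consumption better predicts the treated buildings' counterfactual usage, which is the basis for ranking the matching approaches against one another.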
The primary takeaways of this study are twofold.
First, no single one-size-fits-all matching method produced the best fit for all sectors or intervention types. It is clear, however, that these matching approaches can yield reliable, closely matching results, and that they can be developed in advance and tested against empirical data.
Second, these methods can generate comparison groups that are followed longitudinally alongside participants, allowing aggregators, utilities, and regulators to track the performance of the comparison group and of program participants side by side.
Recommendations
This analysis demonstrates that there are substantial advantages to this two-stage approach, site-level savings followed by comparison group adjustment, in the context of quantifying the net impact to load in close to real time.
When impact evaluations are used to adjust payments, completing them months or years after interventions occur, with custom methods that cannot be replicated or predicted, is a recipe for controversy. Confidence erodes when results do not match aggregators' expectations or when different methods yield substantially different results. If an impact evaluation is used to validate savings claims, reproducibility and transparency should be the primary focus, since they enable the methods to be contractually specified.
Regulators, evaluators, and utilities can test methods using past participants or representative customers and agree on a practical method for deriving a comparison group during program deployment. By agreeing to comparison group identification approaches in advance, it becomes possible to track both site-level savings and net impact longitudinally, giving utilities and the market the feedback needed to optimize results while it is still useful to do so. This can drive a continuous feedback loop that improves products and programs by tying product and program changes directly to consumption changes at the meter.