Demand response (DR) is already in use across most utility territories, and its role is growing. As load growth accelerates and infrastructure ages, it is becoming one of the most accessible tools for managing peak demand. Unlike new generation, DR can be deployed quickly and does not require costly infrastructure.

But performance is not always consistent. During Winter Storm Elliott in 2022, for example, DR resources in PJM delivered only 26-32% of what was expected.1 The issue isn’t that utilities lack data; it’s that insights arrive months after events, when it’s too late to respond. When utilities can’t see performance clearly or compare results across programs in real time, they lose the chance to adapt strategies while it still matters.

Delayed Results, Divergent Forecasts

The challenge of misaligned performance results becomes clear when you look at how a typical DR season unfolds.

1. PRE-SEASON: Expectations get set early

Each demand response season begins with forecasts informed by prior results. Planning models reflect how demand response is expected to perform under peak conditions.

Once the season begins, those assumptions largely stay in place, even as participation, customer behavior, and weather patterns shift in real time. Performance expectations tend to remain fixed across programs that are often managed and reported separately.

2. DURING THE SEASON: Events happen, but learning lags

DR events are triggered (in response to price-based signals, peak weather events, etc.) and deliver load reductions to support system reliability. Results are visible in aggregate, but the drivers of performance (across regions, customer segments, etc.) are difficult to parse.

Early signals about portfolio performance are often available, but standard reporting rarely shows where performance lagged or what drove results. Without insight into localized delivery or missed opportunities, programs continue without adjustment.

3. POST-SEASON: Clarity arrives too late

Many months after the season ends, program evaluations can bring clarity. Performance drivers are better understood and lessons get documented.

But by this point, planning timelines are already moving forward, and the next season begins with many of the same assumptions in place. The feedback loop is slow, hindering program improvement from season to season.

The Consequences

As demand response takes on a larger role in mitigating grid capacity constraints, delays in performance insight become more costly. Without timely data, planners default to conservative assumptions to protect reliability, often leading to unnecessary investment in supply-side resources and higher system costs.

Even when demand response delivers, delayed or inconsistent measurement erodes confidence. Programs get derated or sidelined, and DR budgets get lowered. Proven, cost-effective resources are left underused—even in areas where they could help address local grid constraints—simply because their value isn’t visible when it counts.

What It Takes to “Fix” Demand Response

To make demand response a more reliable resource, performance must be measured consistently across programs and quickly enough to act on. When utilities can learn faster (between events and during active program seasons), demand response begins to function less like a collection of programs and more like a portfolio resource.

So how can we shorten the feedback cycle? Let’s apply the framework from our previous article to show how it improves demand response outcomes.

1. Understand delivered impact, down to the meter.

Usage data from before, during, and after demand response events gives utilities a clear view of what was actually delivered, and in what context. When baselines and normalization are applied consistently, true response can be separated from weather and load noise. Instead of relying on portfolio totals or program-specific reports, utilities can compare performance across regions and customer segments using the same approach.

Example: With immediate access to meter-based performance data, a utility uncovered meaningful differences in how commercial customers delivered demand flexibility. Large industrial participants sustained 35 percent savings throughout the event window, while retail customers delivered more than a 25 percent reduction in the first hour but then returned to baseline. Instead of treating demand response as a uniform resource, the program team began dispatching customers to achieve sustained performance across segments.
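To make the mechanics concrete, here is a minimal sketch of a meter-level impact calculation in Python. It assumes hourly interval data in a pandas DataFrame with illustrative column names, and it uses a simple average of comparable non-event days as the baseline; this is a simplified stand-in for the consistent baselines and weather normalization described above, not a specific measurement method.

```python
# Minimal sketch: per-customer event impacts from hourly interval data.
# Assumes a DataFrame `usage` with columns: customer_id, date, hour, kwh.
# The baseline (an average of comparable non-event days) is a simplified
# stand-in for a consistent, weather-normalized baseline.
import pandas as pd

def event_savings(usage: pd.DataFrame, event_day: str, baseline_days: list) -> pd.DataFrame:
    """Return per-customer, per-hour savings (baseline minus observed) for one event day."""
    observed = usage[usage["date"] == event_day]
    baseline = (
        usage[usage["date"].isin(baseline_days)]
        .groupby(["customer_id", "hour"], as_index=False)["kwh"].mean()
        .rename(columns={"kwh": "baseline_kwh"})
    )
    out = observed.merge(baseline, on=["customer_id", "hour"])
    out["savings_kwh"] = out["baseline_kwh"] - out["kwh"]
    out["pct_reduction"] = out["savings_kwh"] / out["baseline_kwh"]
    return out[["customer_id", "hour", "baseline_kwh", "kwh", "savings_kwh", "pct_reduction"]]
```

Because every meter is measured the same way, the resulting per-customer table can be rolled up by segment or region and compared directly.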

2. Learn faster while programs are still active.

When performance insight arrives within days rather than months, utilities can begin to see patterns early in the season. Assumptions can be tested against observed outcomes instead of carried forward unchanged. Learning happens between events, while there is still time for it to matter.

Example: Rapid program feedback revealed that program value was concentrated among a relatively small group of participants. While the median customer saved less than 2 kWh during event hours, average savings reached 19 kWh because of the outsized impact of a few very large responders. The data also showed that more than a quarter of customers had negative savings. With visibility into individual performance, the utility shifted recruitment and incentive efforts toward high-impact customers while reassessing persistently low performers.
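A minimal sketch of this kind of distribution check is below, assuming a table of total event-hour savings per customer; the column names, top-decile cut, and flag labels are illustrative assumptions, not a specific utility’s criteria.

```python
# Minimal sketch: summarize the distribution of per-customer event savings.
# Assumes a DataFrame `per_customer` with columns: customer_id, savings_kwh
# (total savings across event hours). Thresholds are illustrative.
import pandas as pd

def savings_distribution(per_customer: pd.DataFrame) -> dict:
    """Compare median vs. mean savings and surface concentration among participants."""
    s = per_customer["savings_kwh"]
    return {
        "median_kwh": s.median(),
        "mean_kwh": s.mean(),              # pulled upward by a few large responders
        "share_negative": (s < 0).mean(),  # fraction of customers who used more than baseline
        "top_decile_share_of_savings": s[s >= s.quantile(0.9)].sum() / s.sum(),
    }

def flag_participants(per_customer: pd.DataFrame, low_threshold: float = 0.0) -> pd.DataFrame:
    """Label customers for targeted recruitment or reassessment."""
    labeled = per_customer.copy()
    labeled["flag"] = "typical"
    labeled.loc[labeled["savings_kwh"] >= labeled["savings_kwh"].quantile(0.9), "flag"] = "high_impact"
    labeled.loc[labeled["savings_kwh"] < low_threshold, "flag"] = "low_or_negative"
    return labeled
```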

3. Turn results into planning-grade insight.

When performance is measured consistently across programs and partners, results can be rolled up over time into a single, defensible view. Forecasts can be updated based on observed delivery, and planning teams can clearly explain how assumptions evolved.

Example: By the end of the season, a utility consolidated measured DR event results across multiple programs using a standardized measurement approach. Instead of relying on a single static assumption, planners incorporated observed performance ranges into the next planning cycle, improving confidence in DR forecasts and reducing the need for conservative buffers.
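A minimal sketch of the roll-up step is below, assuming each row is one program-event observation of delivered MW measured with the same method across programs; the percentile choices are illustrative rather than a planning standard.

```python
# Minimal sketch: roll measured event results up into planning-grade ranges.
# Assumes a DataFrame `events` with one row per (program, event) and a
# delivered_mw column measured consistently across programs.
import pandas as pd

def planning_ranges(events: pd.DataFrame) -> pd.DataFrame:
    """Summarize delivered capacity per program as a range, not a single point."""
    return (
        events.groupby("program")["delivered_mw"]
        .agg(
            p10=lambda s: s.quantile(0.10),  # conservative end of observed delivery
            median="median",
            p90=lambda s: s.quantile(0.90),  # optimistic end of observed delivery
            n_events="count",
        )
        .reset_index()
    )
```

Carrying a range like this into the next planning cycle, rather than a single static number, is what lets conservative buffers shrink as observed delivery accumulates.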

With faster, portfolio-wide insight, utilities spend less time reconciling conflicting reports and more time building confidence in results. Demand response becomes easier to explain and easier to rely on because its performance is visible and consistent across the portfolio.

What Comes Next

Demand response is often a practical starting point for improving performance insight because events are discrete and time-bound. Distributed energy resources introduce greater complexity, because their impacts are continuous and hard to isolate.

Next up: Bringing DER Performance Into the Open. Why portfolio-level measurement becomes even more critical as behind-the-meter resources scale.

Connect with our team to learn how FLEX can support your team’s demand-side goals.


  1. https://www.esig.energy/wp-content/uploads/2025/02/ESIG-Demand-Response-Wholesale-Markets-report-2025.pdf
