But whether the cost of the upgrades is worth the energy they save — whether, in effect, they “pay for themselves” — is an important question. There are many programs around the country that provide homes with efficiency-boosting upgrades. Until now, these programs have often been evaluated using theoretical models, and there’s been very little fieldwork to actually observe the savings they produce. This is problematic because it means policymakers don’t actually know how their programs are performing in real life.

“There’s a lot riding on energy efficiency programs, and we need to understand better what we know and don’t know about them,” says Karen Palmer, research director and senior fellow at Resources for the Future.

Now, researchers from the University of Chicago and the University of California, Berkeley have released a field study today comparing the cost of efficiency upgrades with the energy they save — and its results suggest that savings may not actually be worth the costs.

The study evaluated a sample of more than 30,000 Michigan households enrolled in the federal Weatherization Assistance Program (WAP), which targets low-income families. It found that the cost of the energy efficiency investments made in the houses (which included upgrades such as new furnaces and water heaters) was twice the value of the energy they saved. Each household received approximately $5,000 worth of energy-saving upgrades, paid for by the government, but the average value of the energy saved per home, computed from Michigan energy prices and the amount of energy saved in each case, was only about $2,400. For this sample, at least, the benefits did not justify the costs.
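The arithmetic behind the "twice the value" claim is straightforward. A quick back-of-the-envelope check, using the article's rounded per-household figures (not data from the study itself), might look like this:

```python
# Illustrative check of the article's rounded per-household averages.
upgrade_cost = 5000.0    # approximate cost of upgrades per household, in dollars
energy_savings = 2400.0  # approximate value of energy saved per household, in dollars

benefit_cost_ratio = energy_savings / upgrade_cost  # share of costs recovered
cost_benefit_ratio = upgrade_cost / energy_savings  # how many times costs exceed benefits

print(f"Savings cover {benefit_cost_ratio:.0%} of costs")     # Savings cover 48% of costs
print(f"Costs are about {cost_benefit_ratio:.1f}x the savings")  # Costs are about 2.1x the savings
```

On these rounded numbers, the savings recover a little under half of what the upgrades cost, which is the basis for the study's roughly two-to-one figure.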

And it wasn’t just a problem with cost: The researchers found that the actual energy savings they observed in the field were less than half what had been projected by models. The reason for the discrepancy is still unclear. In the past, some scientists have predicted that a kind of “rebound effect” — which is when residents start consuming more energy once they upgrade their homes — could cause energy savings to be less than models might predict. However, the authors of this paper conducted surveys to determine if that’s what was going on, and it turns out the rebound effect was not a significant factor in this case.

It’s important to note that these results may not apply to all energy efficiency programs across the nation, says study co-author Meredith Fowlie, an associate professor of economics at the University of California, Berkeley. “This is one study in one state looking at one subpopulation and one type of measure,” she says. “I would not feel comfortable generalizing from our study in Michigan.”

Richard Newell, a professor of energy economics at Duke University and former head of the U.S. Energy Information Administration, agreed that caution is necessary when considering these results. Energy efficiency programs vary widely in nature, he stated in an email to the Post, and furthermore, “[Weatherization Assistance Programs] themselves vary substantially across states and as a class most energy analysts would not consider these programs to be at the top end of cost-effectiveness relative to other energy efficiency programs.”

Even so, Palmer from Resources for the Future says she feels that “the difference that they find between the field savings and the observed savings in these households is big enough to suggest that we should be doing better evaluations.” In the future, a variety of similar studies should be conducted in different locations and with different energy efficiency programs, she says.

Paul Stern, senior scholar at the National Research Council, says more should also be done to figure out how different kinds of upgrades stack up against one another, since it’s possible that some types of energy efficiency investments may produce better results than others. For example, he says, researchers might investigate whether savings were higher with new furnaces as opposed to new insulation.

And the study raises some other interesting questions as well, Stern says. The fact that the models’ predictions on the program’s energy savings were so far off from the actual field observations is a problem that should be investigated. “If a programmer is using a model that’s this bad, what’s going on here?” he says. “Is the model really that wrong?”

There are a few possible explanations, even if the study can’t say for sure what’s causing the mismatch. One theory, which the authors put forward in the paper, is that the models underestimate the efficiency of the homes before they ever get their upgrades. Another possibility, according to both Stern and Fowlie, is that mistakes happen when the upgrades are actually being installed. If the models don’t account for imperfections in the upgrades themselves, that could explain some of the discrepancy.

Finding ways to conduct better field evaluations and improve models could have some immediate practical benefits, according to Palmer. For one thing, it could be useful to states as they look for ways to comply with the Clean Power Plan. “In the Clean Power Plan, some states may really want to focus on energy efficiency because they believe that’s the low-cost option — and maybe it is,” Palmer says. “But I think they owe it to themselves to find ways to evaluate that.”