If people who reject climate science ever point to actual data, you can just about bet the farm it will be data from satellite measurements of upper-atmosphere temperatures. At least until the record-setting global heat in 2015 and 2016, some of the satellite data was amenable to the claim that global warming had magically ended in 1998.

That was always nonsense, involving cherry-picking a start year and ignoring ongoing corrections to the complex satellite measurements. That said, it is certainly fair to compare the satellite records to climate models to see what we can learn.

In the early 2000s, a run of La Niña years temporarily held global temperatures slightly below the long-term trend. The climate model projections prepared for the 2013 Intergovernmental Panel on Climate Change report, which relied on projected scenarios of future conditions from around 2005 onward, ran a little above the satellite data. Is that just because of the La Niña conditions in the Pacific, or are the models off in some way?

To find out, a group of researchers led by Ben Santer of Lawrence Livermore National Laboratory carried out a careful analysis of those models and several satellite records of temperature in the upper troposphere, about 5 to 10 kilometers (3 to 6 miles) above the surface.

Projecting

To understand this story, we need to understand how the IPCC's climate model projections are produced. Many different climate models were run under several scenarios of climate "forcings." Forcings are things like greenhouse gas emissions, volcanic eruption rates, and solar activity, all of which affect the total amount of energy entering and leaving the Earth's climate system. Each individual model run includes simulated natural variability from year to year, but averaging all the model runs together leaves a smooth line. Even if a few hundred simulations happen to produce a strong El Niño in 2016, for example, they will be cancelled out by a few hundred others simulating La Niña conditions that year.

The smooth line is the “signal,” but it comes at the cost of getting rid of the “noise” that makes each year distinct. In other words, it’s the long-term trend rather than a prediction of the exact global temperature for a given year.
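A minimal sketch of that averaging idea, using made-up numbers (the trend size, noise level, and run count are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(2000, 2020)
trend = 0.02 * (years - years[0])  # shared long-term warming signal (deg C)

# Hypothetical ensemble: 500 model runs, each with its own simulated
# year-to-year variability (standing in for El Nino / La Nina noise).
runs = trend + rng.normal(0.0, 0.15, size=(500, years.size))

ensemble_mean = runs.mean(axis=0)

# Any single run wiggles well away from the trend in some years;
# the ensemble mean hugs the trend closely because the noise cancels.
single_run_error = np.abs(runs[0] - trend).max()
mean_error = np.abs(ensemble_mean - trend).max()
print(mean_error < single_run_error)
```

The ensemble mean is the "signal" the IPCC charts show; each individual run keeps its own "noise."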

Since the real-world temperature record is like a single model simulation, no one should expect its wiggly data to match a smooth projection line perfectly (not that this stops some people). So to compare the models with real-world data, you have to average multiple years together. To avoid cherry-picking start and end points, the researchers used rolling averages of multiple lengths. They calculated 10-year average trends by sliding the decade forward one year at a time, and they extended the window length to as long as 18 years.
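The rolling-trend idea above can be sketched in a few lines. The synthetic "satellite record" here is an assumption for illustration; only the sliding-window trend calculation reflects the method described:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual temperature anomalies, 1979-2016 (deg C):
# a steady trend plus year-to-year noise, standing in for a satellite record.
years = np.arange(1979, 2017)
temps = 0.015 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

def rolling_trends(years, temps, window):
    """Least-squares trend (deg C per decade) for every span of the given
    length, sliding the start year forward one year at a time."""
    trends = []
    for start in range(years.size - window + 1):
        y = years[start:start + window]
        t = temps[start:start + window]
        slope = np.polyfit(y, t, 1)[0]  # deg C per year
        trends.append(slope * 10)       # convert to deg C per decade
    return np.array(trends)

decadal = rolling_trends(years, temps, 10)  # every possible 10-year trend
longer = rolling_trends(years, temps, 18)   # every possible 18-year trend
print(decadal.size, longer.size)            # 29 windows vs. 21 windows
```

Because every possible start year gets its own window, no single cherry-picked year (like 1998) can dominate the comparison.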

That comparison showed no real difference in the 1980s and '90s, but it showed a small but significant gap in the 2000s. To get at why, the researchers looked carefully at that gap. If natural variability alone were to blame, the satellite data should jump above the model line about as often as it drops below. Nor, in that case, would there be any reason for the second half of the time period to look different from the first half.

But the model average is warmer than the satellite data much more often than it is cooler, and only in the latter half of the comparison, which suggests the difference is not random. The researchers calculate that there is less than a 10 percent chance the mismatch is due to natural variability alone. The model average simply runs a little high in recent years.
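The logic of that "warmer more often than cooler" check is essentially a sign test. The window counts below (16 warmer out of 20) are invented purely for illustration, not the paper's actual numbers:

```python
from math import comb

# Hypothetical illustration: suppose that in 20 comparison windows from the
# latter half of the record, the model average ran warmer than the satellite
# data in 16 of them.
n, k = 20, 16

# If only random natural variability were at work, warmer-than-observed and
# cooler-than-observed windows would each occur with probability 1/2.
# Probability of a result at least this lopsided arising by chance:
p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(round(p, 3))  # prints 0.006
```

A real analysis is more involved, since overlapping windows are not independent of one another, but the intuition is the same: a lopsided run of warm-side misses is unlikely to be pure chance.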

Blame the volcanoes and Sun

Does this mean the problem is that the models are too sensitive to CO2, simulating too much warming in the upper troposphere? The researchers find no evidence for that. First of all, overly sensitive models would have run hot in the 1980s and '90s as well. And major eruptions like El Chichón in 1982 and Pinatubo in 1991 provide their own tests—the models didn't overreact to the short-term cooling influence of those events. Finally, if you break out the individual climate models, the mismatch is not greater in the models with stronger CO2 sensitivity.

Instead, the researchers say the best explanation is a bit of natural variability plus an issue we already know about—some of the scenarios for natural forcings (volcanoes, solar) used in those simulations have so far guessed wrong. Volcanoes have kicked up a little more sunlight-reflecting sulfur, and solar activity has been a little quieter—neither of which could have been predicted in advance. Put the two together, and you get a slight cooling influence compared to the model projections for this time period. Correcting these inputs has been shown to improve the match with surface temperatures, and the same would be true for the upper troposphere tracked by the satellite measurements.

So in the end, the mismatch for these upper-air temperatures is real, but the reason for it is pretty nuts-and-bolts—and doesn’t change the amount of warming we expect to see as greenhouse gas emissions continue. As the researchers are careful to remind us, “Although scientific discussion about the cause of short-term differences between modeled and observed warming rates is likely to continue, this discussion does not cast doubt on the reality of long-term anthropogenic warming.”

Nature Geoscience, 2017. DOI: 10.1038/NGEO2973