The last three posts were mostly about the adjustments to the ocean data in the Karl 2015 paper. That is because the adjustments to the ocean data had the biggest impact on the result (that there wasn’t something like a “hiatus”). Kevin Marshall of the excellent blog manicbeancounter.wordpress.com reminded me in a comment on the previous post that the surface datasets have issues as well.

I could agree with that one. I had already written a post in my first year of blogging, Things I took for granted: Global Mean Temperature, that described how my perception of a global mean temperature changed from believer to skeptic, and why I had a hard time believing that the (surface) datasets were accurate enough to capture a 0.8 °C increase in temperature over 160 years.

Reading it back, I was a bit surprised that I wrote this already in my first year of blogging. But, in line with the Karl et al paper, there are two things that I think were missing in this early piece.

First, the data in the surface datasets are not measurements, but estimates derived from the temperature station measurements. That much could already be concluded from the uneven spatial coverage, the convenience sampling and other measurement biases like Urban Heat Island, Time of Observation and who knows what more. This means that the homogenized end result is just an estimate of the actual mean temperature.
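To make that concrete, below is a minimal, purely illustrative sketch of how such an estimate comes about. The station values and the simple 5° gridding scheme are made up for illustration and are not the method of any particular dataset: stations are binned into grid cells, the cell averages are weighted by area, and the “global mean” falls out as a computed number rather than something anyone measured.

```python
import numpy as np

# Hypothetical stations: (latitude, longitude, temperature anomaly in °C)
stations = [
    (52.1,    4.3,  0.6),
    (51.9,    4.8,  0.7),   # two nearby stations end up in the same cell
    (-33.9, 151.2,  0.3),
    (64.8, -147.7,  1.1),
]

cell_size = 5.0  # degrees; real datasets make comparable gridding choices

# Average all stations that fall into the same 5x5 degree cell
cells = {}
for lat, lon, anomaly in stations:
    key = (int(lat // cell_size), int(lon // cell_size))
    cells.setdefault(key, []).append(anomaly)

# Weight each cell by the cosine of its central latitude, because
# cells near the poles cover less area than cells near the equator
weighted_sum = 0.0
weight_total = 0.0
for (lat_idx, _), anomalies in cells.items():
    cell_lat = (lat_idx + 0.5) * cell_size
    weight = np.cos(np.radians(cell_lat))
    weighted_sum += weight * np.mean(anomalies)
    weight_total += weight

global_mean_anomaly = weighted_sum / weight_total
print(f"Estimated global mean anomaly: {global_mean_anomaly:.2f} °C")
```

The real products add homogenization, infilling of empty cells and bias corrections on top of a scheme like this, which is exactly where the adjustments come in.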

Okay, not really world-shaking, not really unexpected, but important to know. When I was a believer, I assumed that scientists knew exactly what the global mean temperature was and that they could compare the current temperature with that of the past. How many people among the public realize that the mean temperature is not a measurement, but an estimate? How many people know that the surface datasets have changed so often? And that these adjustments change the conclusions? How many journalists in the mainstream media give some background information when they report headlines like “Warmest year ever”?

Second, and in line with the first point, that estimate is a moving target, as shown clearly in the NOAA and NASA/GISS data. Those changed substantially over time, from a cycle in the last century to almost a straight line now. And now there is the recent attempt at making the “hiatus” disappear in the NOAA data, by ignoring high quality data and adjusting low quality data.

This shows that the dataset is ever changing, although the measurements stay the same. It also indicates that the data were not of good quality in the first place, and therefore in need of adjustments to correct for their problems. This raises some questions, for example: how do we know whether the current estimate is correct? Apparently the estimates of the past were not correct and had to be adjusted. If the estimates are that pliable and so easily (re)adjusted, then how reliable are the results?