Chip Knappenberger has posted about some papers, including Santer et al. (2011, Separating Signal and Noise in Atmospheric Temperature Changes: The Importance of Timescale, Journal of Geophysical Research, doi:10.1029/2011JD016263). The post proves one thing: that Knappenberger doesn’t get it. In more than one way.



He doesn’t understand the Santer et al. paper, he doesn’t understand the implications of Foster & Rahmstorf (2011), he fails to comprehend the value and validity of computer models, and he is clueless about the danger posed by global warming — even in a “best case” scenario.

The reasoning to support his arguments is this:



What makes the Foster and Rahmstorf work particularly encouraging for lukewarmers is that the authors find that for periods of 30 years or so, the removal of natural variability makes little difference on the magnitude of the observed trend in the lower atmosphere. However, thinking back upon the results from Santer et al., the same is probably not entirely true for all of the climate model runs for the 1979-2010 time period. Almost certainly, the combination of random variability has added some amount of noise to the trend distribution even at time frames of 30 years or so. What this means, is that if the modeled temperatures were also stripped of their natural variability, then the 95% range of uncertainty (the yellow area depicted in Fig. 2) would contract inwards towards the model mean (green line). The net effect of which would be to make the observed trends (red and blue lines in Fig. 2) over the past 30 years or so lie even closer to (if not completely outside of) the lower bound of the 95% confidence range from the model simulations. Such a result further weakens our confidence in the models and further strengthens our confidence that future warming may well proceed at a modest rate, somewhat similar to that characteristic of the last three decades.



This is some of the most ludicrous nonsense ever written. What Knappenberger is really saying is that since natural variability contributes to uncertainty, we can “imagine it away” — even if we don’t know what it is! This is nothing more or less than shrinking the confidence interval based on wishful thinking.

Sure, if you account for natural variation you can shrink the error range, but the mean itself will also change, and you don’t know where the confidence interval will end up unless you actually know the natural variation. Claiming that F&R2011 demonstrates that the mean will not change, and that we can safely conclude which way the confidence interval will change, is nothing more than wishful thinking. It’s utter folly to extrapolate from “we could account for natural variation if we knew what it was” to “we can therefore shrink the error range even though we don’t know what the natural variation is.” Such a claim calls to mind the classic phrase “not even wrong.”
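To see why both the trend and its uncertainty move when you regress out a natural factor, here’s a minimal toy sketch using made-up monthly data and a single ENSO-like covariate. It illustrates the statistical point only; it is not a reproduction of the Foster & Rahmstorf analysis, which used several exogenous factors and a proper treatment of autocorrelation:

```python
# Toy illustration only: made-up data, one ENSO-like covariate, ordinary least
# squares. It shows that removing a "natural variability" term changes both
# the fitted trend and its uncertainty -- and you can't know by how much
# unless you actually know the covariate.
import numpy as np

rng = np.random.default_rng(0)
n = 32 * 12                        # 32 years of monthly data
t = np.arange(n) / 12.0            # time in years

enso = np.sin(2 * np.pi * t / 3.7) + 0.3 * rng.standard_normal(n)  # stand-in "ENSO" index
temp = 0.017 * t + 0.12 * enso + 0.08 * rng.standard_normal(n)     # synthetic temperature

def trend_and_se(y, X):
    """OLS fit; return (coefficient on time, its standard error).
    Time must be column 1 of the design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

X_raw = np.column_stack([np.ones(n), t])         # intercept + trend
X_adj = np.column_stack([np.ones(n), t, enso])   # intercept + trend + ENSO term

for label, X in [("raw", X_raw), ("ENSO regressed out", X_adj)]:
    b, se = trend_and_se(temp, X)
    print(f"{label:20s} trend = {b:.4f} deg C/yr, 95% CI half-width = {2 * se:.4f}")
```

Run it and the uncertainty shrinks once the covariate is accounted for, but the central estimate shifts too, by an amount that depends on how the covariate happens to line up with time over that particular interval.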

And by the way, Knappenberger is also flat-out wrong about the yellow area in the graph from Santer et al. being the “95% range of uncertainty.” It’s the range of model results from the 5th to the 95th percentile, which leaves out 5% on both the high and low ends, making it a 90% range, not a 95% range; an actual 95% range would extend from the 2.5th to the 97.5th percentile of the model results.
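If the distinction seems pedantic, the arithmetic is easy to check with any sample of numbers standing in for model trends (the values below are invented purely for illustration):

```python
# Percentile arithmetic check with an invented sample of "model trends".
import numpy as np

rng = np.random.default_rng(1)
model_trends = rng.normal(loc=0.25, scale=0.08, size=10_000)   # made-up numbers

lo90, hi90 = np.percentile(model_trends, [5, 95])      # cuts off 5% on each side
lo95, hi95 = np.percentile(model_trends, [2.5, 97.5])  # cuts off 2.5% on each side

inside_90 = np.mean((model_trends >= lo90) & (model_trends <= hi90))
inside_95 = np.mean((model_trends >= lo95) & (model_trends <= hi95))
print(f"5th-95th percentile span covers     {inside_90:.0%} of the sample")
print(f"2.5th-97.5th percentile span covers {inside_95:.0%} of the sample")
```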

The point of Knappenberger’s nonsense logic is to claim this:



But what’s worse is that a model/observation disparity could indicate that the climate models are not faithfully reproducing reality, which would mean that they are not particularly valuable as predictive tools. My conclusion (which is different from that of the authors) based upon the research presented by Santer et al.—that the models are on the verge of failing—is further strengthened by the results of another paper published in 2011 by Foster and Rahmstorf.



The failure of Knappenberger’s logic boggles the mind. Suppose we were talking about weather models rather than climate models. There are certainly model/observation disparities, all the time, and we could easily identify something the models don’t do particularly well, focus attention on that to the exclusion of all else, and by Knappenberger’s logic conclude that “they are not particularly valuable as predictive tools.” But every weather forecaster knows that in spite of their imperfections (which are legion), computer models are by far the best predictive tools we’ve got. My guess is that even Anthony Watts wouldn’t deny that.

Climate models, despite their imperfections (which are legion), are also the best predictive tools we’ve got. And even though they don’t give especially good answers to some questions, they do give especially good answers to others. Calling them “on the verge of failing” tells us nothing about reality, but quite a lot about Chip Knappenberger’s preconceptions.

Here, for instance, is a comparison of surface temperature (not tropospheric temperature) from AR4 model runs simulating the 20th century, to GISS temperature data:

Not only is the GISS temperature well within the envelope of model results, it’s quite close to the multi-model mean — which needn’t be the case because reality is only one “realization” of the climate system. In fact GISS temperature is stunningly close to the multi-model mean, as is shown by the difference between them:

The only visually notable discrepancy is from 1937 to 1945, the period during which a change in the way sea-surface temperatures were measured may have contaminated the observed temperature record. We all look forward to new estimates of sea surface temperature which are designed to account for this data discrepancy. If the revised 1937-1945 data are in even better accord with model results (which I expect), it would be a spectacular endorsement of climate models — and yet another case in which the reason for model/observation disparity was that the models were right, the observed data were faulty. But I suspect even that won’t make Chip Knappenberger budge from his belief that the models are “on the verge of failing.”
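For anyone who wants to make this kind of observed-minus-multi-model-mean comparison themselves, here is a rough sketch of the bookkeeping. The file names and their layout (annual series, year in the first column, anomaly in the second) are placeholders, not the actual files behind the figures above:

```python
# Rough sketch only: placeholder file names, simple annual-mean comparison.
import numpy as np

giss = np.loadtxt("giss_annual_anomaly.txt")       # columns: year, anomaly (deg C)
model = np.loadtxt("ar4_multi_model_mean.txt")     # columns: year, anomaly (deg C)

# Restrict both series to their common years
years = np.intersect1d(giss[:, 0], model[:, 0])
obs = giss[np.isin(giss[:, 0], years), 1]
mod = model[np.isin(model[:, 0], years), 1]

# Put both on the same baseline over the common period, so the comparison
# doesn't depend on each dataset's choice of anomaly reference period
obs = obs - obs.mean()
mod = mod - mod.mean()

diff = obs - mod
worst = np.abs(diff).argmax()
print(f"Std. dev. of (observed - model mean): {diff.std(ddof=1):.3f} deg C")
print(f"Largest single-year difference: {diff[worst]:+.3f} deg C in {int(years[worst])}")
```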

The real heart of Knappenberger’s post, and perhaps the most foolish failure of his reasoning, is this:



So what I have documented is a collection of observations and analyses that together is telling a story of relatively modest climate changes to come. Not that temperatures won’t rise at all over the course of this century, but rather than our climate becoming extremely toasty, it looks like we’ll have to settle (thankfully) for it becoming only lukewarm.



We’ve already warmed (at the surface) by about 0.9 deg.C since 1900. Earth is currently warming at 1.7 deg.C/century. Over the next century it’s extremely likely that we’ll warm even faster. But even if we only continue to warm at the present rate, that will add another 1.7 deg.C to global average surface temperature, making a total of 2.6 deg.C. I doubt that will be the case; in fact I consider the probability to be extremely low (less than 5%), but it represents the best case we can realistically hope for. The idea that this is “lukewarm” and that it won’t spell major disaster for humanity is ludicrous. The global temperature change from full-glacial to full-interglacial conditions is about 5 deg.C. If you really believe that heating up the planet by half the temperature difference of a full glacial cycle would amount to “relatively modest climate changes,” then you’ve got no damn business influencing climate policy.
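The arithmetic behind those numbers takes only a few lines to check (all values as stated above; treating the current rate as constant is, again, the optimistic case):

```python
# Back-of-envelope check of the numbers quoted above.
warmed_so_far = 0.9     # deg C of surface warming since 1900
current_rate = 1.7      # deg C per century
glacial_swing = 5.0     # deg C, full-glacial to full-interglacial

total_by_2100 = warmed_so_far + current_rate * 1.0   # one more century at the current rate
print(f"Warming since 1900 if the current rate simply continues: {total_by_2100:.1f} deg C")
print(f"As a fraction of a full glacial-to-interglacial change:  {total_by_2100 / glacial_swing:.0%}")
```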

I think such a change is overwhelmingly likely to bring disastrous changes for human civilization, especially for the availability of FOOD and WATER, and it doesn’t get any more basic than that. And that’s the best we can hope for — it could be far, far worse.