“The surfacestations paper – statistics primer” (May 12, 2011). Was Anthony Watts hiding his light under a bushel when he announced the surfacestations paper was in press? Is this post the meaty one, and the announcement mis-fire just rope-a-dope? Here comes co-author Dr. John Nielsen-Gammon’s science! (Apparently John, a meteorologist who is the politically-appointed “Texas State Climatologist”, came on board after Anthony’s own statistical efforts were tossed.) [Update: apologies for following Anthony’s misspelling of his co-author’s name.]

First of all, John admits a “subtle point”. It turns out they “didn’t assess the differences in individual station measurements”, which unfortunately is exactly what Anthony had been shouting about for years. Oh really?

John also admits that “NCDC’s preliminary analysis of siting quality used a gridded analysis, but we checked and our numbers weren’t very different.” (emphasis mine). Oh really?

Another curious admission from John is that “you have to work with anomalies or changes over time (first differences) rather than the raw temperatures themselves.” This seems strange, because the denialist howling has always been that only the raw temperatures can be trusted.
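For readers wondering why anomalies or first differences are the statistically sound choice, here’s a minimal sketch with made-up numbers (the station names, temperatures, and offsets are invented for illustration, not taken from the paper): two stations at different elevations have wildly different raw temperatures but identical trends, and both transformations strip out the baseline offset that raw comparisons choke on.

```python
# Hypothetical illustration: two stations with different baselines but the
# same warming trend. The raw values disagree by 6 degrees; the anomalies
# and first differences are identical, which is why trend comparisons use them.
valley = [10.0, 10.2, 10.4, 10.6]   # made-up annual means, deg C
hilltop = [4.0, 4.2, 4.4, 4.6]      # cooler site, identical trend

def anomalies(series):
    """Departures from the station's own mean over the period."""
    base = sum(series) / len(series)
    return [t - base for t in series]

def first_differences(series):
    """Year-to-year changes; the absolute level drops out entirely."""
    return [b - a for a, b in zip(series, series[1:])]

# anomalies(valley) and anomalies(hilltop) both come out (to rounding)
# as [-0.3, -0.1, 0.1, 0.3]; first_differences of each is [0.2, 0.2, 0.2]
```

The point being, of course, that “only raw temperatures can be trusted” was always a slogan, not a methodology.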

Fourth, John tells us that “a station should matter more in the overall average if it is far from other stations, and matter less if lots of other stations are nearby.” Isn’t weighting stations how the mainstream climate scientists rigged the numbers? Oh dear, there’s a pattern emerging. Regular science.
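Here’s a toy version of that entirely un-sinister weighting idea (the coordinates, trends, and the crude density rule are all made up for illustration; real gridded analyses are more careful): an isolated station gets full weight, while stations in a tight cluster split theirs, so the cluster doesn’t swamp the average.

```python
import math

# Hypothetical sketch: weight each station by the inverse of the number of
# stations (itself included) within some radius. An isolated station then
# counts as much as a whole cluster combined.
stations = {            # made-up positions, arbitrary units
    "A": (0.0, 0.0),    # isolated
    "B": (10.0, 0.0),   # B, C, D form a tight cluster
    "C": (10.1, 0.0),
    "D": (10.0, 0.1),
}
trends = {"A": 1.0, "B": 2.0, "C": 2.0, "D": 2.0}  # made-up values

def weight(name, radius=1.0):
    x, y = stations[name]
    neighbours = sum(
        1 for (px, py) in stations.values()
        if math.hypot(px - x, py - y) <= radius
    )
    return 1.0 / neighbours   # the station counts itself, so weight <= 1

w = {s: weight(s) for s in stations}
weighted_mean = sum(w[s] * trends[s] for s in stations) / sum(w.values())
# Station A (weight 1) balances the cluster (three stations at 1/3 each),
# pulling the mean to 1.5 instead of the unweighted 1.75
```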

What was the ingenious analysis that pulled all this together into the final nail in the coffin of global warming? The “Monte Carlo approach”.

In fact, it’s so simple you don’t need to know statistics to understand it. In John’s words: “Given two classes of stations whose trends needed comparing, I randomly assigned stations to each class, while making sure that the total number of stations in each class stayed the same and that each climate region had at least two stations of each class. I then computed and stored the difference in trends. I then repeated this process a total of 10,000 times.”
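He’s describing a perfectly ordinary permutation test. A minimal sketch, with invented station trends (the numbers below are made up, and this toy version preserves only the class sizes, not the per-region station counts the quote mentions): shuffle the class labels 10,000 times and see how often chance alone produces a trend difference as large as the observed one.

```python
import random

# Hypothetical trends in deg C/decade for two siting classes (made up)
good_trends = [0.30, 0.28, 0.31, 0.29]
bad_trends = [0.32, 0.29, 0.33, 0.30, 0.31]

observed = (sum(bad_trends) / len(bad_trends)
            - sum(good_trends) / len(good_trends))

pooled = good_trends + bad_trends
n_good = len(good_trends)
trials = 10_000
rng = random.Random(0)       # seeded for reproducibility
count = 0
for _ in range(trials):
    rng.shuffle(pooled)      # randomly reassign stations to the two classes,
                             # keeping the number in each class the same
    diff = (sum(pooled[n_good:]) / len(bad_trends)
            - sum(pooled[:n_good]) / n_good)
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials     # fraction of random splits at least as extreme
```

If the observed difference lands in the tail of that distribution of 10,000 random differences, it’s unlikely to be chance; if it sits in the middle, siting class doesn’t matter. Regular science again.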

Then of course you cherry-pick the few comparisons that randomly show the trend you want to claim is real and ignore the other 9,990. This is what denialist statistician-to-the-stars Steve McIntyre did in his attacks on Dr. Mann’s temperature reconstructions, and was so ham-handedly reproduced by Dr. Edward Wegman.

So what’s left? Weak mutterings about how “many stations underwent simultaneous instrumentation and siting changes” in the 1980s. Apparently no one knew this (not).

This paper is sounding more and more like an ass-covering way to justify several years of wasted and misguided effort. The demonized scientific process has pinned Anthony and his team like bugs under a magnifying glass, holding them accountable for every squeak. The result? Laryngitis.