People have quite reasonably asked about my connection with the surface stations article, given my puzzlement at Anthony’s announcement last week. Anthony described my last-minute involvement here.

As readers are probably aware, I haven’t taken much issue with temperature data other than pressing the field to be more transparent. The satellite data seems quite convincing to me over the past 30 years and bounds the potential impact of contamination of surface stations, a point made in a CA post on Berkeley last fall here. Prior to the satellite period, station histories are “proxies” of varying quality. Over the continental US, the UAH satellite record shows a trend of 0.29 deg C/decade (TLT) from 1979-2008, significantly higher than their GLB land trend of 0.173 deg C/decade. Over land, amplification is negligible.
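The comparison above is just a ratio of the two quoted trends; a two-line check (using only the numbers stated in the paragraph) shows the size of the difference:

```python
# Comparing the two UAH trends quoted above: the continental-US TLT trend
# against the UAH global (GLB) land trend, both 1979-2008.
us_trend = 0.29    # deg C/decade, UAH TLT over the continental US
glb_land = 0.173   # deg C/decade, UAH GLB land

ratio = us_trend / glb_land
print(f"CONUS trend is {ratio:.2f}x the global land trend")  # about 1.68x
```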

Anthony had asked me long ago to help with the statistical analysis, but I hadn’t followed up. I had looked at the results in 2007, but hadn’t kept up with it subsequently.

When Anthony made his announcement of big news, I volunteered to check the announcement, presuming that it had something to do with FOIA. Mosher and I were chatting that afternoon, assigning probabilities to various possibilities; each of us put about a 20% chance on it having anything to do with the surface stations project.


Anthony sent me his draft paper. In his cover email, he said that the people who had offered to do statistical analysis hadn’t done so (each for valid reasons). So I did some analysis very quickly, which Anthony incorporated into the paper, making me a coauthor, though my contribution was last-minute and limited. I haven’t parsed the rest of the paper.

I hadn’t been involved in the surface stations paper until after his announcement, though I was familiar with the structure of the data from earlier studies.

I support the idea of getting the best quality metadata on stations and working outward from stations with known properties, as opposed to throwing undigested data into a hopper and hoping to get the answer. I think that breakpoint methods, whatever their merits ultimately prove to be, need to be carefully parsed and verified against actual data with known properties (as opposed to mere simulations, where you may not have thought of all the relevant confounding factors). To that extent, Anthony’s project is a real contribution, whatever the eventual results.
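As a toy illustration of the verification idea, one can plant a known step in synthetic data and check that a simple mean-shift detector recovers it. This is my own minimal sketch, not any of the published breakpoint algorithms; the noise level, step size and function name are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies with a known +0.5 deg step planted at index 120
n, true_break, step = 240, 120, 0.5
x = rng.normal(0.0, 0.2, n)
x[true_break:] += step

def best_breakpoint(y):
    """Return the index maximizing the squared t-statistic of a mean shift."""
    best_k, best_t2 = None, -np.inf
    for k in range(12, len(y) - 12):          # keep both segments non-trivial
        a, b = y[:k], y[k:]
        se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
        t2 = (b.mean() - a.mean()) ** 2 / se2
        if t2 > best_t2:
            best_k, best_t2 = k, t2
    return best_k

k = best_breakpoint(x)
print(k)  # should land at or very near the planted break at 120
```

The point of testing on data with a planted break is exactly the one made above: you know the true answer, so a method that misses it (or finds spurious breaks) is exposed immediately, with no unexamined confounders.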

It seemed to me that random effects methodology could be applied to see the impact on trends of the various complicating factors: ratings category, urbanization class and equipment class. (Using the grid region as a separate random effect even provides an elegant way of doing regional accounting within the algorithm.) This yielded apparent confirmation: distinct effects for urbanization class, ratings and max-min, each in the expected direction.



Figure 1. Random Effects of Urbanization, Rating, Equipment, Max-Min.
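The shrinkage idea behind a random-effects estimate can be sketched in a few lines. This is a simplified one-factor stand-in for the full crossed-effects fit described above, not the actual analysis; the category labels, trend values and sample sizes are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: station trends (deg C/decade) grouped by a single
# factor, e.g. a ratings category. A random-effects estimate shrinks each
# category mean toward the grand mean, by more when the category is noisy.
groups = {
    "CRN1-2": rng.normal(0.25, 0.05, 30),
    "CRN3":   rng.normal(0.30, 0.05, 50),
    "CRN4-5": rng.normal(0.35, 0.05, 40),
}

all_vals = np.concatenate(list(groups.values()))
grand = all_vals.mean()
within = np.mean([v.var(ddof=1) for v in groups.values()])     # within-group variance
between = np.var([v.mean() for v in groups.values()], ddof=1)  # between-group variance

for name, v in groups.items():
    # Shrinkage weight: how much the group mean is trusted over the grand mean
    w = between / (between + within / len(v))
    effect = w * (v.mean() - grand)
    print(f"{name}: random effect {effect:+.3f}")
```

A fixed-effects fit would take each category mean at face value; the random-effects weight `w` pulls poorly-sampled categories toward the overall mean, which is what makes the per-category effects comparable across factors with very different station counts.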

Whenever I’m working on my own material, I avoid arbitrary deadlines and like to mull things over for a few days. Unfortunately that didn’t happen in this case. There is a confounding interaction with TOBS that needs to be allowed for, as has been quickly and correctly pointed out.

When I had done my own initial assessment of this a few years ago, I had used TOBS versions and am annoyed with myself for not properly considering this factor. I should have noticed it immediately. That will teach me to keep to my practice of not rushing. Anyway, now that I’m drawn into this, I’ll have to carry out the TOBS analysis, which I’ll do in the next few days (at the expense of some interesting analysis of Esper et al.).

I have commented from time to time on US data histories in the past – e.g. here, here and here – each of which was done less hurriedly than the present analysis.



