Most of you were probably wondering if I would comment on this recent paper:

A STATISTICAL ANALYSIS OF MULTIPLE TEMPERATURE PROXIES: ARE RECONSTRUCTIONS OF SURFACE TEMPERATURES OVER THE LAST 1000 YEARS RELIABLE?

BY BLAKELEY B. MCSHANE AND ABRAHAM J. WYNER (AOAS1001-014R2A0)

In my opinion it is a landmark paper in its effort to quantify the uncertainty in the proxies. While the paper appears to be about paleoclimate reconstructions, the limitations of reconstructions it re-exposes so dramatically actually point directly at the models. I don't claim to have figured the whole thing out, and it isn't without its flaws. However, the work of these authors was more than extensive, with an excellent grasp of both statistical prediction and the quality of the raw data. In my case, I'm very lucky to have already put in the groundwork with the Mann08 data, which made the paper very easy to read. At the beginning of the paper the authors, in an almost blog-like fashion, took time to frame the impetus behind the work.

This effort to reconstruct our planet's climate history has become linked to the topic of Anthropogenic Global Warming (AGW). On the one hand, this is peculiar since paleoclimatological reconstructions can provide evidence only for the detection of AGW and even then they constitute only one such source of evidence. The principal sources of evidence for the detection of global warming and in particular the attribution of it to anthropogenic factors come from basic science as well as General Circulation Models (GCMs) that have been fit to data accumulated during the instrumental period (IPCC, 2007). These models show that carbon dioxide, when released into the atmosphere in sufficient concentration, can force temperature increases. On the other hand, the effort of world governments to pass legislation to cut carbon to pre-industrial levels cannot proceed without the consent of the governed and historical reconstructions from paleoclimatological models have indeed proven persuasive and effective at winning the hearts and minds of the populace. Consider Figure 1 which was featured prominently in the Intergovernmental Panel on Climate Change report (IPCC, 2001) in the summary for policy makers. The sharp upward slope of the graph in the late 20th century is visually striking, easy to comprehend, and likely to alarm. The IPCC report goes even further:

Uncertainties increase in more distant times and are always much larger than in the instrumental record due to the use of relatively sparse proxy data. Nevertheless the rate and duration of warming of the 20th century has been much greater than in any of the previous nine centuries. Similarly, it is likely that the 1990s have been the warmest decade and 1998 the warmest year of the millennium. [Emphasis added]

It's so true. Mann wouldn't have become famous if the hockey stick had no meaning (as I'm sure he's quietly wishing), or if the result weren't so shocking in appearance. If you're new to the discussion: when hockey sticks have been discredited, the argument from climate science™ usually shifts to "it didn't matter anyway because of all the other evidence." In reality, they do matter. They matter for model hindcasts, which are the entire basis for the future projections.

The paper concludes in part:

On the one hand, we conclude unequivocally that the evidence for a "long-handled" hockey stick (where the shaft of the hockey stick extends to the year 1000 AD) is lacking in the data. The fundamental problem is that there is a limited amount of proxy data which dates back to 1000 AD; what is available is weakly predictive of global annual temperature. Our backcasting methods, which track quite closely the methods applied most recently in Mann (2008) to the same data, are unable to catch the sharp run up in temperatures recorded in the 1990s, even in-sample. As can be seen in Figure 15, our estimate of the run up in temperature in the 1990s has a much smaller slope than the actual temperature series. Furthermore, the lower frame of Figure 18 clearly reveals that the proxy model is not at all able to track the high gradient segment. Consequently, the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.

It's about as damning a description of the paleo branch of climate science™ as you could ask for. There are numerous little daggers hidden in the text, too.

As an initial test, we compare the holdout RMSE using the proxies to two simple models which only make use of temperature data, the in-sample mean and ARMA models. First, the proxy model and the in-sample mean seem to perform fairly similarly, with the proxy-based model beating the sample mean on only 57% of holdout blocks. A possible reason the sample mean performs comparably well is that the instrumental temperature record has a great deal of annual variation which is apparently uncaptured by the proxy record. In such settings, a biased low variance predictor (such as the in-sample mean) can often have a lower out-of-sample RMSE than a less biased but more variable predictor. Finally, we observe that the performance on different validation blocks …

Considering that the majority of the Mann08 proxy record is trees, it's an interesting point that warmer years aren't individually captured in the tree rings. What exactly prevents trees from reacting annually to temperature is another mystery. By interesting, I mean it's a point that drives me crazy – haha.
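The bias–variance point in the quoted passage is worth putting numbers on. Here is a minimal sketch of the idea, my own toy setup with made-up parameters and not anything from the paper: when the target has a lot of annual noise that a predictor only partly captures, the boring in-sample mean (biased, zero variance) can easily post a lower holdout RMSE than a noisier, less biased "proxy" predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "temperature": a weak slow trend buried in large annual noise.
n_train, n_test = 100, 50
signal = np.linspace(0.0, 0.5, n_train + n_test)           # slow trend (the signal)
temp = signal + rng.normal(0.0, 1.0, n_train + n_test)     # big year-to-year variation

# Predictor 1: the in-sample mean (biased, zero variance).
mean_pred = np.full(n_test, temp[:n_train].mean())

# Predictor 2: a noisy "proxy-based" predictor that tracks the signal
# but adds its own noise (less biased, more variable).
proxy_pred = signal[n_train:] + rng.normal(0.0, 1.2, n_test)

def rmse(pred):
    return np.sqrt(np.mean((temp[n_train:] - pred) ** 2))

print("holdout RMSE, in-sample mean:", rmse(mean_pred))
print("holdout RMSE, noisy proxy:   ", rmse(proxy_pred))
```

With these arbitrary settings the flat mean wins, which is exactly the situation the authors describe for the instrumental record versus the proxies.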

Correlation is not a physical process, and unfortunately this paper suffers a bit from the combination of stats and reality. No, this isn't the time for "correlation isn't causation," but it is a time to consider the non-natural comparison of datasets that correlation represents. It's rather entertaining to see people write with excitement that two unrelated positive trends have a correlation greater than 0.1 – for instance, global economic output and the ever-improving blogging experience at the Air Vent. Why it's surprising is where the topic leaves me a bit dumbfounded, and concerned that perhaps the authors didn't realize the extent of the infilling – hockeystickization – of the Mann08 proxy data used in this paper.
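To see how cheap that kind of correlation is, here is a minimal sketch (mine, with completely arbitrary numbers): two series that share nothing except an upward drift routinely correlate far above 0.1, simply because the trend dominates the comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150  # think of it as ~150 "years"

# Two series that have nothing in common except an upward drift.
economy = np.linspace(0, 3, n) + rng.normal(0, 0.4, n)    # "global economic output"
blogging = np.linspace(0, 2, n) + rng.normal(0, 0.3, n)   # "blogging experience"

r = np.corrcoef(economy, blogging)[0, 1]
print(f"correlation of two unrelated trending series: {r:.2f}")  # typically ~0.7-0.85
```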

What I mean is, when one regression setting is confirmed by a good correlation of unused data against the used data, something improved seems to be going on. However, when the data has a general upslope – substantially pasted on by RegEM infilling – who should be surprised by a bit of correlation? What's more, the regression performed seems to me to be another hockey-stick-seeking missile. The good news is that their hockey stick was verified, or in this case shown to be unverified, using actual statistics.

The only thing which really bothered me was the complete ignoring of the variance loss created by their methods, as well as others'. They apparently miss the point that methods for extracting a signal from incredibly noisy data, based on a shortened calibration period, will preferentially select autocorrelated noise. For the math-enhanced reader, autocorrelated is used here to mean anything with a temporally persistent signal; for those of us who don't get lost in math nuance, "noise with a trend."
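Here's a minimal sketch of the effect I mean. This is my own construction, not the paper's method and not Mann08's: pure AR(1) red noise with no temperature signal whatsoever, screened against a short "calibration" temperature record with an arbitrary correlation threshold. The screened average picks up the calibration-period trend (a blade) while its variance over the rest of the record collapses relative to any individual series (a flat handle), which is the variance loss being ignored.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_proxies = 1000, 500
cal = slice(900, 1000)  # a short modern "calibration" window

# Calibration-period "temperature": an upward trend plus a little noise.
temp_cal = np.linspace(0, 1, 100) + rng.normal(0, 0.2, 100)

# Pure AR(1) red noise "proxies" -- no temperature signal at all.
proxies = np.zeros((n_proxies, n_years))
for t in range(1, n_years):
    proxies[:, t] = 0.9 * proxies[:, t - 1] + rng.normal(0, 1, n_proxies)

# Screen: keep only the proxies that happen to correlate with the calibration target.
corrs = np.array([np.corrcoef(p[cal], temp_cal)[0, 1] for p in proxies])
passed = proxies[corrs > 0.3]
recon = passed.mean(axis=0)  # the screened-and-averaged "reconstruction"

print(f"proxies passing the screen: {len(passed)} of {n_proxies}")
print(f"std of a single proxy over the handle (years 0-899): {proxies[0, :900].std():.2f}")
print(f"std of the 'reconstruction' over the handle:          {recon[:900].std():.2f}")
print("rise of the 'reconstruction' across the calibration window:",
      f"{recon[cal][-10:].mean() - recon[cal][:10].mean():.2f}")
```

A blade appears in the calibration window and the handle is crushed flat, even though there is no signal anywhere in the input.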

They really ignored it too:

Alternatively, the number of proxies can be lowered through a threshold screening process (Mann et al., 2008) whereby each proxy sequence is correlated with its closest local temperature series and only those proxies whose correlation exceeds a given threshold are retained for model building. This is a reasonable approach, but, for it to offer serious protection from overfitting the temperature sequence, it is necessary to detect "spurious correlations".

As far as the "reasonable approach" goes, all I can say is pure bovine scatology and hand waving; while it is friendly hand waving toward those who are otherwise critiqued, it is still hand waving. It's like all these guys went to the same school of "chuck it if you don't like it." I'll take my lousy state school education over this kind of thing any day. All because of the notion that you might be able to detect "spurious" correlations. My god, that misses the point that correlations are a mathematical artifact, not a reality detector.

– Sorting a lot of long timeseries by their correlation over a short calibration window causes guaranteed variance loss in the rest.

– Scaling long timeseries by a least squares fit over a short calibration window causes guaranteed variance loss in the rest.

– Regressing long timeseries against a short calibration window causes guaranteed variance loss in the rest.

It's all the same thing, and it's still not understood. So, why all the noise, Jeff? After all, the paper did completely prove that the proxy reconstructions aren't doing their job, right?

Well yeah, but look at this plot:

Now, I cannot explain the continuous rise of the pre-calibration handle of the hockey stick, but these authors have done absolutely nothing that I can see to address why the blade/handle relationship is guaranteed by the math. I don't claim to have a high-quality understanding of the Lasso method yet, but the variance loss that will occur goes undiscussed.

Anyway, my impression of the paper is that it has a lot of appropriately critical wording inside, but it also suffers from a bad reconstruction which was not appropriately criticized within the paper itself. They had to start somewhere, but it seems to me that while citing Von Storch and Zorita (2004), they missed the crux of the argument. Maybe I'm wrong.

A key issue, and a positive in the paper, was its treatment of the quality of the signal in the proxies:

Hence, the real proxies–if they contain linear signal on temperatures–should outperform our pseudo-proxies, at least with high probability.

Which is later demonstrated, in Table 1, not to be the case.

In general, the pseudo-proxies are selected about as often as the true proxies. That is, the Lasso does not find that the true proxies have substantially more signal than the pseudoproxies.

Which is a conclusion that I've come to here over the past two years: there really isn't much signal in these proxies. Not enough to separate fake proxies – series with similar autocorrelation properties but no signal – from the temperature proxies.
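The pseudo-proxy test is easy to sketch. Below is a minimal, hypothetical version of the idea – mine, not the paper's actual procedure, data, or Lasso setup, and it assumes scikit-learn is available – mixing a few weak-signal "proxies" with pure red-noise pseudo-proxies of similar autocorrelation, fitting a Lasso against a short temperature record, and counting which kind gets selected. When the signal is as weak as it appears to be here, the selection counts tend to come out about the same.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n_years, n_real, n_fake = 120, 20, 20

def ar1(n, phi=0.9):
    """Red noise with the kind of persistence seen in proxy series."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# A short instrumental-style "temperature" record with an upward trend.
temp = np.linspace(0, 1, n_years) + rng.normal(0, 0.2, n_years)

# "Real" proxies: a weak temperature signal buried in red noise.
real = np.column_stack([0.3 * temp + ar1(n_years) for _ in range(n_real)])
# Pseudo-proxies: red noise with similar autocorrelation, no signal at all.
fake = np.column_stack([ar1(n_years) for _ in range(n_fake)])

X = np.hstack([real, fake])
model = LassoCV(cv=5).fit(X, temp)

selected = np.abs(model.coef_) > 1e-8
print("real proxies selected:  ", selected[:n_real].sum(), "of", n_real)
print("pseudo-proxies selected:", selected[n_real:].sum(), "of", n_fake)
```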

I think from my recent work here on Mann07, though, we might be able to make an engineering-style estimate of the true signal contained in the Mann08 proxies. That is something I haven't read in the literature (not that it isn't there), but since we have climate model data and the ability to add autocorrelation similar to that found in the proxies, we can make an estimate. This estimate will likely be the subject of my next post.

Final thoughts:

Good paper, good conclusions, containing what is still an ugly reconstruction method. Like the MMH10 model paper, this will need to be addressed by the community rather than ignored. These statistically correct critiques of paleoclimate are becoming more common and will continue until the issue is properly addressed by the consensus community. If the proxies don't contain enough signal to be better predictors of temperature than sophisticated noise, then models which use reconstructions to verify the accuracy of hindcasts have got nothing to be verified against!

Or as Briggs quotes from MW10:

Climate scientists have greatly underestimated the uncertainty of proxy-based reconstructions and hence have been overconfident in their models.



