There’s a new paper, called Selective reporting and the Social Cost of Carbon, that is being lapped up with glee by the largely unskeptical. As I understand it, the basic argument is that if one analyses the published estimates of the Social Cost of Carbon, there is an indication of publication bias, which can then be used to estimate an unbiased Social Cost of Carbon.

When I noticed this, it rang a bell, so I went back through some things and discovered a similar paper with one co-author in common. This one is called Publication bias in Measuring Anthropogenic Global Warming and it is quite remarkable, in the “seriously, someone’s actually done this?” kind of way. When I first saw it, I decided not to discuss it, but thought I might now, as an illustration of what this newer paper has probably done.

The basic argument is related to regression toward the mean. If your initial sample is small, the result could be a long way from the “true” mean, with a large uncertainty, and could be either larger or smaller than the “true” mean. As you increase the sample size, the difference should get smaller (with results scattered both above and below the mean) and the uncertainty should reduce. The larger the sample, the closer the result should be to the “true” mean, and the more precise it should become. If, however, there is some kind of publication bias (for example, negative results don’t get published) then you would see the results becoming more precise from one side only, as illustrated by the figure on the right.
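This funnel-plot logic is easy to demonstrate with a toy simulation (the numbers below are purely illustrative assumptions, not anything from the papers): generate many “studies” of varying sample size around a true mean, then censor the negative results and see how the published estimates behave.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma = 1.0, 5.0  # assumed true effect and per-observation spread

# Simulate 2000 "studies" with sample sizes from 10 to 1000.
n = rng.integers(10, 1000, size=2000)
se = sigma / np.sqrt(n)                # standard error shrinks with sample size
estimate = rng.normal(true_mean, se)   # each study's point estimate

# Publication filter: suppose only positive results get published.
published = estimate > 0

print(f"mean of all estimates:       {estimate.mean():.2f}")
print(f"mean of published estimates: {estimate[published].mean():.2f}")

# Among the most precise studies (largest n), the censoring barely bites,
# so their published mean sits near the truth; the noisy half is pulled up.
precise = se < np.median(se)
print(f"published mean, precise half: {estimate[published & precise].mean():.2f}")
print(f"published mean, noisy half:   {estimate[published & ~precise].mean():.2f}")
```

The published estimates average above the true mean, and the upward bias is concentrated in the imprecise studies, which is exactly the one-sided funnel asymmetry the argument relies on.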

What they do in this study is to apply the same argument to estimates of climate sensitivity. What they find – as shown in the figure to the left – is that there is a tendency for the more precise estimates to have a lower climate sensitivity. They therefore conclude that there is evidence of publication bias.

They then analyse this and conclude that the unbiased climate sensitivity is somewhere between 1.4°C and 2.3°C, despite the published estimates having a mean of 3.3°C. What they, of course, fail to realise is that the reason the left-hand side is missing is not indicative of a publication bias; it’s because it is very difficult to develop a physically plausible argument as to why climate sensitivity should be this low. That the lower published estimates tend to be more precise is largely irrelevant. This is not simply a sampling issue.
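I haven’t checked the paper’s exact procedure, but corrections of this kind are typically done with a funnel-asymmetry meta-regression: regress the published estimates on their standard errors and read off the intercept as the hypothetical result of an infinitely precise study. A minimal sketch, on synthetic data with an assumed one-sided publication filter (nothing here is the paper’s actual data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "published" literature: true effect 1.0, standard errors spanning
# roughly 0.1 to 2, and only positive results surviving to publication.
se = 10 ** rng.uniform(-1, 0.3, size=500)
est = rng.normal(1.0, se)
se, est = se[est > 0], est[est > 0]    # the publication filter

# PET-style correction: fit estimate = intercept + slope * se.
# The intercept is the extrapolation to a study with zero standard error.
slope, intercept = np.polyfit(se, est, 1)
print(f"naive mean of published estimates: {est.mean():.2f}")
print(f"PET intercept ('corrected'):       {intercept:.2f}")
```

Mechanically, the intercept lands below the naive mean, which is the sense in which such methods “correct” for bias. The point of the post stands, though: the extrapolation is only meaningful if the asymmetry really is a sampling artefact rather than, say, physics ruling out the low end.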

So, quite a remarkable idea. Analyse the published results to show that there is some kind of bias in the published estimates, and then use this to present what is meant to be some kind of unbiased estimate. Now, of course, I haven’t gone through their Social Cost of Carbon paper, but if the Anthropogenic Global Warming one is anything to go by, I won’t be taking it too seriously. I really don’t think the scientific method includes a section that says “use completely non-existent publications as part of your estimate”. I would argue that in any sensible scenario we should base our understanding of these topics on what is actually published, not on what is neither published nor – as far as we’re aware – actually in existence.