Chip Knappenberger has published Lindzen’s review correspondence with PNAS at Rob Bradley’s blog here. Most CA readers will be interested in this and I urge you to read the post, taking care to consult the attachments. (I would have preferred that the post include some excerpts from the attachments.)

The post focuses to a considerable extent on PNAS’ departures from its review policy, but there are some other interesting features in the correspondence, which I’ll discuss here, referring readers to the original post for the PNAS issues.

A PNAS letter to its members observes:

very few Communicated and Contributed papers are rejected by the Board. Last year approximately 800 Communicated and 800 Contributed papers were submitted, of which only 32 Communicated and 15 Contributed papers were rejected. These numbers are not exceptional by historical standards extending at least the past 15 years

The rejection of Lindzen’s paper is an unusual event. NAS members submitting a paper are asked to provide two reviews (they are permitted to select their own reviewers). NAS policy on referees says:

we have adopted the NSF policy concerning conflict of interest for referees (http://www.pnas.org/site/misc/coi.shtml), which states that individuals who have collaborated and published with the author in the preceding four years should not be selected as referees.

Both Happer and Chou, according to Lindzen, met this criterion. (One of the overlooked implications of Wegman’s analysis is that the extensive collaboration between paleoclimate authors makes it that much harder to find referees who meet NSF standards – it’s too bad that they didn’t apply this criterion in their analysis.)

PNAS rejected the referees as follows:

Both scientists are formally eligible for refereeing according to the PNAS rules, but one of them (WH) is certainly not an expert for the topic in question and the other one (MDC) has published extensively on the very subject together with Lindzen. So, in a sense, he is reviewing his own work… it is good scientific practice to involve either some of those who have raised the counter-arguments (and may be convinced by an improved analysis) in the review or to solicit at least the assessment of leading experts that have no direct or indirect affiliation with the authors.

Instead of their normal cozy practices, PNAS responded by suggesting that the submission be reviewed by “Susan Solomon, Kevin Trenberth, Gavin Schmidt, James G. Anderson and Veerabhadran Ramanathan”, saying that “the Board will seek the comments of at least one of these reviewers unless you have any specific objections to our contacting these experts”. Lindzen objected that PNAS’ characterization of Happer and Chou was not factual. In the end, PNAS obtained four reviews, two of which were respectful, recommending reworking, and two of which were acrimonious. Lindzen surmised that PNAS, contrary to its standard practices, had retained reviewers to whom he had objected.

Some of the comments in the reviews – see here – are intriguing. For example, Reviewer 2 stated:

The poor state of cloud modeling in GCMs has been amply demonstrated elsewhere and the effect of this on climate sensitivity is well documented and acknowledged.

While cloud uncertainties are mentioned in IPCC AR4, I would not say that the effect of various cloud parameterizations on climate sensitivity is “well documented” in IPCC. Quite the opposite. IPCC’s description of clouds is, in my opinion, far too cursory given the importance of the problem.

The reviewer continues with the following list of problems with data and theory in the area:

While the stated result is dramatic, and a remarkable departure from what analysis of data and theory has so far shown, I am very concerned that further analysis will show that the result is an artifact of the data or analysis procedure. The result comes out of a multi-step statistical process. We don’t really know what kind of phenomena are driving the SST and radiation budget changes, and what fraction of the total variance these changes express, since the data are heavily conditioned prior to analysis. We don’t know the direction of causality – whether dynamically or stochastically driven cloud changes are forcing SST, or whether the clouds are responding to SST change. Analysis of the procedure suggests the former is true, which would make the use of the correlations to infer sensitivity demonstrably wrong, and could also explain why such a large sensitivity of OLR to SST is obtained when these methods are applied.

Let’s stipulate that all of this is true. Shouldn’t this then be stated prominently in IPCC? The IPCC SPM says “Cloud feedbacks remain the largest source of uncertainty”, but this hardly does justice to the long list of problems that worry Reviewer 2.

And doesn’t Reviewer 2 prove too much here? If all of these problems need to be solved prior to publishing an article in the field, wouldn’t this apply to all articles, not just ones whose implications are low sensitivity?
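As an aside for readers who want to see the causality point concretely, here is a minimal synthetic sketch – my own toy illustration with invented parameter values, not Lindzen’s actual procedure. In a simple energy-balance model where stochastic cloud variations force SST, a naive regression of the radiation anomaly on temperature recovers something very different from the true feedback parameter, which is precisely the kind of artifact that Reviewer 2 worries about.

```python
import numpy as np

# Toy energy-balance illustration of the reviewer's causality concern.
# All parameter values are invented for illustration; this is NOT
# Lindzen's method. Model: C * dT/dt = f_cloud - lam_true * T, with the
# measured radiation anomaly R = lam_true * T - f_cloud (feedback
# response minus the cloud forcing itself).

rng = np.random.default_rng(0)
n_months = 1200
lam_true = 3.0   # assumed true feedback parameter (W/m^2/K)
C = 30.0         # assumed effective heat capacity (W month / m^2 / K)

T = np.zeros(n_months)  # SST anomaly (K)
R = np.zeros(n_months)  # net outgoing radiation anomaly (W/m^2)
for t in range(1, n_months):
    f_cloud = rng.normal(0.0, 2.0)                      # stochastic cloud forcing (W/m^2)
    T[t] = T[t - 1] + (f_cloud - lam_true * T[t - 1]) / C  # SST responds to imbalance
    R[t] = lam_true * T[t] - f_cloud                    # what a satellite would "see"

# Naive regression slope of R on T -- the correlation-based
# "sensitivity of OLR to SST" -- differs badly from lam_true,
# because the cloud noise drives T rather than responding to it.
lam_est = np.polyfit(T[1:], R[1:], 1)[0]
print(f"true feedback: {lam_true:.1f} W/m^2/K; regression estimate: {lam_est:.1f}")
```

With these invented numbers the regression slope even comes out negative. The point is not the specific values, but that a correlation between radiation and SST does not by itself identify the feedback when the direction of causality is unknown.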

Reviewer 2 complains that methodological details are inadequate:

Sufficient description is necessary so that another experimenter could reproduce the analysis exactly. I don’t think I could reproduce the analysis based on the description given. For example, exactly how were the intervals chosen? Was there any subjectivity introduced?

Look, I’m highly supportive of this type of criticism. Lindzen disputes the criticism. But it is hardly standard practice in climate science to provide adequate methodology, let alone data. I’ve unsuccessfully sought assistance from journals in getting data. I’m all in favor of replication and hope that this precedent extends to the Team as well. Several years ago, I asked PNAS to require Lonnie Thompson to provide a detailed archive of Dunde and other data so that inconsistent versions could be reconciled. PNAS refused.

The more sympathetic reviewers wanted to understand why Lindzen’s results differed from Trenberth’s and asked for a reconciliation:

I feel that the major problem with the present paper is that it does not provide a sufficiently clear and systematic response to the criticisms voiced following the publication of the earlier paper by the same authors in GRL, which led to three detailed papers critiquing those findings.

and

1) If the paper were properly revised, it would meet the top 10% category. 2) The climate feedback parameter is of general interest. 3) I answered no, because the exact same data have been used by others to get an opposing answer and I do not see any discussion or evidence as to why one is correct and the other is not.

That point seems reasonable enough to me. However, when I asked that IPCC provide similar reconciliation of Polar Urals versus Yamal, Briffa said that it would be “inappropriate” to do so, and that was that.

While Lindzen could have accommodated the last two reviewers, he decided that it would be impossible to accommodate the first two, and he submitted elsewhere.

Compare these reviews to Jones’ puffball reviews, which were some of the most important Climategate documents. Prior to Climategate, people may have suspected that close collaborators were reviewing one another’s work (as Wegman had hypothesized), but no one knew for sure. People may have suspected that pals gave one another soft reviews, but no one knew for sure. Jones’ reviews of submissions by Mann, by Schmidt, by Santer were proof.
