Like most of us, I’ve been a bit taken aback by the ritual seppuku of young academic Wolfgang Wagner, formerly editor of Remote Sensing, for the temerity of casting a shadow across the path of climate capo Kevin Trenberth. It appears that Wagner’s self-immolation has only partly appeased Trenberth, who, like an Oriental despot, remains unamused.

Spencer and Braswell 2011, the stone presently in Trenberth’s shoe, is, to a very considerable extent, a critique of Dessler 2010 (Science). Over the past few days, I requested data from the authors of both articles and was promptly supplied with it by both. (I remind readers that Dessler, almost uniquely in the climate community, agreed with my request that IPCC AR4 Review Comments be placed online, rather than IPCC’s original plan to place one paper copy at Harvard Library).

Dessler 2010 argued (against predecessor Spencer and Braswell 2010) that there was a positive cloud feedback as follows:

The cloud feedback is conventionally defined as the change in ∆R_cloud per unit of change in ∆T_s. Figure 2A is a scatter plot of monthly values of ∆R_cloud versus ∆T_s, calculated using ECMWF interim meteorological fields. The slope of this scatter plot is the strength of the cloud feedback, and it is estimated by a traditional least-squares fit to be 0.54 ± 0.72 (2σ) W/m2/K (the slope using the MERRA is 0.46 ± 0.75 W/m2/K). Because I have defined downward flux as positive, the positive slope here means that, as the surface warms, clouds trap additional energy; in other words, the cloud feedback here is positive.

Dessler 2010 Figure 2A is shown below, with a markup overplotting the data sent yesterday by Dessler (to confirm an apples-and-apples comparison):



Figure 1. Dessler 2010 Figure 2A with overplot in red. Original Caption: "Fig. 2. (A) Scatter plot of monthly average values of ∆R_cloud versus ∆T_s using CERES and ECMWF interim data."

I placed the Dessler data online and re-did the regression reported in the Science article. (The peer reviewers at Science did not require Dessler to show the usual diagnostics for any regression.) Readers interested in handling the data for themselves can do so as follows (Spencer data also shown). [Update – Nick Stokes observes that, in the later discussion of Dessler 2010, Dessler observed that "the correlation between ∆R_cloud and ∆T_s is weak (r2 = 2%), meaning that factors other than Ts are important in regulating ∆R_cloud," a point that I missed in writing this post. In my opinion, statistical diagnostics should be reported with the regression, rather than passim in a later discussion, but the r2 was reported. The adjusted r2, a preferable diagnostic, was 0.01, as I previously observed.]

dess = read.csv("http://www.climateaudit.info/data/dessler/dessler_2010.csv") # collated from data sent Sep 6, 2011

fm = lm(eradr ~ erats, dess)

summary(fm)

spencer = read.csv("http://www.climateaudit.info/data/spencer/flux.csv")

I replicated the slope reported in the article. However, the diagnostic statistics were not imposing. The adjusted r2 was a Mannian 0.01045. With this poor a fit, the "confidence intervals" reported in the article and illustrated in Dessler 2010 Figure 2A are not ones that would comfort an independent statistical reviewer – not that Science requires independent statistical review for statistical calculations by climate scientists, despite Wegman's sensible recommendations on this matter a number of years ago.
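For readers who want to see where these diagnostics come from, they can be read straight off the fitted lm object. A minimal sketch in base R is shown below on a synthetic stand-in series (so the chunk is self-contained); for the actual calculation, apply the same calls to the fm object fitted above. The names dess.syn and fm.syn are placeholders of my own, not anything in the original data.

```r
# Diagnostics for a least-squares fit of the Dessler 2010 type, on synthetic
# placeholder data; apply the same calls to fm (fitted above) for the real series.
set.seed(2)
dess.syn <- data.frame(erats = rnorm(120, sd = 0.2))           # stand-in temperature anomalies (K)
dess.syn$eradr <- 0.5 * dess.syn$erats + rnorm(120, sd = 1)    # stand-in cloud flux anomalies (W/m2)
fm.syn <- lm(eradr ~ erats, dess.syn)
summary(fm.syn)$r.squared      # plain r-squared
summary(fm.syn)$adj.r.squared  # adjusted r-squared, the preferable diagnostic
confint(fm.syn)                # 95% intervals for intercept and slope
```

With noise this dominant, the adjusted r2 sits well below the plain r2, which is the point at issue: a "significant" slope can coexist with a fit that explains almost nothing.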

The CERES all-sky series used in Spencer and Braswell 2011 matches the corresponding CERES all-sky series in Dessler 2010 (with a few more months). The clear-sky versions differ – something that I'm presently trying to clarify. [Note – Troy draws attention to his (excellent) analysis at Lucia's here.]

The scatter plot in Dessler 2010 is based on an "instantaneous" relationship between CRF (as defined by both parties) and temperature. Spencer and Braswell 2011 observe that there is a lead-lag relationship between CRF and temperature, with a stronger correlation at a lag of 4 months than the instantaneous correlation, illustrating this in their Figure 3 as follows.



Figure 2. Spencer and Braswell 2011 Figure 3.
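The lead-lag comparison in the figure above can be sketched in a few lines of base R: correlate the flux series against the temperature series over a range of lags and look for the peak. The series below are synthetic placeholders of my own construction (flux built to lead temperature by 4 months), not the actual CERES and temperature anomalies; substitute the real columns to reproduce the figure.

```r
# Lead-lag correlation sketch on synthetic placeholder series; flux is
# constructed so that temperature responds to it 4 months later.
set.seed(3)
n <- 120
flux0 <- as.numeric(arima.sim(list(ar = 0.7), n + 4))
temp  <- 0.6 * flux0[1:n] + rnorm(n, sd = 0.5)  # temperature responds 4 months later
flux  <- flux0[5:(n + 4)]                       # so flux[t] pairs with temp[t + 4]
lagcor <- function(y, x, k) {
  # correlation of y[t] with x[t + k]; positive k means y leads x
  if (k >= 0) cor(y[1:(n - k)], x[(1 + k):n]) else lagcor(x, y, -k)
}
r <- sapply(0:6, function(k) lagcor(flux, temp, k))
round(r, 2)  # by construction, the correlation should peak at k = 4
```

The same loop applied to the actual flux and temperature anomalies is all that is needed to check where the correlation peaks, which is the substance of the dispute over instantaneous versus lagged regressions.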

Dessler 2011 Figure 2 (in press) substantially replicates Spencer and Braswell 2011 Figure 3, as shown below. The blue series shown by Dessler as a sort of outlier to the three red temperature series is the widely used HadCRUT3 series (which Spencer and Braswell 2011 had used). Dessler 2011 suggests that Spencer and Braswell's use of HadCRUT3 was done to emphasize the differences between observations and models. This seems a two-edged sword, since one might equally argue that Dessler 2010's omission of HadCRUT3 was done for no more worthy reason. In any event, given the wide usage of HadCRUT3, not least by IPCC, it doesn't seem to me that SB can be strongly criticized for using HadCRUT3. Dessler's diagram slightly understates the actual coefficients of Spencer and Braswell (shown in cyan).



Figure 3. Markup of Dessler 2011 Figure 2 showing Spencer and Braswell 2011 values in cyan. Original Caption: "Slope of the relation between TOA net flux and ΔTs, in W/m2/K as a function of lag between the data sets (negative lags mean that the flux time series leads ΔTs). The colored lines are from observations (covering 3/2000-2/2010 using the same TOA flux data, but different time series for ΔTs); the shading represents the 2σ uncertainty of two of the data sets. The black lines are from 13 fully coupled pre-industrial control runs; lines with the crosses '+' are models used by SB11. Following SB11, all data are 1-2-1 filtered. See the text for more details about the plot."



Dessler also observes that Spencer and Braswell 2011 showed a comparison with the three “most sensitive” and three “least sensitive” models (based on an earlier article by Forster.) Dessler observes that the discrepancy is less for several other models that did not meet these criteria, singling out GFDL CM 2.1, MPI ECHAM5 and MRI CGCM 2.3.2A as performing better according to his metric. Dessler observes that “this suggests that the ability to reproduce ENSO is what’s being tested here, not anything directly related to equilibrium climate sensitivity.” This might well be true and seems like a worthwhile comment. I’m not familiar enough with the data sets to opine on the matter.

It does seem to me that it’s been an awful lot easier for Dessler to publish this comment than it is to publish criticisms of Team articles. As CA readers are aware, important results of Santer et al 2008 did not hold up with updated data, but Team reviewers refused to permit publication. CA readers are also well aware of Steig’s concerted efforts to block publication of O’Donnell et al 2010 (which appeared only because of Ryan O’Donnell’s remarkable persistence.)

In the course of looking at the data, I noticed something interesting about the analysis of Dessler 2010 purporting to show a positive feedback.

Whatever view one might take on the differences between observations and models in the above data, the lagged relationship is more significant than the instantaneous relationship – a point shown in the figures of both Spencer and Braswell 2011 and Dessler 2011. This suggests that the original scatter plot in Dessler 2010 should be re-done using a lag of 4 months. I used the common HadCRUT3 data for the comparison – Dessler had observed that this accentuated the difference between models and observations, but it is nonetheless widely used and, if Dessler takes exception to SB's failure to illustrate re-analysis temperature versions, one might make the same observation about the HadCRUT3 omission in Dessler 2010. The results are shown below.

Doing the same regression with 4-month lagged relationships (which both Dessler and SB agree to be more significant than the instantaneous relationship), the sign of the slope is reversed. Whereas Dessler 2010 had reported a slope of 0.54 ± 0.72 (2σ) W/m2/K, the regression with lagged variables is -0.90 ± 0.95 W/m2/K and has better diagnostics. [Update Sep 8 – Nick Stokes observes that this reversal of sign may be a phase phenomenon. This is something that needs to be examined as I haven't handled this data before. However, please note that a sign reversal also results on alternative grounds merely from using CERES clear sky data instead of ERA clear sky data, the latter being used in Dessler 2010 without an explanation for the variation. See here.]



Figure 4. Restatement of Dessler 2010 Figure 2 with 4-month lag.
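The mechanics of the lagged regression behind Figure 4 reduce to shifting one series against the other before calling lm. The sketch below uses synthetic placeholder series (so it runs as-is); for the actual calculation, substitute the flux (eradr) and temperature (erats) columns loaded earlier, paired so that flux leads temperature by 4 months. The .syn names are placeholders of my own.

```r
# Mechanics of a 4-month-lag regression, on synthetic placeholder series;
# substitute the actual flux and temperature anomaly columns for the real result.
set.seed(4)
n <- 120
erats.syn <- rnorm(n, sd = 0.2)  # stand-in temperature anomalies (K)
eradr.syn <- rnorm(n, sd = 1)    # stand-in cloud flux anomalies (W/m2)
k <- 4                           # flux leading temperature by 4 months
fm.lag <- lm(head(eradr.syn, n - k) ~ tail(erats.syn, n - k))
coef(fm.lag)[2]                                    # lagged slope
2 * summary(fm.lag)$coefficients[2, "Std. Error"]  # 2-sigma half-width
```

Note that lagging costs k observations at the ends of the series, so the lagged fit has slightly fewer degrees of freedom than the instantaneous one.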

Given that even the lagged relationship is weak, I'm reluctant to say that analysis using the methods of Dessler 2010 establishes a negative feedback, but it does seem to me that those methods cannot be said to have established the claimed positive feedback.

Perhaps the editor of Science will send a written apology to Kevin Trenberth.



