Another error in the influential assessment reports from the Intergovernmental Panel on Climate Change (IPCC) has been identified. This one concerns the rate of expansion of sea ice around Antarctica.

While not an issue for estimates of future sea level rise (sea ice is floating ice, which does not influence sea level), a significant expansion of Antarctic sea ice runs counter to climate model projections. As the errors in the climate change “assessment” reports from the IPCC mount, its aura of scientific authority erodes, and with it, the justification for using its findings to underpin national and international efforts to regulate greenhouse gases.

Some climate scientists have distanced themselves from the IPCC Working Group II’s (WGII’s) Fourth Assessment Report (AR4), Impacts, Adaptation, and Vulnerability, preferring instead the stronger hard science in the Working Group I (WGI) Report—The Physical Science Basis. Some folks have even gone so far as to say that no errors have been found in the WGI Report and that the process of creating it was exemplary.

Such folks are in denial.

As I document below, WGI did a poor job in regard to Antarctic sea ice trends. Somehow, the IPCC specialists assessed away a plethora of evidence showing that the sea ice around Antarctica has been significantly increasing—a behavior that runs counter to climate model projections of sea ice declines—and instead documented only a slight, statistically insignificant rise.

How did this happen? The evidence suggests that IPCC authors were either being territorial in defending and promoting their own work in lieu of other equally legitimate (and ultimately more correct) findings, were being guided by IPCC brass to produce a specific IPCC point-of-view, or both.

The handling of Antarctic sea ice is, unfortunately, not an isolated incident in the IPCC reports, but is simply one of many examples in which portions of the peer-reviewed scientific literature were cast aside, or ignored, so that a particular point of view—the preconceived IPCC point of view—could be either maintained or forwarded.

Background

The problems with the IPCC’s handling of the trends in Antarctic sea ice were first uncovered and presented a week or two ago in an article posted over at the World Climate Report—another blog with which I have been involved for a long time.

In this MasterResource article, I have dug a bit deeper into what lies behind the IPCC’s “assessment” of the trends in Antarctic sea ice that is presented in its WGI Fourth Assessment Report. What I’ve uncovered clearly illustrates the difference between a “review” of the literature and an “assessment” of the literature. The former would include as much of the literature on the topic under consideration as possible, while the latter carefully selects from the literature to make a particular case. As such, the results of a “review” would be pretty constant across different assemblages of folks doing the reviewing, while the results of an “assessment” strongly depend on just who is doing the assessing. Case in point: compare the IPCC’s Fourth Assessment Report with the equally glossy and thick Assessment Report from the Nongovernmental International Panel on Climate Change (NIPCC). Both start with the same body of literature, and yet they arrive at completely different conclusions.

Getting into the Detail

This is clearly evident in the section on recent sea ice trends in Antarctica. The IPCC dedicates part of one paragraph to the topic (IPCC AR4 Chapter 4, Section 4.4.2.2, p. 350–351), incorporating one reference (“Comiso (2003)”), which turns out to be a book chapter (i.e., not part of the peer-reviewed literature). The NIPCC, on the other hand, dedicates two full pages to the subject and incorporates 14 citations from the peer-reviewed literature (NIPCC, Chapter 4, Section 4.2.1, p. 152–154).

The IPCC concludes, after its brief analysis, that while there has been an apparent increase in the sea ice extent around Antarctica from 1979 through 2005, the increase has been slight and not statistically significant.

The NIPCC, on the other hand, finds that the trend in Antarctic sea ice has been about 2 to 3 times as great as the IPCC reported and, in fact, is quite statistically significant.

True, the NIPCC Report was published after the IPCC Assessment, so it includes a few citations that were published in the literature subsequent to the IPCC inclusion deadline, but still, there were plenty of publications that were extant at the time of the IPCC preparation that should have better guided the IPCC finding.

For some reason, the IPCC opted to ignore the vast majority of those papers (and associated datasets). Consequently, as we shall see, the NIPCC’s assessment turns out to be superior to the IPCC’s.

The fact that the IPCC’s assessment was extremely limited and narrow did not go unnoticed in the IPCC review process (the set of expert and government reviews for various drafts of the IPCC AR4 can be found here).

One commenter (Ola Johannessen, who himself has published on sea ice trends) complained about the First Order Draft of the AR4 Chapter 4 that:

Section 4.4.2.2 [the section on sea ice trends]: The presentation of hemispheric, regional and seasonal trends is also incomplete, misleading and biased to NASA work (Comiso).

To which the IPCC replied:

Taken into account in the revised text.

A look at the final, published version of Section 4.4.2.2 shows that, in fact, there is only one reference to data on sea ice trends, that to Comiso (2003). The only other reference in the section (Belchansky et al., 2005) is there to explain reasons for interannual variability, and the Belchansky et al. reference was already present in the First Order Draft. So it hardly seems like Johannessen’s comments were “taken into account”; instead, it seems like they were ignored.

Johannessen further complains about IPCC’s use of only one sea ice dataset, commenting “There should be a sentence added before ‘An updated version of the analysis done by Comiso…’ (which, by the way, appears to an update that is not a published or accepted paper)” and then going on to suggest many additional references that should be added to this section. The IPCC responds:

“Taken into account with the inclusion of Johannessen et al. (2004) work. But AR4 is meant to be the most recent assessment, not a history of prior assessments. Updates of data sets using previously published methodology is acceptable for IPCC.”

Interesting. First, in the published version of Chapter 4, the reference to Johannessen et al. (2004) does not appear in the section of the Chapter 4 under discussion—so apparently the IPCC was just bluffing about including it. Second, the IPCC affirms that being an “assessment” doesn’t mean having to include all relevant literature (i.e., it includes only literature that it deems to be relevant). And third, that using updates of previously published datasets is acceptable in the IPCC process.

In this case, points 2 and 3 seem opposed to each other. For the IPCC has deemed one particular dataset, that of Comiso (2003), to be most relevant, despite the fact that several other “recent” datasets existed.

Let’s look into the IPCC’s reliance on Comiso (2003) a bit further (Comiso, by the way, was a contributing author to IPCC AR4 Chapter 4).

The IPCC reference for its sea ice data is to Comiso (2003), which is the following book chapter (available here):

Comiso, J.C., 2003: Large scale characteristics and variability of the global sea ice cover. In: Sea Ice – An Introduction to its Physics, Biology, Chemistry, and Geology [Thomas, D. and G.S. Dieckmann (eds.)]. Blackwell Science, Oxford, UK, pp. 112–142.

The analyses in this book chapter use a sea ice algorithm developed and improved by Dr. Josefino Comiso during the 1980s and 1990s (Comiso’s technique was known as the “Bootstrap” algorithm). At the same time, there was another algorithm to derive sea ice from satellite observations that had been developed and improved by Dr. Donald Cavalieri and colleagues during the same span (Cavalieri et al.’s technique was known as the “NASA Team” algorithm). Both algorithms produced pretty similar results when deriving sea ice extent in the Arctic, but in the Antarctic regions, the results—especially the trend results—differed rather significantly. This difference was well-recognized in the peer-reviewed scientific literature (e.g., Zwally et al., 2002; Comiso and Steffen, 2001). Comiso’s Bootstrap method produced a much smaller and insignificant increase in Antarctic sea ice, while the NASA Team algorithm produced a larger, statistically significant increase. Another analysis, by Watkins and Simmonds (2000), produced a trend that agreed better with the NASA Team results than with Comiso’s Bootstrap results.

All these facts were acknowledged by the researchers involved, as evidenced by discussions in Zwally et al. (2002) and Comiso and Steffen (2001), with each group leaning more heavily on its own methodology.

Further, an update to the NASA Team algorithm (known as NASA Team version 2) was published by Markus and Cavalieri in 2000. This update had the impact of producing an even greater trend in the extent of Antarctic sea ice (over the original NASA Team algorithm) and enlarging the discrepancy with Comiso’s Bootstrap algorithm.

All of this was extant in the peer-reviewed literature at the time of the IPCC AR4 production and yet the IPCC “assessed” things this way:

Most analyses of variability and trend in ice extent using the satellite record have focused on the period after 1978 when the satellite sensors have been relatively constant. Different estimates, obtained using different retrieval algorithms, produce very similar results for hemispheric extent, and all show an asymmetry between changes in the Arctic and the Antarctic. As an example, an updated analysis done by Comiso (2003) spanning the period November 1987 through December 2005, is shown in Figure 4.8. [emphasis added]

The statement in bold above was well-established at the time to be wrong, at least as it applied to the Southern Hemisphere. So obviously, even at this point, it is clear that the IPCC had conducted an inaccurate “assessment” of the literature.

Not surprisingly, of the existing “retrieval algorithms,” the one which showed the smallest (and statistically insignificant) trend in the Southern Hemisphere was the one used in “Comiso (2003)” which was selected as the example used by the IPCC.

How convenient.

It is even somewhat debatable whether the “updated analysis done by Comiso (2003)” showed an insignificant trend in the first place.

The First Order Draft of Chapter 4 contained the following illustration of Southern Hemisphere sea ice, along with the caption “Sea Ice extent anomalies … the Southern Hemisphere based on passive microwave satellite data… [l]inear trend lines are indicated for each hemisphere….the small positive trend in the Southern Hemisphere is not significant. (Updated from Comiso, 2003).”



Figure 1. Figure 4.4.1b from the IPCC AR4 Chapter 4 First Order Draft.

Notice two things: 1) the figure depicts monthly ice extent anomalies from November 1978 through October 2004, and 2) the trend through them appears to be statistically significant (i.e., the confidence range does not include zero), given in the illustration as 9089.2 +/- 2970.7 km2/year, or 0.735 +/- 0.240%/dec.
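The significance claim can be checked directly from the numbers printed on the figure: a trend is significant at the stated confidence level when its interval excludes zero. A minimal Python sketch, using the First Order Draft values:

```python
# Check whether a reported trend is statistically significant given its
# confidence half-width: it is significant when the interval excludes zero.
# Values are those printed on the AR4 First Order Draft figure (km^2/year).
trend = 9089.2
half_width = 2970.7

lower, upper = trend - half_width, trend + half_width
significant = lower > 0 or upper < 0

print(f"interval: [{lower:.1f}, {upper:.1f}]; significant: {significant}")
# -> interval: [6118.5, 12059.9]; significant: True
```

The interval bottoms out well above zero, which is exactly the point Church and Rahmstorf raise in their review comments quoted below.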

Yet, for some reason, the accompanying text claims that the trend in Figure 4.4.1b is insignificant (AR4 First Order Draft, page 4-14, lines 9-10):

The Antarctic results show a slight but insignificant positive trend of 0.7 ± 0.2% per decade.

This inconsistency was brought to the IPCC Chapter 4 authors’ attention by several IPCC commenters. Commenter John Church wrote, “I do not understand why this trend is insignificant – it is more than three times the quoted error estimates,” and Stefan Rahmstorf wrote, “How can a trend of 0.7 +/- 0.2 be ‘insignificant’? Is not 0.2 the confidence interval, so it is significantly positive?” The IPCC responded to both in the same manner: “Taken into account in revised text.”

And boy did they ever!

The Second Order Draft of Chapter 4 included the following figure (which ultimately was the one included in the final publication):



Figure 2. Figure 4.4.1b from the IPCC AR4 Chapter 4 Second Order Draft (this graphic was Figure 4.8 in the IPCC AR4 published version of Chapter 4).

The caption still read “the small positive trend in the Southern Hemisphere is not significant,” but now the trend had become “5.6 +/- 11 x 10^3 km2 per year.”

Note two things: 1) the monthly sea ice anomalies were replaced by annual anomalies, and 2) the trend shrank by 38% and now actually was statistically insignificant.

So how did this come about?

First off, the IPCC used a well-known statistical trick to lower the apparent significance of the increase: it switched from monthly values to annual values. This switch generally has little impact on the trend value itself, but it can have a sizeable impact on the statistical confidence assigned to the trend. A trend supported by a larger number of individual data points (in this case, monthly values) carries more statistical confidence than the same trend supported by fewer data points (in this case, annual values). So, by using annual data instead of monthly data, the IPCC effectively lowered the perceived confidence of the Southern Hemisphere sea ice trend.
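The effect is easy to reproduce with synthetic data. The sketch below uses made-up numbers (not the actual sea ice series): it fits an ordinary least-squares trend to an autocorrelated monthly series and to its annual means. The slope barely moves, but the naive confidence interval widens when fewer points support it; the effect is strongest when the monthly noise is autocorrelated, as geophysical series usually are.

```python
# Toy demonstration (synthetic data only) of how a naive trend test reports
# more confidence from monthly values than from the annual means of the
# very same series.
import numpy as np

rng = np.random.default_rng(1)
n_years = 27
n = n_years * 12
t = np.arange(n) / 12.0  # time in years

# AR(1) "red" noise on top of a fixed linear trend of 0.5 units per year
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.9 * noise[i - 1] + rng.normal()
y = 0.5 * t + noise

def slope_and_se(tt, yy):
    """OLS slope and its naive standard error."""
    tm, ym = tt.mean(), yy.mean()
    sxx = np.sum((tt - tm) ** 2)
    slope = np.sum((tt - tm) * (yy - ym)) / sxx
    resid = yy - ym - slope * (tt - tm)
    se = np.sqrt(np.sum(resid ** 2) / (len(tt) - 2) / sxx)
    return slope, se

# Annual means of the same series
ta = t.reshape(n_years, 12).mean(axis=1)
ya = y.reshape(n_years, 12).mean(axis=1)

m_slope, m_se = slope_and_se(t, y)
a_slope, a_se = slope_and_se(ta, ya)
print(f"monthly: {m_slope:.2f} +/- {2 * m_se:.2f}")
print(f"annual:  {a_slope:.2f} +/- {2 * a_se:.2f}")
# The two slopes are close, but the annual fit reports a wider interval.
```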

Secondly, just how did the trend drop by 38% when adding another 13 months’ worth of observations? It is not because of the influence of those extra months, as they fell very near the established trend line (in other words, they had little direct influence on the trend). And switching from monthly to annual data wouldn’t do it either. So it had to be something else.

One possible explanation is that the figure in the First Order Draft was actually (mistakenly) depicting Comiso’s determination of the area of sea ice, rather than the extent of sea ice. There is a difference in definition between these terms: sea ice extent is taken to mean the area covered by sea ice with a concentration of at least 15% (i.e., this includes regions that are up to 85% open water), while sea ice area is taken to be the actual area of the sea ice itself (so the sea ice area is always less than the sea ice extent). Under general circumstances, the two measurements are highly correlated, and their trends are very similar. However, in the case of Comiso’s Bootstrap algorithm, the trends in Southern Hemisphere sea ice extent and sea ice area were largely different.

This fact raised some flags of concern at the time. Zwally et al. (2002) noted that the Bootstrap sea ice extent trends were the odd man out of all the datasets: the area trends were similar across retrieval algorithms (all were significantly positive, including Comiso’s Bootstrap), and the extent trends were similar to the area trends in all algorithms except Comiso’s Bootstrap. Zwally et al. (2002) took this to mean that something was likely wrong with the Bootstrap determinations of Southern Hemispheric sea ice extent (probably involving how data from two satellites were stitched together). Comiso and Steffen (2001) also noted the difference between the sea ice area and extent trends produced by the Bootstrap algorithm; they attributed most of the difference to changes in how tightly the sea ice was packed together (a mechanism dismissed by Zwally et al.), but admitted that inter-satellite issues may also play a part.
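The extent/area distinction reduces to a pair of simple sums over a grid of ice-concentration values. A hypothetical sketch (the grid values and cell size below are made up for illustration; the 15% cutoff is the conventional one):

```python
# Sea ice *extent* counts the full area of every grid cell whose ice
# concentration is at least 15%; sea ice *area* sums only the ice-covered
# fraction of those same cells, so area <= extent.
import numpy as np

cell_km2 = 625.0  # e.g. a 25 km x 25 km grid cell (illustrative)
conc = np.array([0.00, 0.10, 0.20, 0.60, 0.95, 1.00])  # ice fraction per cell

covered = conc >= 0.15
extent = covered.sum() * cell_km2          # 4 qualifying cells -> 2500.0 km^2
area = (conc * covered).sum() * cell_km2   # 2.75 * 625 -> 1718.75 km^2

print(extent, area)
```

Because extent credits a whole cell for partial cover, the two measures can drift apart if an algorithm misjudges concentrations near the 15% threshold—which is why a large extent/area trend discrepancy was a warning sign.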

The bottom line is that the IPCC Chapter 4 authors not only had to carefully choose which sea ice retrieval algorithm to use, but also had to be careful to use sea ice extent rather than sea ice area (something they quite possibly forgot to do in the First Order Draft of Chapter 4).

The IPCC justifies these decisions as being the result of their “assessment” of the topic and the literature—decisions that just so happen to minimize the apparent increase in Southern Hemispheric sea ice concentration. The result of the inclusion of any other then-extant dataset on Southern Hemispheric sea ice would have been to counter the IPCC’s “assessment” that the sea ice increase there was statistically insignificant.

Oh yeah, an “assessment” of a significant rise in Southern Hemispheric sea ice would have been quite inconvenient to another IPCC “assessment” that “[s]ea ice is projected to shrink in both the Arctic and the Antarctic under all SRES scenarios.” So, no doubt the IPCC Chapter 4 Coordinating Lead Authors got a big slap on the back from the IPCC brass for avoiding that potentially embarrassing problem.

Parenthetically, I bet it would be fun to see the emails associated with the production of AR4 Chapter 4!

One last thing.

Less than a year after the IPCC AR4 was published, Comiso reported that indeed there was a problem with the Bootstrap algorithm as it concerned Southern Hemispheric sea ice extent (Comiso and Nishio, 2008). Correcting that problem increased the observed trend in Antarctic sea ice extent from November 1978 to December 2005 to 14,645 km2/year—a highly statistically significant value that is 2.6 times higher than reported by the IPCC and virtually identical to the trend from the updated NASA Team algorithm, described by Markus and Cavalieri in 2000 but completely ignored by the IPCC. Basically, everyone but Comiso (and the IPCC) was right all along.
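The 2.6 multiplier quoted above checks out against the two trend values in play:

```python
# Arithmetic check of the ratio quoted in the text: the corrected Comiso
# and Nishio (2008) trend versus the 5.6 x 10^3 km^2/year trend in AR4.
corrected = 14645.0  # km^2/year (Comiso and Nishio, 2008)
ipcc = 5600.0        # km^2/year (AR4 Figure 4.8)
print(round(corrected / ipcc, 1))  # -> 2.6
```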



Figure 3. Annual Antarctic sea ice anomalies from three datasets: the one used by the IPCC (Comiso, 2003; red); another extant at the time of the IPCC production (Markus and Cavalieri, 2000; blue); and the update to the IPCC analysis (Comiso and Nishio, 2008; cyan). The trends in the latter two datasets are more than 2.5 times larger than the IPCC trend, and both are statistically significant (the IPCC trend is not).

Conclusion

On the topic of Antarctic sea ice trends, the “consensus of scientists”—as the IPCC likes to call itself—was wrong, led astray by the extremely poor “assessment” of the scientific knowledge-base made by a very few people who were directly involved in preparing that section—people who were either being territorial in defending and promoting their own work, were being guided by higher-ups to produce a specific IPCC point-of-view, or both.

From all I have been able to find out about this so far (including enlightenment gained from the Climategate emails into how other sections of the AR4 were carefully constructed), I would rate it “extremely unlikely” (in IPCC parlance, less than 5% chance) that what transpired was dumb luck, born of the IPCC authors’ unfamiliarity with the peer-reviewed literature—the very thing they were supposed to be assessing.

I am not sure which case is the most embarrassing.

References:

Comiso, J.C., and K. Steffen, 2001. Studies of Antarctic sea ice concentrations from satellite data and their applications. Journal of Geophysical Research, 106, C12, 31361–31385.

Comiso, J. C., and F. Nishio, 2008. Trends in the sea ice cover using enhanced and compatible AMSR-E, SSM/I, and SMMR data. Journal of Geophysical Research, 113, C02S07, doi:10.1029/2007JC004257.

Cavalieri, D. J., P. Gloersen, C. L. Parkinson, J. C. Comiso, and H. J. Zwally, 1997. Observed hemispheric asymmetry in global sea ice changes. Science, 278, 1104–1106.

Cavalieri, D. J., C. L. Parkinson, P. Gloersen, J. C. Comiso, and H. J. Zwally, 1999. Deriving long-term time series of sea ice cover from satellite passive microwave multisensor data sets. Journal of Geophysical Research, 104, 15803–15814.

Markus, T., and D. Cavalieri, 2000. An enhancement of the NASA Team sea ice algorithm. IEEE Transactions on Geoscience and Remote Sensing, 38, 1387-1398.

Watkins, A. B., and I. Simmonds, 2000. Current trends in Antarctic sea ice: The 1990s impact on a short climatology. Journal of Climate, 13, 4441–4451.

Zwally, H.J., J. C. Comiso, C. L. Parkinson, D. J. Cavalieri, 2002. Variability of Antarctic sea ice 1979-1998. Journal of Geophysical Research, 107, C5, 3041, doi:10.1029/2000JC000733.