by Judith Curry

Short summary: scientists sought political relevance and allowed policy makers to put a big thumb on the scale of the scientific assessment of the attribution of climate change.

Bernie Lewin has written an important new book:

SEARCHING FOR THE CATASTROPHE SIGNAL: The Origins of the Intergovernmental Panel on Climate Change

The importance of this book is reflected in its acknowledgements, in the context of assistance and contributions from early leaders of and participants in the IPCC:

This book would not have been possible without the documents obtained via Mike MacCracken and John Zillman. Their abiding interest in a true and accurate presentation of the facts prevented my research from being led astray. Many of those who participated in the events here described gave generously of their time in responding to my enquiries, they include Ben Santer, Tim Barnett, Tom Wigley, John Houghton, Fred Singer, John Mitchell, Pat Michaels . . . and many more.

You may recall a previous Climate Etc. post, Consensus by Exhaustion, on Lewin’s five-part series on Madrid 1995: The last day of climate science.

The whole book is well worth reading. My summary focuses on Chapters 8-16, in the context of the themes of ‘detection and attribution’, ‘policy cart in front of the scientific horse’, and ‘manufacturing consensus’. Annotated excerpts from the book are provided below.

The 1970s energy crisis

In a connection that I hadn’t previously made, Lewin provides historical context for the focus on CO2 research in the 1970s, motivated by the ‘oil crisis’ and concerns about energy security. There was an important debate surrounding whether coal or nuclear power should be the replacement for oil. From Chapter 8:

But in the struggle between nuclear and coal, the proponents of the nuclear alternative had one significant advantage, which emerged as a result of the repositioning of the vast network of government-funded R&D laboratories within the bureaucratic machine. It would be in these ‘National Laboratories’ at this time that the Carbon Dioxide Program was born. This surge of new funding meant that research into one specific human influence on climate would become a major branch of climatic research generally. Today we might pass this over for the simple reason that the ‘carbon dioxide question’ has long since come to dominate the entire field of climatic research—with the very meaning of the term ‘climate change’ contracted accordingly.

This focus was NOT driven by atmospheric scientists:

The peak of interest in climate among atmospheric scientists was an international climate conference held in Stockholm in 1974 and a publication by the ‘US Committee for GARP’ [GARP is Global Atmospheric Research Programme] the following year. The US GARP report was called ‘Understanding climate change: a program for action’, where the ‘climate change’ refers to natural climatic change, and the ‘action’ is an ambitious program of research.

[There was] a coordinated, well-funded program of research into potentially catastrophic effects before there was any particular concern within the meteorological community about these effects, and before there was any significant public or political anxiety to drive it. It began in the midst of a debate over the relative merits of coal and nuclear energy production [following the oil crisis of the 1970s]. It was coordinated by scientists and managers with interests on the nuclear side of this debate, where funding due to energy security anxieties was channelled towards investigation of a potential problem with coal in order to win back support for the nuclear option.

The emergence of ‘global warming’

In February 1979, at the first ever World Climate Conference, meteorologists would for the first time raise a chorus of warming concern. The World Climate Conference may have drowned out the cooling alarm, but it did not exactly set the warming scare on fire.

While the leadership of UNEP (UN Environmental Programme) became bullish on the issue of global warming, the bear prevailed at the WMO (World Meteorological Organization). When UNEP’s request for climate scenario modelling duly arrived with the WCRP (World Climate Research Programme) committee, they balked at the idea: computer modelling remained too primitive and, especially at the regional level, no meaningful results could be obtained. Proceeding with the development of climate scenarios would only risk the development of misleading impact assessments.

It wasn’t long before scientific research on climate change was becoming marginalized in the policy process, in the context of the precautionary principle:

At Villach in 1985, at the beginning of the climate treaty movement, the rhetoric of the policy movement was already breaking away from its moorings in the science. Doubts raised over the wildest speculation were turned around, in a rhetoric of precautionary action: we should act anyway, just in case. With the onus of proof reversed, the research can continue while the question remains (ever so slightly) open.

Origins of the IPCC

With regards to the origins of the IPCC:

Jill Jäger gave her view that one reason the USA came out in active support for an intergovernmental panel on climate change was that the US Department of State thought the situation was ‘getting out of hand’, with ‘loose cannons’ out ‘potentially setting the agenda’, when governments should be doing so. An intergovernmental panel, so this thinking goes, would bring the policy discussion back under the control of governments. It would also bring the science closer to the policymakers, unmediated by policy entrepreneurs. After an intergovernmental panel agreed on the science, so this thinking goes, they could proceed to a discussion of any policy implications.

While the politics were already making the science increasingly irrelevant, Bert Bolin and John Houghton brought a focus back to the science:

Within one year of the first IPCC session, its assessment process would transform from one that would produce a pamphlet sized country representatives’ report into one that would produce three large volumes written by independent scientists and experts at the end of the most complex and expensive process ever undertaken by a UN body on a single meteorological issue. The expansion of the assessment, and the shift of power back towards scientists, came about at the very same time that a tide of political enthusiasm was being successfully channelled towards investment in the UN process, with this intergovernmental panel at its core.

John Houghton (Chair of Working Group I) moved the IPCC towards a model more along the lines of an expert-driven review: he nominated one or two scientific experts—‘lead authors’—to draft individual chapters and he established a process through which these would be reviewed at lead-author meetings.

The main change was that it shifted responsibility away from government delegates and towards practising scientists. The decision to recruit assessors who were leaders in the science being assessed also opened up another problem, namely the tendency for them to cite their own current work, even where unpublished.

However, the problem of marginalization of the science wasn’t going away:

With the treaty process now run by career diplomats, and likely to be dominated by unfriendly southern political agitators, the scientists were looking at the very real prospect that their climate panel would be disbanded and replaced when the Framework Convention on Climate Change came into force.

And many scientists were skeptical:

With the realisation that there was an inexorable movement towards a treaty, there was an outpouring of scepticism from the scientific community. This chorus of concern was barely audible above the clamour of the rush to a treaty and it is now largely forgotten.

At the time, John Zillman presented a paper to a policy forum that tried to provide those engaged with the policy debate some insight into just how different was the view from inside the research community. Zillman stated that:

. . . that the greenhouse debate has now become decoupled from the scientific considerations that had triggered it; that there are many agendas but that they do not include, except peripherally, finding out whether and how climate might change as a result of enhanced greenhouse forcing and whether such changes will be good or bad for the world.

To give some measure of the frustration rife among climate researchers at the time, Zillman quoted the director of WCRP. It was Pierre Morel, he explained, who had ‘driven the international climate research effort over the past decade’. A few months before Zillman’s presentation, Morel had submitted a report to the WCRP committee in which he assessed the situation thus:

The increasing direct involvement of the United Nations. . . in the issues of global climate change, environment and development bears witness to the success of those scientists who have vied for ‘political visibility’ and ‘public recognition’ of the problems associated with the earth’s climate. The consideration of climate change has now reached the level where it is the concern of professional foreign-affairs negotiators and has therefore escaped the bounds of scientific knowledge (and uncertainty).

The negotiators, said Morel, had little use for further input from scientific agencies including the IPCC ‘and even less use for the complicated statements put forth by the scientific community’.

There was a growing gap between the politics/policies and the science:

The general feeling in the research community that the policy process had surged ahead of the science often had a different effect on those scientists engaged with the global warming issue through its expanded funding. For them, the situation was more as President Bush had intimated when promising more funding: the fact that ‘politics and opinion have outpaced the science’ brought the scientists under pressure ‘to bridge the gap’.

In fact, there was much scepticism of the modelling freely expressed in and around the Carbon Dioxide Program in these days before the climate treaty process began. Those who persisted with the search for validation got stuck on the problem of better identifying background natural variability.

The challenge of ‘detection and attribution’

Regarding Jim Hansen’s 1988 Congressional testimony:

An article in Science the following spring gives some insight into the furore. In ‘Hansen vs. the world on greenhouse threat’, the science journalist Richard Kerr explained that while ‘scientists like the attention the greenhouse effect is getting on Capitol Hill’, nonetheless they ‘shun the reputedly unscientific way their colleague James Hansen went about getting that attention’.

Clearly, the scientific opposition to any detection claims was strong in 1989 when the IPCC assessment got underway.

Detection and attribution of the anthropogenic climate signal was the key issue:

During the IPCC review process (for the First Assessment Report), Wigley was asked to answer the question: When is detection likely to be achieved? He responded with an addition to the IPCC chapter that explains that we would have to wait until the half-degree of warming that had occurred already during the 20th century is repeated. Only then are we likely to determine just how much of it is human-induced. If the carbon dioxide driven warming is at the high end of the predictions, then this would be early in the 21st century, but if the warming was slow then we may not know until 2050.

The IPCC First Assessment Report didn’t help the policy makers’ ‘cause.’ In the buildup to the Rio Earth Summit:

To support the discussions of the Framework Convention at the Rio Earth Summit, it was agreed that the IPCC would provide a supplementary assessment. This ‘Rio supplement’ explains:

. . . the climate system can respond to many forcings and it remains to be proven that the greenhouse signal is sufficiently distinguishable from other signals to be detected except as a gross increase in tropospheric temperature that is so large that other explanations are not likely.

Well, this supplementary assessment didn’t help either. The scientists, under the leadership of Bolin and Houghton, are to be commended for not bowing to pressure. But the IPCC was risking marginalization in the treaty process.

In the lead up to CoP1 in Berlin, the IPCC itself was badgering the negotiating committee to keep it involved in the political process, but tensions arose when it refused to compromise its own processes to meet the political need.

However, the momentum for action in the lead up to Rio remained sufficiently strong that these difficulties with the scientific justification could be ignored.

Second Assessment Report

In the context of the treaty activities, the IPCC’s Second Assessment Report was regarded as very important for justifying implementation of the Kyoto Protocol.

In 1995, the IPCC was stuck between its science and its politics. The only way it could save itself from the real danger of political oblivion would be if its scientific diagnosis could shift in a positive direction and bring it into alignment with policy action.

The key scientific issue at the time was detection and attribution:

The writing of Chapter 8 (the chapter concerned with detection and attribution) got off to a delayed start due to the late assignment of its coordinating lead author. It was not until April that someone agreed to take on the role. This was Ben Santer, a young climate modeller at Lawrence Livermore Laboratory.

The chapter that Santer began to draft was greatly influenced by a paper principally written by Tim Barnett, but it also listed Santer as an author. It was this paper that held, in a nutshell, all the troubles for the ‘detection’ quest. It was a new attempt to get beyond the old stumbling block of ‘first detection’ research: to properly establish the ‘yardstick’ of natural climate variability. The paper describes how this project failed to do so, and fabulously so.

The detection chapter that Santer drafted for the IPCC makes many references to this study. More than anything else cited in Chapter 8, it is the spoiler of all attribution claims, whether from pattern studies, or from the analysis of the global mean. It is the principal basis for the Chapter 8 conclusion that. . .

. . .no study to date has both detected a significant climate change and positively attributed all or part of that change to anthropogenic causes.

For the second assessment, the final meeting of the 70-odd Working Group 1 lead authors . . . was set to finalise the draft Summary for Policymakers, ready for intergovernmental review. The draft Houghton had prepared for the meeting was not so sceptical on the detection science as the main text of the detection chapter drafted by Santer; indeed it contained a weak detection claim.

This detection claim appeared incongruous with the scepticism throughout the main text of the chapter and was in direct contradiction with its Concluding Summary. It represented a change of view that Santer had only arrived at recently due to a breakthrough in his own ‘fingerprinting’ investigations. These findings were so new that they were not yet published or otherwise available, and, indeed, Santer’s first opportunity to present them for broader scientific scrutiny was when Houghton asked him to give a special presentation to the meeting of lead authors.

However, the results were also challenged at this meeting: Santer’s fingerprint finding and the new detection claim were vigorously opposed by several experts in the field.

On the first day of the Madrid session of Working Group 1 in November 1995, Santer again gave an extended presentation of his new findings, this time to mostly non-expert delegates. When he finished, he explained that because of what he had found, the chapter was out of date and needed changing. After some debate John Houghton called for an ad-hoc side group to come to agreement on the detection issue in the light of these important new findings and to redraft the detection passage of the Summary for Policymakers so that it could be brought back to the full meeting for agreement. While this course of action met with general approval, it was vigorously opposed by a few delegations, especially when it became clear that Chapter 8 would require changing, and resistance to the changes went on to dominate the three-day meeting. After further debate, a final version of a ‘bottom line’ detection claim was decided:

The balance of evidence suggests a discernible human influence on global climate.

All of this triggered accusations of ‘deception’:

An opinion editorial by Frederick Seitz, ‘Major deception on “global warming”’, appeared in the Wall Street Journal on 12 June 1996:

This IPCC report, like all others, is held in such high regard largely because it has been peer-reviewed. That is, it has been read, discussed, modified and approved by an international body of experts. These scientists have laid their reputations on the line. But this report is not what it appears to be—it is not the version that was approved by the contributing scientists listed on the title page. In my more than 60 years as a member of the American scientific community, including service as president of both the NAS and the American Physical Society, I have never witnessed a more disturbing corruption of the peer-review process than the events that led to this IPCC report.

When Seitz compared the final draft of Chapter 8 with the version just published, he found that key statements sceptical of any human attribution finding had been changed or deleted. His examples of the deleted passages include:

‘None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases.’

‘No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [manmade] causes.’

‘Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.’

On 4 July, Nature finally published Santer’s human fingerprint paper. In Science, Richard Kerr quoted Barnett saying that he is not entirely convinced that the greenhouse signal had been detected and that there remain ‘a number of nagging questions’. Later in the year a critique striking at the heart of Santer’s detection claim would be published in reply.

The IPCC’s manufactured consensus

What we can see from all this activity by scientists in the close vicinity of the second and third IPCC assessments is the existence of a significant body of opinion that is difficult to square with the IPCC’s message that the detection of the catastrophe signal provides the scientific basis for policy action.

The scientific debate on detection and attribution was effectively quelled by the IPCC Second Assessment Report:

Criticism would continue to be summarily dismissed as the politicisation of science by vested interests, while the panel’s powerful political supporters would ensure that its role as the scientific authority in the on-going climate treaty talks was never again seriously threatened.

And of course the ‘death knell’ to scientific arguments concerned about detection was dealt by the Third Assessment Report, in which the MBH Hockey Stick analysis of Northern Hemisphere paleoclimates effectively eliminated the existence of a hemispheric medieval warm period and Little Ice Age, ‘solving’ the detection conundrum.

JC reflections

Bernie Lewin’s book provides a really important and well documented history of the context and early history of the IPCC.

I was discussing Lewin’s book with Garth Paltridge, who was involved in the IPCC during the early years; he emailed this comment:

I am a bit upset because I was in the game all through the seventies to early nineties, was at a fair number of the meetings Lewin talked about, spent a year in Geneva as one of the “staff” of the early WCRP, another year (1990) as one of the staff of the US National Program Office in the Washington DC, met most of the characters he (Lewin) talked about…… and I simply don’t remember understanding what was going on as far as the politics was concerned. How naive can one be?? Partly I suspect it was because lots of people in my era were trained(??) to deliberately ignore, and/or laugh at, all the garbage that was tied to the political shenanigans of international politics in the scientific world. Obviously the arrogance of scientists can be quite extraordinary!

Scientific scepticism about AGW was alive and well prior to 1995; it took a nose-dive following publication of the Second Assessment Report, and was then dealt what was hoped to be a fatal blow by the Third Assessment Report and the promotion of the Hockey Stick.

A rather flimsy edifice for a convincing, highly-confident attribution of recent warming to humans.

I think Bernie Lewin is correct in identifying the 1995 meeting in Madrid as the turning point. It was John Houghton who inserted the attribution claim into the draft Summary for Policy Makers, contrary to the findings in Chapter 8. Ben Santer typically gets ‘blamed’ for this, but it is clearly Houghton who wanted this and enabled this, so that he and the IPCC could maintain a seat at the big policy table involved in the Treaty.

One might forgive the IPCC leaders for dealing with new science and a very challenging political situation in 1995, during which they overplayed their hand. However, it is the Third Assessment Report where Houghton’s shenanigans with the Hockey Stick really reveal what was going on (including the selection of recent Ph.D. recipient Michael Mann as lead author when he had not been nominated by the U.S. delegation). The Hockey Stick got rid of that ‘pesky’ detection problem.

I assume that the rebuttal of the AGW ‘true believers’ to all this is that politics are messy, but the climate scientists were right all along, and the temperatures keep increasing. Recent research increases confidence in an attribution that we have ‘known’ for decades.

Well, increasing temperatures say nothing about the causes of climate change. Scientists are still debating the tropical upper troposphere ‘hot spot’, which was the ‘smoking gun’ identified by Santer in 1995 [link]. And there is growing evidence that natural variability on decadal to millennial time scales is much larger than previously thought (and larger than climate model simulations suggest) [link].

I really need to do more blog posts on detection and attribution; I will do my best to carve out some time.

And finally, this whole history seems to violate the Mertonian norm of universalism:

universalism: scientific validity is independent of the sociopolitical status/personal attributes of its participants

Imagine how all this would have played out if Pierre Morel or John Zillman had been Chair of WG1, or if Tom Wigley or Tim Barnett or John Christy had been Coordinating Lead Author of Chapter 8. And what climate science would look like today.

I hope this history of manufacturing consensus gives rational people reason to pause before accepting arguments from consensus about climate change.