Summary: Dr. Judith Curry, eminent climate scientist, makes some devastating observations about the new National Climate Assessment. She finds low professional standards in climate science. That would be remarkable even in a purely academic science; in a policy-relevant field like climate science, it might have disastrous effects.

“It makes me feel frustrated because, again, you can say I don’t believe in gravity. But if you step off the cliff, you are going down. So we can say I don’t believe climate is changing, but it is based on science. It’s over 150 years old.” – Katherine Hayhoe speaking on CNN’s “New Day”, 27 November. She has a PhD in atmospheric science, is a professor of political science at Texas Tech U, and was a co-author of the 4th National Climate Assessment Report.

For thirty years, activists have framed the debate as if it were binary: believe in climate change – or not. This is quite mad. The appropriate public policy debate depends on forecasts. The IPCC’s reports clearly describe the massive uncertainties in these forecasts. Equating these forecasts to the law of gravity (F = Gm₁m₂/r²) is a lie – propaganda for the ignorant. Time has shown that a large majority of the American public is too smart to fall for this. But they keep trying.

“Insanity is repeating the same mistakes and expecting different results.”

– The basic text of Narcotics Anonymous. They know all about dysfunctionality.

A more sophisticated attempt to scare us

The National Climate Assessment report has received full-bore media coverage, broadcasting its confident and terrifying predictions. Judith Curry is reviewing it and has found some oddities. Here are excerpts from the first of her posts about this important project.

Excerpt from “National Climate Assessment: A crisis of epistemic overconfidence.”

By Judith Curry at Climate Etc.

…Should we have the same confidence in the findings of the recently published America’s 4th National Climate Assessment (NCA4) as we do in gravity? How convincing is the NCA4? It is published in two volumes.

Vol I: Climate Science Special Report.

Vol II: Impacts, Risks, and Adaptation in the United States.

The NCA4 guides the decision making of the U.S. federal government, local and state governments, and businesses. So it is important to point out the problems in the NCA4 Reports and the assessment process. …This first post addresses the issue of overconfidence in the NCA4. I have previously argued that overconfidence is a problem with the IPCC’s reports (see examples from my post “Overconfidence?”) … {but} the overconfidence problem with the NCA4 is much worse.

Example: overconfidence in NCA4.

To illustrate the overconfidence problem with the NCA4 Report, consider the following Key Conclusion from Chapter 1: “Our Globally Changing Climate.”

“Longer-term climate records over past centuries and millennia indicate that average temperatures in recent decades over much of the world have been much higher, and have risen faster during this time period, than at any time in the past 1,700 years or more, the time period for which the global distribution of surface temperatures can be reconstructed. (High confidence)”

This statement really struck me, since it is at odds with the conclusion from the IPCC AR5 WG1 Chapter 5: “Information from Paleoclimate Archives.”

“For average annual NH temperatures, the period 1983–2012 was very likely the warmest 30-year period of the last 800 years (high confidence) and likely the warmest 30-year period of the last 1400 years (medium confidence).”

While my knowledge of paleoclimate is relatively limited, I don’t find the AR5 conclusion to be unreasonable, though it seems rather overconfident regarding the last 1400 years. The NCA4 conclusion is stronger than the AR5 conclusion, and made with greater confidence. It made me wonder whether there was new research that I was unaware of, and whether the authors included young scientists with a new perspective. Fortunately, the NCA includes a section at the end of each chapter that provides a traceability analysis for each of the key conclusions.

“Traceable Accounts for each Key Finding: 1) document the process and rationale the authors used in reaching the conclusions in their Key Finding, 2) provide additional information to readers about the quality of the information used, 3) allow traceability to resources and data, and 4) describe the level of likelihood and confidence in the Key Finding. Thus, the Traceable Accounts represent a synthesis of the chapter author team’s judgment of the validity of findings, as determined through evaluation of evidence and agreement in the scientific literature.”

Here is text from the traceability account for the paleoclimate conclusion.

“Description of evidence base. The Key Finding and supporting text summarizes extensive evidence documented in the climate science literature and are similar to statements made in previous national (NCA3) and international assessments. There are many recent studies of the paleoclimate leading to this conclusion including those cited in the report (e.g., Mann et al. 2008; PAGES 2k Consortium 2013). {See an open version of PAGES 2008.}

“Major uncertainties: Despite the extensive increase in knowledge in the last few decades, there are still many uncertainties in understanding the hemispheric and global changes in climate over Earth’s history, including that of the last few millennia. Additional research efforts in this direction can help reduce those uncertainties.”

“Assessment of confidence based on evidence and agreement, including short description of nature of evidence and level of agreement: There is high confidence for current temperatures to be higher than they have been in at least 1,700 years and perhaps much longer.”

I read all this with acute cognitive dissonance. Apart from Steve McIntyre’s takedown of Mann et al. 2008 and the PAGES 2K Consortium (for the latest, see PAGES2K: North American Tree Ring Proxies), how can you ‘square’ high confidence with “there are still many uncertainties in understanding the hemispheric and global changes in climate over Earth’s history, including that of the last few millennia”?

Further, Chapter 5 of the AR5 includes pages on uncertainties in temperature reconstructions for the past 2000 years (section 5.3.5.2, pp 411-412). Here are a few choice quotes.

“Reconstructing NH, SH or global-mean temperature variations over the last 2000 years remains a challenge due to limitations of spatial sampling, uncertainties in individual proxy records and challenges associated with the statistical methods used to calibrate and integrate multi-proxy information …A key finding is that the methods used for many published reconstructions can underestimate the amplitude of the low-frequency variability …data are still sparse in the tropics, SH and over the oceans …Limitations in proxy data and reconstruction methods suggest that published uncertainties will underestimate the full range of uncertainties of large-scale temperature reconstructions.”

How does all this even justify the AR5’s ‘medium’ confidence level? …I next wondered: exactly who were the paleoclimate experts that came up with this stuff? Here is the author list for Chapter 1.

Wuebbles, D.J., D.R. Easterling, K. Hayhoe, T. Knutson, R.E. Kopp, J.P. Kossin, K.E. Kunkel, A.N. LeGrande, C. Mears, W.V. Sweet, P.C. Taylor, R.S. Vose, and M.F. Wehner.

I am familiar with half of these scientists (a few of whom I have a great deal of respect for), somewhat familiar with another quarter, and unfamiliar with the rest. I looked them up to see which of them were the paleoclimate experts. Only two authors (Kopp and LeGrande) appear to have any expertise in paleoclimate, albeit on topics that don’t directly relate to the Key Finding. This is in contrast to the IPCC AR5, which devotes an entire chapter to paleoclimate, with substantial expertise among its authors.

That is a big lapse: not having an expert on your author team for one of six key findings. This isn’t to say that a non-expert can’t do a good job of assessing this topic with a sufficient level of effort. However, the level of effort here didn’t seem to extend to reading IPCC AR5 Chapter 5, particularly section 5.3.5.2.

Why wasn’t this caught by the reviewers? The NCA4 advertises an extensive in-house and external review process, including the National Academies. …

Confidence guidance in the NCA4. {Important!}

Exactly what does the NCA4 mean by ‘high confidence’? The confidence assessment used in the NCA4 is essentially the same as that used in the IPCC AR5. From “About this report” in the NCA4 …

“Confidence in the validity of a finding based on the type, amount, quality, strength, and consistency of evidence (such as mechanistic understanding, theory, data, models, and expert judgment); the skill, range, and consistency of model projections; and the degree of agreement within the body of literature. …

“Assessments of confidence in the Key Findings are based on the expert judgment of the author team. Confidence should not be interpreted probabilistically, as it is distinct from statistical likelihood.”

These descriptions for each confidence category don’t make sense to me. The words ‘low’, ‘medium’ etc. seem at odds with the descriptions of the categories. Also, what happened to the ‘very low’ confidence category from the IPCC AR5? The AR5 uncertainty guidance doesn’t give verbal descriptions of the confidence categories, although it does include the following figure …

…The uncertainty guidance for the AR4 provides some insight into what is actually meant by these different confidence categories, although this quantitative specification was dropped for the AR5.

Well, this table is certainly counterintuitive to my understanding of confidence. If someone told me that their conclusion had one or two chances in ten of being correct, I would have no confidence in that conclusion, and would wonder why we are even talking about ‘confidence’ in this situation. ‘Medium confidence’ implies a conclusion that is ‘as likely as not’; why have any confidence in this category of conclusions, when an opposing conclusion is equally likely to be correct?

Given the somewhat flaky guidance from the IPCC regarding confidence, the NCA4 confidence descriptions are a step toward clarity, but the categories defy the words used to describe them. For example:

‘High confidence’ is described as ‘Moderate evidence, medium consensus.’ The words ‘moderate’ and ‘medium’ sound like ‘medium confidence’ to me.

‘Medium confidence’ is described as ‘Suggestive evidence (a few sources, limited consistency, models incomplete, methods emerging); competing schools of thought.’ Sounds like ‘low confidence’ to me.

‘Low confidence’ is described as inconclusive evidence, disagreement or lack of opinions among experts. Sounds like ‘no confidence’ to me.

‘Very high confidence’ should be reserved for evidence where there is very little chance of the conclusion being reversed or whittled down by future research; findings that have stood the test of time and a number of different challenges.

As pointed out by Risbey and Kandlikar (2007), it is very difficult (and perhaps not very meaningful) to disentangle confidence from likelihood when the confidence level is medium or low. …Such misleading terminology contributes to misleading overconfidence in the conclusions – apart from the issue of the actual judgments that go into assigning a confidence level to one of these categories. …

Solutions to overconfidence.

I have written multiple blog posts previously on strategies for addressing overconfidence, including:

… The issue here is overconfidence of scientists and ‘systemic vice’ in policy-relevant science, where the overconfidence harms both the scientific and decision-making processes. See these previous articles.

…The most disturbing point here is that overconfidence seems to ‘pay’ in terms of an individual’s influence in political debates about science. There doesn’t seem to be much downside for individuals or groups that are eventually proven wrong. So scientific overconfidence seems to be a victimless crime, with the only ‘victims’ being science itself, and then the public, who have to live with inappropriate decisions based on this overconfident information.

So what are the implications of all this for understanding overconfidence in the IPCC and particularly the NCA? Cognitive biases in the context of an institutionalized consensus building process have arguably resulted in the consensus becoming increasingly confirmed in a self-reinforcing way, with ever growing confidence. The ‘merchants of doubt’ meme has motivated activist scientists (as well as the institutions that support and assess climate science) to downplay uncertainty and overhype confidence in the interests of motivating action on mitigation.

Read the full post to see her full analysis – and her recommendations.

—————————————-

Another perspective on these things: skepticism

Four out of the five top comic industry experts said Dilbert had no commercial potential. Five out of six doctors and specialists told me my voice problem was incurable. And every expert was wrong on nutrition for decades. — Scott Adams (@ScottAdamsSays) January 3, 2019

About Judith Curry

Judith Curry retired as a Professor of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. She is now President and co-owner of Climate Forecast Applications Network (CFAN). Prior to joining the faculty at Georgia Tech, she served on the faculties of the University of Colorado, Penn State University and Purdue University.

She has served on the NASA Advisory Council Earth Science Subcommittee, the DOE Biological and Environmental Science Advisory Committee, the National Academies Climate Research Committee, and Space Studies Board, and the NOAA Climate Working Group.

She is a Fellow of the American Meteorological Society, the American Association for the Advancement of Science, and the American Geophysical Union. Her views on climate change are best summarized by her Congressional testimony: Policy Relevant Climate Issues in Context, April 2013.

Follow Dr. Curry on Twitter at @curryja. Learn about her firm, CFAN, at their website.

This series about the corruption of climate science

The stakes are too high. We cannot afford this.

For More Information

If you liked this post, like us on Facebook and follow us on Twitter. For more information about this vital issue see all posts about uncertainties in climate science, about Judith Curry, about the keys to understanding climate change and especially these posts …

Alarmists worked hard to keep you from reading this book.

Alarmists have worked long and hard to discredit Roger Pielke Jr., because he tells us about the IPCC and peer-reviewed research – things that violate the “narrative” about our imminent doom.

They really do not want you to read the revised second edition of The Rightful Place of Science: Disasters & Climate Change. See my review of the first edition. Here is the publisher’s summary …

“After nearly every hurricane, heatwave, drought, or other extreme weather event, commentators rush to link the disaster with climate change. But what does the science say?

“In this fully revised and updated edition of Disasters & Climate Change, renowned political scientist Roger Pielke Jr. takes a close look at the work of the Intergovernmental Panel on Climate Change, the underlying scientific research, and the climate data to give you the latest science on how climate change is related to extreme weather. What he finds may surprise you and raise questions about the role of science in political debates.”