by Judith Curry

Motivated reasoning affects scientists as it does other groups in society, although it is often pretended that scientists somehow escape this predicament.

Motivated reasoning has been put forward as the reason why educated conservatives reject the consensus on climate change science. This post examines the thesis that motivated reasoning by climate scientists is adversely impacting public trust in climate science and providing a reason for people to reject the consensus on climate change science.

Microethics vs Macroethics

I have had a draft post on Microethics vs Macroethics sitting around for almost a year. Ideally, I should have completed that post before this one, but an email exchange with Dan Kahan motivated me to write this post instead. So here is a quick overview of my points re microethics vs macroethics. This particular framing of the ethical dilemmas for research scientists came to my attention in the context of materials that have been provided to universities in support of training for responsible conduct in research. Research scientists all have a responsibility to adhere to the principles of ethical research and professional standards as outlined in the document On being a scientist. But what happens when other responsibilities get in the way of these professional standards?

As a researcher, what kinds of responsibilities do you have to:

your conscience (micro)

your colleagues (micro)

institutions (micro/macro)

the public (macro)

the environment (macro)

One can imagine many different types of motivated reasoning across this spectrum of micro/macro ethical responsibilities that can either bias the scientific process or even violate professional standards. Climate science has many examples to provide in this regard.

Noble causes

Scientists may either bias their research in favor of concerns about public policy and the environment in subtle ways, or they may actively work to suppress evidence, and in some instances they may proactively manufacture evidence to discredit their opponents.

To start: Reiner Grundmann at Die Klimazwiebel has a recent article entitled Science for a good cause? Excerpts:

Imagine the following scenario. An atmospheric scientist makes a discovery that seems to challenge a particular model of sea level increase due to global warming. She expects her discovery will be refined through further research, and that, in the end, it will not refute the mainstream view. In the meantime, she wants to avoid giving ammunition to climate skeptics, so she postpones publication. But an ambitious postdoc surreptitiously informs the media about the discovery. The media accuse the scientist of a cover-up and report that key evidence for anthropogenic climate change has been refuted.



How would you react if someone concludes in the following way: ‘The atmospheric scientist was not wrong to withhold the information from the public; she wisely foresaw the danger that it would be deployed in misleading ways and attempted to do her bit for the promotion of public freedom’.

This is not a scenario invented by myself, but by the philosopher of science Philip Kitcher, recounted in a review of his book by Mark Brown (Science in a Democratic Society, Prometheus Books, Amherst, New York, 2011; review article by Mark Brown, published in Minerva 51:389–397, DOI 10.1007/s11024-013-9233-y).

In my view this comment exemplifies a problematic attitude not only in climate science but in the social sciences as well. The good cause which allegedly motivates much of the research puts the researcher in a special position. It allows them to dispense with essential standards of professional conduct. It is perhaps not remarkable that we see a ‘leading figure’ in the philosophy of science defend questionable practices which have been modelled (not by accident I suppose) after the famous climategate affair.

The risks for the credibility of science (no matter which branch or discipline) are clear. Anyone who comes across such commentary will take this as confirmation that science can be twisted according to the will of scientists (or elites); that science is constructed (in the vulgar sense of being ‘made up’ and ‘fake’); and that scientists preserve the prerogative of making judgements which data are for public consumption and which are not.

As I pointed out in a recent talk, motivated reasoning is a problem for scientists. It affects scientists as it does other groups in society, although it is often pretended that scientists somehow escape this predicament. The above comment from Kitcher (‘the atmospheric scientist was not wrong to withhold the information from the public’) is a powerful illustration of social scientists falling into the trap of motivated reasoning, justifying questionable professional standards through recourse to alleged higher ethical standards.

Scientists will only be able to command trust in society if they follow basic professional standards. Prime among them is to publish the results of their research, no matter if they support a desirable storyline or not.

Last year, I encountered a stark example of this. One of my colleagues was thinking about publishing a paper that challenges the IPCC interpretation of the previous pause during the 1940s to 1970s. My colleague sent a .ppt presentation on this topic to three colleagues, each of whom is a very respected senior scientist and none of whom has been a particularly vocal advocate on the subject of climate change (names are withheld to protect the guilty/innocent). Each of these scientists strongly encouraged my colleague NOT to publish this paper, since it would only provide fodder for the skeptics. (Note: my colleague has not yet written this paper, but not because he was discouraged by these colleagues.)

What is at issue here is a conflict between the microethics of individual responsibility for responsible conduct of research and larger ethical issues associated with the well-being of the public and the environment. Most such examples are related to suppression of evidence, including attempts to stifle skeptical research (particularly its publication and dissemination to the public); the Climategate emails provide abundant examples of this.

A more pro-active example of this conflict is the curious case of Peter Gleick and the Heartland Affair. On my post Gleick’s integrity, I wrote:

Gleick’s ‘integrity’ seems to have nothing to do with scientific integrity, but rather loyalty to and consistency with what I have called the UNFCCC/IPCC ideology.

When ‘Heartlandgate’ first broke, I saw no parallels with Climategate. Now, with the involvement of Gleick, there most certainly are parallels. There is the common theme of climate scientists compromising personal and professional ethics, integrity, and responsibility, all in the interests of a ’cause’.

Fuller and Mosher’s book Climategate: The CruTape Letters argued that ‘noble cause corruption’ was a primary motivation behind the Climategate deceits. Noble cause corruption is when the ends (noble) justify the means (ignoble). I think that there is an element of this that can be seen in the Climategate emails, but I think the motivated reasoning by climate scientists is more complex (and ultimately less ‘noble’).

Institutional loyalties

In the early days of this blog, one of my more controversial essays was Reversing the positive feedback loop, which lays out motivated reasoning associated with institutional loyalties. Excerpts (with some slightly toned down wording):

Once the UNFCCC treaty was a done deal, the IPCC and its scientific conclusions were set on a track to become a self-fulfilling prophecy. The entire framing of the IPCC was designed around identifying sufficient evidence so that human-induced greenhouse warming could be declared unequivocal, thereby providing the rationale for developing the political will to implement and enforce carbon stabilization targets. National and international science programs were funded to support the IPCC objectives.

Were [these] just hardworking scientists doing their best to address the impossible expectations of the policy makers? Well, many of them were. However, at the heart of the IPCC is a cadre of scientists whose careers have been made by the IPCC. These scientists have used the IPCC to jump the normal meritocracy process by which scientists achieve influence over the politics of science and policy. Not only has this brought some relatively unknown, inexperienced and possibly dubious people into positions of influence, but these people become vested in protecting the IPCC, which has become central to their own career and legitimizes playing power politics with their expertise.

When I refer to the IPCC dogma, it is the religious importance that the IPCC holds for this cadre of scientists; they will tolerate no dissent, and seek to trample and discredit anyone who challenges the IPCC. Some are mid to late career middle-ranking scientists who have done OK in terms of the academic meritocracy. Others were still graduate students when they were appointed as lead authors for the IPCC. These scientists have used the IPCC to gain a seat at the “big tables” where they can play power politics with the collective expertise of the IPCC, to obtain personal publicity, and to advance their careers. This advancement of their careers is done with the complicity of the professional societies and the institutions that fund science. Eager for the publicity, high impact journals such as Nature, Science, and PNAS frequently publish sensational but dubious papers that support the climate alarm narrative.

Especially in the renascent subfields such as ecology and public health, these publications and the media attention help steer money in the direction of these scientists, which buys them loyalty from their institutions, who appreciate the publicity and the dollars.

Further, the institutions that support science use the publicity to argue for more funding to support climate research and its impacts. And the broader scientific community inadvertently becomes complicit in all this. While the IPCC proponents loudly cry out against the heretical skeptical scientists and the dark influences of big oil and right wing ideology that are anti-science, we all join in bemoaning these dark forces that are fighting a war against science, and support the IPCC against its critics.

So do I think IPCC scientists are policy advocates? They seem mainly concerned with preserving the importance of the IPCC, which has become central to their professional success, funding, and influence. Most don’t understand the policy process or the policy specifics; they view the policy as part and parcel of the IPCC dogma that must be protected and preserved at all cost, else their success, funding and influence will be in jeopardy.

Back in 2010, this post raised the ire of a number of people. My response to people that were angered by my post: ‘If the shoe fits, wear it; if it doesn’t, don’t.’

The existence of an institutionalized consensus further complicates the issue, and an additional motivation comes into play. In my paper No consensus on consensus, I used this quote from Jean Goodwin:

“Once the consensus claim was made, scientists involved in the ongoing IPCC process had reasons not just to consider the scientific evidence, but to consider the possible effect of their statements on their ability to defend the consensus claim.”

Loyalty to colleagues

The issue of loyalty to colleagues came starkly to the forefront in response to the release of the Climategate emails. I was criticized by colleagues for my early essays, because talking about even the broad issues of uncertainty, transparency, loss of trust, etc. was viewed as insensitive to the feelings of the individual scientists involved (and not helping the ’cause’). Jerry North stated publicly that he would not read the emails out of respect for the scientists involved.

This issue was made very explicit by the title of the Scientific American article Climate Heretic: Judith Curry Turns on Her Colleagues. Of all the issues raised by Climategate and the points I had been trying to make about overconfidence and uncertainty, transparency, engaging with skeptics, etc., the main issue of interest in all this was construed as me turning against my colleagues? It was hard for me to understand this at first, but then I realized that by talking about uncertainty and engaging with skeptics I was following the playbook according to the merchants of doubt meme. So talking about topics that I regarded as efforts needed to rebuild the credibility of climate science was regarded by my ‘colleagues’ as damaging to the consensus.

Early on in my statements about Climategate, I became aware that my statements were looked upon very unfavorably by some scientists, particularly those who were vocal advocates of the IPCC and UNFCCC policies. As an example, Peter Webster related a conversation at a professional meeting in 2010 with a young scientist who said something like: ‘You know, Judy is REALLY unpopular among the scientists at the lab. I’m not sure, but I think she might be right. I can say that to you but of course I wouldn’t dare say that at the lab.’

I soon realized that by doing this, I was pretty much destroying any chance I might have had for further recognition/awards by professional societies such as the AGU. I also thought I risked unfavorable reviews of my papers and grant proposals (this has definitely not happened). I have become a minor hero to some for my advocacy of integrity in climate research. So does any of the above matter to me?

Last June, I encountered at a meeting an elected official of one of the major professional societies, who was not unsympathetic to my positions. He asked me: “I have wondered what possessed you to break loose from the mainstream opinions of the community, with potentially adverse professional consequences.” My response was that I was doing this because I thought it was the right thing to do, and that I thought that someone needed to stand up as an advocate for professional responsibility and integrity in climate research. And I inched into all this, with the adverse response from my ‘colleagues’ further justifying to me the need to do what I was doing. So in context of the microethical dilemmas, I went with my conscience, which told me to put professional responsibility and integrity ahead of the norms and desires of my colleagues and the institutions of climate science. It is still astonishing to me that there should be such a conflict.

I can understand how a personal conflict can arise between professional responsibility/integrity and an environmental or social issue that the individual deems to be very important (scientists working on the atom bomb are an example here). But conflicts between professional responsibility/integrity and loyalty to colleagues/institutions seem to me very difficult to justify in a way that is not self-serving. The only non-self-serving justification that I can think of (and one that I fell for, for a while) was solidarity in fighting against a ‘war on science.’ I now understand that there was a heck of a lot of motivated reasoning in putting forward the ‘war on science’ argument.

My ‘ostracism’ from the IPCC advocacy ‘tribe’ has been noted by other scientists who are quietly sympathetic to my position. As an example, several years ago at a conference, one of the speakers was quite critical of one piece of the conventional IPCC wisdom, but prefaced the talk with a statement something like this: ‘While my talk contains some evidence that challenges some of the findings of the IPCC, I want to state up front that I support the IPCC consensus on climate change.’ After the talk, I asked this scientist why he felt the need to preface his talk with a statement of IPCC allegiance, when his research was rather devastating to part of the IPCC’s argument. He stated ‘I don’t want to have to put up with what you have had to, so I make it very clear that I support the IPCC consensus.’

Dan Kahan’s post (discussed on the previous Scientific Evidence thread) included a statement that I find to be particularly apt here:

But if I take the wrong position on the issue relative to the one that predominates in my group, I might well cost myself the trust and respect of many on whose support I depend, emotionally, materially and otherwise.

My treatment at the hands of the consensus police has apparently discouraged some other scientists from publicly following suit. On the other hand, perhaps I have helped to pave the way for the emergence of a Tamsin Edwards. It will be interesting to see how all this plays out. And all this is why I regard the institutionalization of climate tribalism such as evidenced by the recent AGU statement on climate change to be so pernicious to the field of climate science.

Impact on trust

On his latest post, Dan Kahan makes the following points:

Those points reduce to three:

Members of the public do trust scientists.

Members of culturally opposing groups distrust each other when they perceive their status is at risk in debates over public policy.

When facts become entangled in cultural status conflicts, members of opposing groups (all of whom do trust scientists) will form divergent perceptions of what scientists believe.

Here is some counter evidence regarding blanket trust in scientists. From an article entitled Responses of the legal order to the loss of trust in science:

It is doubtful that there is a general reduction in the public’s trust in science. [O]nly certain fields of scientific research are regarded as controversial, that is, primarily the biosciences, but also fields like the environment, reproductive medicine, communications technology and protection of privacy.

Losses of trust are valid not only for certain fields of science, but also for certain institutions, especially when political or economic partial interests impel such institutions to drive certain scientific developments whose advantages for the public are not clearly evident.

Liz Neely has a relevant post Advocacy and trust. Excerpts:

The best specific resource I know of is an E&E News article from last summer in which Paul Voosen covered some particularly relevant research by Jon Krosnick. Krosnick wanted to go beyond simplistic ideas of scientists being trusted or not trusted, and instead delve into the difference that advocacy messages make. Using footage of real climate scientists making public remarks, Krosnick was able to test the difference between a science-only message versus one of science + a “call to arms.” (Elegant, isn’t it?) He found it did make a significant difference… for some audiences:

For a cohort of 548 respondents who either had a household income under $50,000 or no more than a high school diploma, the results were stunning and statistically significant. Across the board, the move into politics undermined the science. The viewers’ trust in the scientist dropped 16 percentage points, from 48 to 32 percent. Their belief in the scientist’s accuracy fell from 47 to 36 percent. Their overall trust in all scientists went from 60 to 52 percent. Their belief that government should “do a lot” to stop warming fell from 62 to 49 percent. And their belief that humans have caused climate change fell 14 percentage points, from 81 to 67 percent.

Those numbers knock me back, but… I haven’t seen the paper. I checked with Paul yesterday – to the best of our knowledge, it remains in pre-publication*, so I can’t speak to specifics. While I am cautious in running too far with this, it provides mounting evidence that different dynamics are governing different segments of “the public.” If we know that distrust in science increases with increasing education among political conservatives, and we know that advocacy increases distrust in science and scientists among those at the lower end of the socioeconomic spectrum, what does that mean?

I genuinely don’t know.

Concluding statements

This essay was motivated by a very interesting email conversation with Dan Kahan. I have put forward a thesis that is supported by some anecdotal evidence and arguments (names have been omitted to protect the innocent/guilty). This post is sort of a prolegomena to what I hope will be future studies that investigate the sociology and psychology of scientists and motivated reasoning, and its influence on public trust in science.

Climate change is arguably a unique case in all of science, owing to the magnitude of the socioeconomic impacts of both the problem and the proposed solutions, and the massive institutionalization of a consensus that has been manufactured by the IPCC.

While there may be genuinely difficult ethical challenges associated with perceived noble causes, I am particularly concerned about microethical conflicts involving colleagues and scientific institutions that apparently justify self-serving irresponsible professional behavior, both by individuals and institutions. This seems much worse to me than politically motivated reasoning by members of the public.

I have no personal attachment to the hypotheses presented here; I fervently hope that someone can justifiably demonstrate that my thesis is incorrect. But to me there seems like a heck of a lot of evidence supporting my thesis, only a fraction of which can be included in a blog post.

Personally, I have felt the need to break loose of the shackles of loyalty to colleagues and institutions when it comes at the expense of integrity in science and professional conduct. I envy Richard Muller, who comes at the issue of climate science without the baggage associated with loyalty to colleagues or institutions in the climate field; rather, his colleagues are a very elite group of physicists. Muller’s approach of securing private funding and publishing his papers first on the internet has allowed him to avoid the shackles that I rather uncomfortably had to break away from. Private funding, the internet, and the emergence of scientists from outside the traditional community (not just Muller’s team but also Steve McIntyre, Nic Lewis etc.) bode well for improving the integrity of climate science in the 21st century and diminishing the effectiveness of the consensus police.

But I am hoping this essay will promote some self-reflection among climate scientists regarding their own ethical conflicts and values. Unfortunately, I suspect this essay will also trigger a backlash from the consensus police (absence of such a backlash would help disprove my thesis, ha ha), but I am pretty much immune to all that at this point.