Imagine that you are a member of an advisory panel whose task is to assess the quality of a number of empirical studies. One study provides evidence that being raised by a same-sex couple increases the chances of suffering from certain developmental disorders. This hypothesis does not appear to carry any normative content and can be tested objectively, yet you may find it morally offensive, because reading about it may trigger some of your stereotypes and prejudices. You are aware of these facts, and yet remain convinced that your personal values will not influence your appraisal of the quality of the evidence gathered for the hypothesis. You are confident, furthermore, that the fact that you have a monetary incentive to assess properly how the evidence bears on the hypothesis will not make any significant difference to your considered judgment either. But will these convictions of yours be borne out in practice? Will your personal values play no significant role in your assessment of the evidence?

The conviction that your appraisal of a scientific study will remain unaffected in the situation just described is consistent with a traditional view in philosophy of science that relies on the distinction between epistemic and non-epistemic values (Kuhn 1977; Laudan 1984). According to this view, the assessment of how evidence bears on scientific hypotheses should be directed by epistemic values. Your judgments about what the evidence is for a certain hypothesis, and how strongly that evidence supports the hypothesis, should be affected only by truth-conducive, epistemic values, such as confirmation, empirical accuracy, and predictive and explanatory power.

Non-epistemic values, including moral and economic values, may enter into other stages of scientific practice (Machamer and Wolters 2004). Non-epistemic values may influence scientists’ choices about which research questions to work on; they may affect policy-makers’ decisions about how scientific results are to be used; and they may impact funding agencies’ judgments about which research projects deserve financial support. But, as maintained by many philosophers of science, the influence of non-epistemic values on these choices “is clearly not sufficient, by itself, to deprive the social or the natural sciences of their value-free character from a cognitive point of view” (Dorato 2004, p. 56; see Longino 1996 for a criticism of the epistemic vs. non-epistemic value dichotomy).

According to the ideal of the value-free character of science, scientists should strive to minimize the influence of non-epistemic values on their assessment of the evidence for a hypothesis; in particular, “propositions about what states of affairs are desirable or deplorable could never be evidence that things are, or are not, so” (Haack 1993, p. 35). Although non-epistemic values “may shape scientific knowledge to the extent that they play a role in the definition of research programs, in the choice of questions deemed scientifically interesting, in the way scientific results might be applied etc., this contextualization of the goals of science does not in itself threaten objectivity. More epistemologically challenging is the distinct charge that the very content of scientific knowledge is shaped by contextual values” (Ruphy 2006, pp. 189–90). In essence, the idea is that scientific reasoning is objective to the extent that the appraisal of scientific hypotheses, which contributes to producing scientific knowledge, is not influenced by non-epistemic values, but only by the available evidence.

This idea can be made more precise once we note that the evidence relation is a three-place relation connecting data, hypotheses and background knowledge. If one’s background knowledge includes information about the moral consequences of believing that a hypothesis is true (or is not true), then non-epistemic values can sometimes affect one’s assessment of the evidence for that hypothesis (Sober 2007). For instance, the non-moral proposition “Drug X is safe” is evidentially related to the moral proposition “Good consequences accrue to the patients” when it is also believed that the patients’ physician is competent and well meaning. In cases like this one, background knowledge allows moral and non-moral propositions to be evidentially related (i.e., one raises the probability of the other).
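The underlying evidential notion can be stated in standard Bayesian terms (a common formalization of confirmation, not notation drawn from the sources cited above):

```latex
% E is evidence for hypothesis H relative to background knowledge K
% just in case conditioning on E (given K) raises the probability of H:
\[
  E \text{ confirms } H \text{ relative to } K
  \quad\iff\quad
  P(H \mid E \wedge K) \;>\; P(H \mid K).
\]
```

On this reading, the background knowledge K does the work in the drug example: given K (that the patients’ physician is competent and well meaning), learning that drug X is safe raises the probability that good consequences accrue to the patients.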

Background knowledge also allows for the non-epistemic moral values that determine the expected utility of believing that a certain hypothesis is true (e.g., believing that drug X is safe) to provide a non-trivial lower or upper bound on the probability that the hypothesis is true (e.g., that drug X is safe) (Sober 2007, pp. 114–5). But, in this type of case, we should already possess information about the probability that the hypothesis is true in order to answer the question of whether believing the hypothesis has better moral consequences (i.e., higher expected utility) than not believing it. So, “judgments about the ethical consequences of believing a proposition cannot supply new information about the probability of the proposition” (Sober 2007, p. 117). The fact that believing a certain hypothesis has good or bad moral, social or political consequences cannot provide new evidence for or against the hypothesis’s being true.
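Sober’s point can be sketched in decision-theoretic terms (a standard expected-utility sketch in our own notation, not Sober’s exact formulation):

```latex
% The expected utility of believing H is a weighted average over the
% two possible states of the world, with weights given by P(H):
\[
  \mathrm{EU}(\mathrm{believe}\ H)
  \;=\;
  P(H)\, U(\mathrm{believe}\ H,\ H)
  \;+\;
  \bigl(1 - P(H)\bigr)\, U(\mathrm{believe}\ H,\ \neg H).
\]
```

Since \(P(H)\) already figures on the right-hand side, comparing the expected utility of believing H with that of not believing H presupposes, rather than supplies, information about the probability that H is true.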

The ideal of value-free science and objectivity in scientific reasoning can then be reformulated as the idea that non-epistemic values should not affect the appraisal of the relation between hypothesis, data, and background knowledge. In particular, moral information should not affect the assessment of the evidence available for a hypothesis over and above the hypothesis’s prior credibility.

Although the value-free ideal makes a normative claim, one obvious question is whether this normative ideal is actually attainable. In recent literature in philosophy, history and sociology of science, it has been argued that both epistemic and non-epistemic values are crucial for assessing what counts as sufficient evidence, and that social, political and economic structures influence the practice of science (e.g., Douglas 2009; Elliott 2011; Longino 1990; Reiss and Sprenger 2014, Sec 3; Resnik 2007).

Less attention has been paid to the psychology of scientific reasoning, and in particular to the psychological attainability of objective, value-free scientific reasoning. This is surprising, since reasoning and valuing are obviously psychological processes.

If reasoners systematically use moral values to gain new information about the probability of a given scientific hypothesis, then that would be strong evidence that directly speaks against the attainability of the value-free ideal. Insisting that science should be free of non-epistemic values, when human reasoners cannot achieve value-freedom, would perpetuate a myth “that interferes with the public’s understanding of the scientific process and may, paradoxically, undermine the public’s trust in science” (Elliott and Resnik 2014, p. 648).

While the social and institutional character of science might compensate for some of the effects of moral values on scientific reasoning (e.g., Longino 1990; Popper 1934), it would then make little sense to call for scientists to even try to achieve the value-free ideal. Instead, it may be more important to investigate the psychological mechanisms of scientific reasoning, and to examine more closely what kinds of social and institutional settings promote sound scientific reasoning.

Questions about the psychological attainability of objectivity in scientific reasoning are not new. In his 1620 Novum Organum, Francis Bacon already recognized that scientific reasoning can be systematically misled by several kinds of “idols.” In particular, Bacon anticipated that motivational factors that have little to do with epistemic value and objectivity are powerful determinants of scientific reasoning and judgment. As Bacon writes: “the human understanding resembles not a dry light, but admits a tincture of the will and passions, which generate their own system accordingly: for man always believes more readily that which he prefers” (Sect. 49). Bacon’s contention has been substantiated by a large body of empirical results in psychology (see e.g., Kunda 1990), which has hardly been discussed in philosophy of science in relation to the attainability of the value-free ideal.

One finding particularly relevant to these debates is that reasoning processes are influenced by several motivational factors. Two kinds of motivational factors are associated with accuracy goals and directional goals. Accuracy goals motivate reasoners to “arrive at an accurate conclusion, whatever it may be,” whereas directional goals motivate them to “arrive at a particular, directional conclusion” regardless of its accuracy (Kunda 1990, p. 480). Both kinds of motivations have been found to affect reasoning and judgment in a variety of tasks.

For example, Lord et al. (1979) famously provided evidence that people tend to interpret ambiguous scientific evidence as supporting their favored conclusion. In their study, participants were presented with two mock scientific reports concerning the effectiveness of the death penalty in deterring crime. While one report provided supporting evidence for the deterrent efficacy of the death penalty, the other provided disconfirming evidence. Participants’ prior convictions about the death penalty were found to predict their explanatory judgments. Both proponents and opponents of capital punishment rated the report that agreed with their prior convictions as more convincing, and were more adept at finding flaws in the report that disagreed with their prior convictions. As a result, the mixed evidence from the two reports led participants to become even more certain of their pre-existing beliefs regarding the efficacy of capital punishment.

Along the same lines, it has been shown that both scientists’ and laypeople’s explanatory judgments about the quality of the results and the methodological soundness of a piece of experimental research are predicted by their prior beliefs: experimental research is rated as being of higher quality and as methodologically sound when its results conform to those prior beliefs (Koehler 1993; Greenhoot et al. 2004).

Further experimental results have confirmed that judgment about scientific evidence is often biased in subtle and intricate ways (MacCoun 1998). We tend to assess scientific reports about the validity of a psychological test as more or less reliable as a function of our (good or bad) performance on the test (Wyer and Frey 1983; Pyszczynski et al. 1985). We disbelieve alleged medical evidence that suggests that certain behaviour has negative health consequences, if we routinely engage in that behaviour (Kunda 1987). We often employ less rigorous standards of assessment for information that favors our preferred conclusions than for information that we find undesirable (Ditto and Lopez 1992). More generally, motivational states that have no obvious epistemic value can influence many of our beliefs about the world (e.g., Norton et al. 2004; Uhlmann and Cohen 2005; Balcetis and Dunning 2006; Harris et al. 2009; Lewandowsky et al. 2013; see also Krizan and Windschitl 2007 for an evaluation of the literature on the desirability bias).

These findings suggest that the evaluation of scientific evidence may be biased by the extent to which its conclusions are found desirable. However, these studies provide only weak support for the claim that non-epistemic values systematically affect the appraisal of the relation between a scientific hypothesis, data, and background knowledge, because they did not control for the hypotheses’ prior credibility and did not assess the extent to which accuracy incentives can mitigate the effect of directional goals. So, they do not speak directly to the extent to which the perceived moral offensiveness of a scientific hypothesis can bias one’s assessment of the evidence available for the hypothesis over and above the hypothesis’s prior credibility.

It may be supposed that it is unlikely that the experimental participants in at least some of the studies reviewed above had any prior opinion relevant to the task or needed any special incentive to be accurate; and so it may be supposed that it would have been superfluous to control for hypotheses’ prior credibility or to manipulate accuracy goals. This supposition might be correct, yet the available empirical evidence does not support it.

In what follows, we report two studies in which we asked exactly how the prior credibility of scientific hypotheses, their perceived moral offensiveness, and the motivation to be accurate in judging their explanatory power affect one’s assessment of putative scientific reports. We controlled for the prior credibility of the hypotheses contained in the scientific reports, and we manipulated accuracy incentives. Thus, our studies contribute to advancing the current literature in the philosophy and psychology of scientific reasoning by providing strong evidence that non-epistemic values systematically and robustly affect the appraisal of the relation between hypothesis, data, and background knowledge.

Specifically, Study 1 tested whether differences in scientific hypotheses’ perceived moral offensiveness predict differences in explanatory judgments, even when the prior credibility of the hypotheses is controlled for. Study 2 tested whether a monetary incentive to be accurate in the assessment of the evidence has a mitigating effect on the impact of the perceived moral offensiveness of a hypothesis on explanatory judgments about the hypothesis.

Overall, our results show that explanatory judgments about a scientific hypothesis are robustly sensitive to the perceived moral offensiveness of the hypothesis. This finding directly supports the idea that one’s assessment of the evidence in support of a scientific hypothesis can be systematically affected by judgments about the moral value of the hypothesis, which suggests that scientific reasoning is imbued with non-epistemic values.

The rest of the paper is structured as follows. Section 2 describes a preliminary test we ran on the experimental material that was used in our two studies. Section 3 and Section 4 present our two studies. Section 5 puts the results into a broader philosophical perspective and discusses their implications for the psychology of explanatory reasoning and for the ideal of a value-free science. The Conclusion summarises our contribution to current literature and traces three avenues for further research.