Background

Human cognition has been shown to be prone to biased interpretations of reality. People tend to believe falsely that they are better than others (Brown, 1986; Pronin, Lin, & Ross, 2002), that their own skills can determine their success in a purely chance task (Langer, 1975), or that certain bogus treatments they follow can miraculously cure their diseases (Matute, Yarritu, & Vadillo, 2011). These false beliefs, typically known as cognitive illusions, have often been related in the psychological literature to mental health and well-being (Lefcourt, 1973; Taylor, 1989; Taylor & Brown, 1988). However, do cognitive illusions have beneficial consequences in all cases? Current discussion in the literature suggests that whereas biases and illusions can often contribute to adaptive adjustment, this is not always the case (see McKay & Dennett, 2009, for an extensive review).

One psychological approach states that cognitive illusions are an adaptive mechanism that ensures the person's fit to the environment (Taylor & Brown, 1988). From this perspective, the cognitive system has evolved to interpret the world unrealistically, in a manner that protects the self. In this framework, illusions related to the perception of relationships between events, such as illusory correlations (Chapman & Chapman, 1969), illusions of control (Alloy & Abramson, 1979; Langer, 1975), or causal attributional biases (Kelley, 1972), are typically assumed to play an important role in psychological well-being (Taylor & Brown, 1988). It has been argued that instead of interpreting environmental information rationally, people tend to adjust the environmental data to their prior conceptualization of the world in a self-serving manner (Fiske & Taylor, 1984; Lefcourt, 1973; Nisbett & Ross, 1980; Zuckerman, 1979). For instance, it has been found that the illusion of control, a bias by which people overestimate their own control over uncontrollable outcomes (Langer, 1975), works differently as a function of mood, which has sometimes been interpreted as supporting its role as a self-esteem protection mechanism. Whereas non-depressive people view themselves as controlling outcomes that are actually uncontrollable (i.e., the illusion of control), depressive people detect the absence of any relationship between their actions and the desired outcomes. This has been called depressive realism (Alloy & Abramson, 1979; Blanco, Matute, & Vadillo, 2012; Msetfi, Murphy, Simpson, & Kornbrot, 2005). Given that the perception of uncontrollability is related to helplessness and depression (Abramson, Seligman, & Teasdale, 1978), some researchers have suggested that either depressed people are depressed because they do not show an illusion of control, or they do not develop the illusion because they are depressed (Alloy & Clements, 1992). In either case, this is an example of how the illusion of control could be related to well-being under this framework (but see Blanco et al., 2012; Msetfi et al., 2005, for more neutral interpretations of this illusion).

A rather different approach suggests that cognitive illusions are merely by-products of a cognitive system that is responsible for extracting knowledge about the world (Beck & Forstmeier, 2007; Haselton & Nettle, 2006; Matute et al., 2011; Tversky & Kahneman, 1974). The discussion nowadays revolves around the benefits and costs of establishing false beliefs (Haselton & Nettle, 2006). From this point of view, cognitive illusions are not beneficial per se. Instead, they would be the necessary cost assumed by an overloaded cognitive system that tries to make sense of a vast amount of information (Tversky & Kahneman, 1974). The manifestations of this cost range from superstitious behaviour, magical thinking, and pseudoscientific beliefs (Matute, 1996; Matute et al., 2011; Ono, 1987; Vyse, 1997) to prejudice, stereotyped judgements, and extremism (Hamilton & Gifford, 1976; Lilienfeld, Ammirati, & Landfield, 2009). The previously mentioned self-serving illusions (Taylor & Brown, 1988) would be interpreted as part of this cost under this view.

Therefore, while keeping in mind that cognitive illusions can sometimes yield benefits related to psychological well-being, there are also cases in which their collateral costs can have serious negative consequences. Take, as an example, a person who develops the false belief that a pseudoscientific (i.e., innocuous, at best) treatment produces recovery from a disease from which he or she is suffering. Believing that the pseudoscientific treatment is effective, that person could underestimate the effectiveness of a medical treatment that actually works. This bias could lead the person to reject the truly effective treatment and, consequently, suffer the consequences of that decision. Or, in another example, if a person believes that a certain minority group has higher rates of delinquency, how could we convince that person, in the light of evidence, that his or her belief is not true? The two scenarios drawn here show that a false belief could, under certain conditions, interfere with the establishment of grounded, evidence-based knowledge.

Despite the theoretical and practical relevance of this problem, there are, to our knowledge, very few studies focusing on how illusory beliefs affect the acquisition of evidence-based knowledge. One of the few studies we are aware of is that of Chapman and Chapman (1969). They found that illusory correlations in the interpretation of projective tests could blind psychologists to the presence of valid correlations between symptoms. However, their study does not make clear how the illusions developed, nor the mechanism by which their occurrence could impair the detection of real correlations. While the applied nature of their study was certainly commendable, it meant that several aspects outside experimental control, such as previous knowledge, the credibility of the source from which the illusion was acquired, the strength of the belief, or the number of years the psychologists had maintained the illusory belief, could, at least in principle, have affected the results. Because their goal was highly applied, Chapman and Chapman did not create experimental conditions or manipulate these illusory correlations; they only selected the most frequent erroneous interpretations of a projective test. The main goal of the present work is to explore, in an experimental setting, the potential interference that illusory beliefs might exert over the subsequent learning of new evidence-based knowledge, and to propose a broad mechanism by which this could occur.

Cognitive illusions usually involve beliefs about causal relationships between events that are, in fact, unrelated (i.e., the illusion of causality). For instance, the illusion of control involves the belief that our own action (the potential cause) produces the occurrence of the desired goal (the outcome). The experimental literature on causal learning is a fruitful framework for studying these cognitive illusions (Matute et al., 2011). Many causal learning experiments have shown that learning about the relationship between a cause and an effect influences the subsequent learning of another cause that is paired with the same outcome. These effects belong to a family of learning phenomena known as cue interaction. When two potential causes, A and B, are presented simultaneously and paired with an outcome, they compete to establish a causal association with that outcome. In these cases, previous experience or previous knowledge about the relationship between one of the causes and the outcome determines what can be learned about the second cause. For instance, the learner may believe that one of the potential causes produces the outcome or, on the contrary, that one of the causes prevents the occurrence of the outcome. In both cases, this previous belief about one of the causes, say A, will affect what can be learned about the other cause, B, when both causes are presented together and followed by the outcome. In the first case, when the previous belief is that A produces the outcome, the detection of a causal relationship between the second cause, B, and the outcome will be impaired (this particular case of cue interaction is generally known as the blocking effect; Kamin, 1968). In the second case, when the previous belief is that A prevents the outcome from occurring, the detection of a causal relationship between the second cause, B, and the outcome will be facilitated (this case is generally known as superconditioning; Rescorla, 1971). Many cue interaction experiments, with both animals and humans, show that learning about the relationship between a potential cause and an outcome can be altered when that cause is presented in compound with another potential cause that has previously been associated either with the outcome or with its absence (Aitken, Larkin, & Dickinson, 2000; Arcediano, Matute, Escobar, & Miller, 2005; Dickinson, Shanks, & Evenden, 1984; Kamin, 1968; Luque, Flores, & Vadillo, 2013; Luque & Vadillo, 2011; Morís, Cobos, Luque, & López, 2014; Rescorla, 1971; Shanks, 1985).
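The logic of cue competition can be made concrete with a short simulation. The sketch below implements the Rescorla-Wagner learning rule, one standard associative account of blocking and superconditioning; the choice of this model, the parameter values, and the trial schedules are our illustrative assumptions, not details drawn from the present experiment.

```python
# Minimal sketch of cue interaction under the Rescorla-Wagner rule.
# The learning rate, asymptote, and trial counts are illustrative
# assumptions, not parameters of the present experiment.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return associative strengths V after a sequence of trials.

    Each trial is (cues, outcome): a tuple of cue names plus 1/0
    indicating whether the outcome occurred. All cues present on a
    trial share a single prediction error (the core of cue competition).
    """
    V = {}
    for cues, outcome in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        error = (lam if outcome else 0.0) - prediction
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Blocking: pretraining A -> outcome leaves no prediction error to
# drive learning about B when AB -> outcome trials follow.
phase1 = [(("A",), 1)] * 20
phase2 = [(("A", "B"), 1)] * 20
print(rescorla_wagner(phase1 + phase2)["B"])  # near 0: B is blocked

# Control: without pretraining, A and B share the outcome equally.
print(rescorla_wagner(phase2)["B"])           # near 0.5

# Superconditioning: if A is first trained as an inhibitor
# (X -> outcome, XA -> no outcome), learning about B is facilitated.
inhibition = [(("X",), 1), (("X", "A"), 0)] * 20
print(rescorla_wagner(inhibition + phase2)["B"])  # above 0.5
```

The key design feature is that all cues present on a trial share a single prediction error, so a cue that already predicts the outcome (or its absence) changes how much can be learned about a companion cue.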

Therefore, given that previous causal knowledge can interfere with the learning of new causal knowledge, and given that previous knowledge could in principle be illusory, a question of interest is whether the development of cognitive illusions could interfere with the development of new, evidence-based causal knowledge. To answer this question, we designed the current experiment using a standard contingency learning task (Wasserman, 1990). In our experiment, participants learned about the effectiveness of some medicines through observation of fictitious patients: The fictitious patients either took a medicine or not, and they either recovered from the crises produced by a fictitious disease or not (Fletcher et al., 2001; Matute, Arcediano, & Miller, 1996). The experiment was divided into two learning phases. In the first phase, participants were exposed to information designed to induce the illusion that a medicine (Medicine A) that had no real effect on the patients' recovery was nevertheless effective. In this phase, two groups of participants differed in the information they received: For one group the illusion was induced to be high, and for the other it was induced to be low (see 2). In the second phase, the ineffective medicine used in the first phase, Medicine A, was always presented in compound with a new medicine (Medicine B), which actually did have a curative effect on the patients' disease. The question was whether the acquisition of an illusory causal relationship between the (ineffective) Cause A and the outcome during Phase 1 would interfere with subsequent learning about the causal relationship between the potential (and, in this case, actually effective) Cause B and the same outcome during Phase 2. We expected that the different degrees of illusion about Medicine A induced in the two groups during Phase 1 would lead participants to assess the effectiveness of Medicine B (i.e., the effective one) differently at the end of Phase 2. More specifically, we expected that the group in which we induced a higher illusion about the effectiveness of Medicine A would show greater difficulty than the other group in detecting that Medicine B was actually effective.
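As a concrete illustration of the contingency structure that such a task manipulates, the sketch below computes the standard ΔP index, P(outcome | cause) − P(outcome | no cause), for two hypothetical trial distributions; the cell frequencies are invented for illustration and are not the schedules used in the experiment.

```python
# Hypothetical sketch of the delta-P contingency index used in
# standard contingency learning tasks. Cell frequencies below are
# invented for illustration; they are not the actual schedules.
#   a: medicine taken, patient recovered
#   b: medicine taken, no recovery
#   c: no medicine, patient recovered
#   d: no medicine, no recovery

def delta_p(a, b, c, d):
    """P(recovery | medicine) - P(recovery | no medicine)."""
    return a / (a + b) - c / (c + d)

# Null contingency with a high overall recovery rate: many
# medicine-recovery coincidences despite zero real effect.
print(delta_p(a=24, b=8, c=6, d=2))    # 0.0: the medicine does nothing

# A genuinely effective medicine (positive contingency).
print(delta_p(a=24, b=8, c=8, d=24))   # 0.5
```

A medicine like Medicine A would correspond to the first case: frequent coincidences between the medicine and recovery despite a ΔP of zero, the kind of arrangement known to foster causal illusions (Matute et al., 2011).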