On August 4, 1961, a young woman gave birth to a healthy baby boy in a hospital at 1611 Bingham St., Honolulu. That child, Barack Obama, later became the 44th president of the United States. Notwithstanding the incontrovertible evidence for the simple fact of his American birth—from a Hawaiian birth certificate to birth announcements in local papers to the fact that his pregnant mother went into the Honolulu hospital and left it cradling a baby—a group known as “birthers” claimed Obama had been born outside the United States and was therefore not eligible to assume the presidency. Even though the claims were met with skepticism by the media, polls at the time showed that they were widely believed by a sizable proportion of the public (Travis, 2010), including a majority of voters in Republican primary elections in 2011 (Barr, 2011).

The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.

In the United Kingdom, a 1998 study suggesting a link between a common childhood vaccine and autism generated considerable fear in the general public concerning the safety of the vaccine. The UK Department of Health and several other health organizations immediately pointed to the lack of evidence for such claims and urged parents not to reject the vaccine. The media subsequently widely reported that none of the original claims had been substantiated. Nonetheless, in 2002, between 20% and 25% of the public continued to believe in the vaccine-autism link, and a further 39% to 53% continued to believe there was equal evidence on both sides of the debate (Hargreaves, Lewis, & Speers, 2003). More worryingly still, a substantial number of health professionals continued to believe the unsubstantiated claims (Petrovic, Roberts, & Ramsay, 2001). Ultimately, it emerged that the first author of the study had failed to disclose a significant conflict of interest; thereafter, most of the coauthors distanced themselves from the study, the journal officially retracted the article, and the first author was eventually found guilty of misconduct and lost his license to practice medicine (Colgrove & Bayer, 2005; Larson, Cooper, Eskola, Katz, & Ratzan, 2011).

Another particularly well-documented case of the persistence of mistaken beliefs despite extensive corrective efforts involves the decades-long deceptive advertising for Listerine mouthwash in the U.S. Advertisements for Listerine had falsely claimed for more than 50 years that the product helped prevent or reduce the severity of colds and sore throats. After a long legal battle, the U.S. Federal Trade Commission mandated corrective advertising that explicitly withdrew the deceptive claims. For 16 months between 1978 and 1980, the company ran an ad campaign in which the cold-related claims were retracted in 5-second disclosures midway through 30-second TV spots. Notwithstanding a $10 million budget, the campaign was only moderately successful (Wilkie, McNeill, & Mazis, 1984). Using a cross-sectional comparison of nationally representative samples at various points during the corrective campaign, a telephone survey by Armstrong, Gurol, and Russ (1983) did reveal a significant reduction in consumers’ belief that Listerine could alleviate colds, but overall levels of acceptance of the false claim remained high. For example, 42% of Listerine users continued to believe that the product was still promoted as an effective cold remedy, and more than half (57%) reported that the product’s presumed medicinal effects were a key factor in their purchasing decision (compared with 15% of consumers of a competing product).

Those results underscore the difficulties of correcting widespread belief in misinformation. These difficulties arise from two distinct factors. First, there are cognitive variables within each person that render misinformation “sticky.” We focus primarily on those variables in this article. The second factor is purely pragmatic, and it relates to the ability to reach the target audience. The real-life Listerine quasi-experiment is particularly informative in this regard, because its effectiveness was limited even though the company had a fairly large budget for disseminating corrective information.

What causes the persistence of erroneous beliefs in sizable segments of the population? Assuming corrective information has been received, why does misinformation1 continue to influence people’s thinking despite clear retractions? The literature on these issues is extensive and complex, but it permits several reasonably clear conclusions, which we present in the remainder of this article. Psychological science has much light to shed on the cognitive processes with which individuals acquire, process, and update information.

We focus primarily on individual-level cognitive processes as they relate to misinformation. However, a discussion of the continued influence of misinformation cannot be complete without addressing the societal mechanisms that give rise to the persistence of false beliefs in large segments of the population. Understanding why one might reject evidence about President Obama’s place of birth is a matter of individual cognition; however, understanding why more than half of Republican primary voters expressed doubt about the president’s birthplace (Barr, 2011) requires a consideration of not only why individuals cling to misinformation, but also how information—especially false information—is disseminated through society. We therefore begin our analysis at the societal level, first by highlighting the societal costs of widespread misinformation, and then by turning to the societal processes that permit its spread.

We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.

We next move to misinformation at the level of the individual and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and ask why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles to debiasing, there are nonetheless a number of effective techniques for reducing the impact of misinformation, and we pay special attention to the factors that aid debiasing.

We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.

From Individual Cognition to Debiasing Strategies

We now turn to the individual-level cognitive processes that are involved in the acquisition and persistence of misinformation. In the remainder of the article, we address the following points: We begin by considering how people assess the truth of a statement: What makes people believe certain things, but not others? Once people have acquired information and believe in it, why do corrections and retractions so often fail? Worse yet, why can attempts at retraction backfire, entrenching belief in misinformation rather than reducing it? After addressing these questions, we survey the successful techniques by which the impact of misinformation can be reduced. We then discuss how, in matters of public and political import, people’s personal worldviews, or ideology, can play a crucial role in preventing debiasing, and we examine how these difficulties arise and whether they can be overcome. Finally, we condense our discussion into specific recommendations for practitioners and consider some ethical implications and practical limitations of debiasing efforts in general.

Debiasing in an Open Society

Knowledge about the processes underlying the persistence of misinformation, and about how misinformation effects can be avoided or reduced, is of obvious public interest. Today, information circulates in society faster and in greater amounts than ever before, and demonstrably false beliefs continue to find traction in sizable segments of the populace. The development of workable debiasing and retraction techniques, such as those reviewed here, is thus of considerable practical importance. An encouraging precedent for the effectiveness of such techniques on a large scale comes from Rwanda (e.g., Paluck, 2009), where a controlled, yearlong field experiment revealed that a radio soap opera built around messages of reducing intergroup prejudice, violence, and survivors’ trauma altered listeners’ perceptions of social norms and their behavior—albeit not their beliefs—in comparison with a control group exposed to a health-focused soap opera. This field study confirmed that large-scale change can be achieved using conventional media. (Paluck’s experiment involved delivery of the program via tape recorders, but this was for reasons of experimental control and convenience, and it closely mimicked the way in which radio programs are traditionally consumed by Rwandans.)

Concise recommendations for practitioners

The literature we have reviewed thus far may appear kaleidoscopic in its complexity. Indeed, a full assessment of the debiasing literature must consider numerous nuances and subtleties, which we aimed to cover in the preceding sections. It is nonetheless possible to condense the core existing knowledge about debiasing into a limited set of recommendations that can be of use to practitioners.3 We summarize the main points from the literature in Figure 1 and in the following list of recommendations:

Consider what gaps in people’s mental event models are created by debunking and fill them using an alternative explanation.

Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.

To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.

Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.

Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.

Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.

If you must present evidence that is threatening to the audience’s worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.

You can also circumvent the role of the audience’s worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.
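As a concrete illustration, the following minimal sketch shows one way a practitioner might template a debunking message that combines several of the recommendations above: lead with the fact, warn explicitly before stating the myth (once), and supply an alternative explanation to fill the gap the retraction leaves behind. The structure and field names are our hypothetical construction, not a validated tool; the content paraphrases the vaccine case discussed earlier.

```python
# Hypothetical sketch of a debunking-message template that follows the
# recommendations above: fact first, explicit warning before the myth,
# myth stated only once, and an alternative explanation to fill the gap.
from dataclasses import dataclass

@dataclass
class Debunking:
    fact: str         # the core fact, stated simply and up front
    myth: str         # the misinformation, mentioned once at most
    explanation: str  # alternative account that fills the causal gap

    def render(self) -> str:
        return "\n\n".join([
            self.fact,                        # emphasize the fact, not the myth
            "A common myth, however, claims otherwise. "
            "The following claim is FALSE:",  # explicit warning before the myth
            self.myth,
            self.explanation,                 # fill the gap left by the retraction
        ])

message = Debunking(
    fact="Large epidemiological studies find no link between childhood "
         "vaccines and autism.",
    myth="Childhood vaccines cause autism.",
    explanation="The belief arose from a small 1998 study that was later "
                "retracted after its author's undisclosed conflict of "
                "interest and misconduct came to light.",
)
print(message.render())
```

Keeping the rendered message shorter and simpler than the myth it corrects also guards against the overkill backfire effect noted above.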

Concluding Remarks: Psychosocial, Ethical, and Practical Implications

We conclude by discussing how misinformation effects can be reconciled with the notion of human rationality, before addressing some limitations and ethical considerations surrounding debiasing and pointing to an alternative, behavioral approach for counteracting the effects of misinformation.

Thus far, we have reviewed copious evidence of people’s inability to update their memories in light of corrective information, and we have shown how worldview can override fact and how corrections can backfire. One might be tempted to conclude from those findings that people are somehow characteristically irrational, or cognitively “insufficient.” We caution against that conclusion. Jern, Chang, and Kemp (2009) presented a model of belief polarization (which, as we noted earlier, is related to the continued influence of misinformation) that was instantiated within a Bayesian network. A Bayesian network captures causal relations among a set of variables; in a psychological context, it can capture the role of hidden psychological variables—for example, during belief updating. Instead of assuming that people consider the likelihood that a hypothesis is true only in light of the information presented, a Bayesian network accounts for the fact that people may rely on other “hidden” variables, such as the degree to which they trust an information source (e.g., the peer-reviewed literature). Jern et al. (2009) showed that when these hidden variables are taken into account, Bayesian networks can capture behavior that at first glance might appear irrational—such as behavior in line with the backfire effects reviewed earlier. Although this research can only be considered suggestive at present, people’s rejection of corrective information may arguably represent a normatively rational integration of prior biases with new information.
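To make the role of such hidden variables concrete, the following minimal sketch implements a toy Bayesian update in Python. It is our illustration, not Jern et al.’s actual model: belief in a hypothesis H is updated after a correction C is observed, marginalizing over a hidden variable T that encodes trust in the correcting source. All probabilities are assumptions chosen for illustration.

```python
# Toy sketch (not Jern et al.'s model): posterior belief in a hypothesis H
# after observing a correction C, marginalizing over a hidden trust variable
# T ("the correcting source is trustworthy"). All numbers are illustrative.

def posterior_belief(prior_h, prior_t, p_c_h_t, p_c_nh_t, p_c_h_nt, p_c_nh_nt):
    """Return P(H | C), where P(C | H) = P(C | H, T)P(T) + P(C | H, ~T)P(~T),
    and likewise for ~H."""
    p_c_given_h = p_c_h_t * prior_t + p_c_h_nt * (1 - prior_t)
    p_c_given_nh = p_c_nh_t * prior_t + p_c_nh_nt * (1 - prior_t)
    p_c = p_c_given_h * prior_h + p_c_given_nh * (1 - prior_h)
    return p_c_given_h * prior_h / p_c

# A trustworthy source rarely denies a true hypothesis (.1) but usually
# denies a false one (.9); a distrusted source is suspected of issuing
# denials precisely when the hypothesis is true (.6 vs. .4).
likelihoods = dict(p_c_h_t=0.1, p_c_nh_t=0.9, p_c_h_nt=0.6, p_c_nh_nt=0.4)

print(posterior_belief(prior_h=0.8, prior_t=0.9, **likelihoods))  # ~0.41
print(posterior_belief(prior_h=0.8, prior_t=0.1, **likelihoods))  # ~0.83
```

Under these assumptions, a receiver who trusts the source lowers his or her belief from .80 to about .41 after the correction, whereas a receiver who distrusts the source ends up at about .83: belief increases, in line with a backfire effect, yet the computation is Bayes-consistent given the receiver’s priors.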
Concerning the limitations of debiasing, there are several ethical and practical issues to consider. First, the application of any debiasing technique raises ethical questions: Although it is in the public interest to ensure that the population is well informed, debiasing techniques can equally be used to further misinform people, because correcting misinformation is cognitively indistinguishable from misinforming people in order to replace their preexisting correct beliefs. It follows that it is important for the general public to have a basic understanding of misinformation effects: Widespread awareness of the fact that people may “throw mud” because they know it will “stick” is an important aspect of developing a healthy sense of public skepticism that will contribute to a well-informed populace.

Second, there are situations in which applying debiasing strategies is inadvisable for reasons of efficiency. In our discussion of the worldview backfire effect, we argued that debiasing is more effective for people who do not hold strong beliefs concerning the misinformation: In people who strongly believe a piece of misinformation for ideological reasons, a retraction can in fact do more harm than good by ironically strengthening the misbelief. In such cases, particularly when the debiasing cannot be framed in a worldview-congruent manner, debiasing may not be a good strategy. An alternative approach for dealing with pervasive misinformation is thus to ignore the misinformation altogether and seek more direct behavioral interventions.

Behavioral economists have developed “nudging” techniques that can encourage people to make certain decisions over others without preventing them from making a free choice (e.g., Thaler & Sunstein, 2008). For example, it no longer matters whether people are misinformed about climate science if they adopt ecologically friendly behaviors, such as driving low-emission vehicles, in response to “nudges” such as tax credits. Despite suggestions that even these nudges can be rendered ineffective by people’s worldviews (Costa & Kahn, 2010; Lapinski, Rimal, DeVries, & Lee, 2007), this approach has considerable promise. Unlike debiasing techniques, behavioral interventions involve the explicit design of choice architectures to facilitate a desired outcome. For example, organ-donation rates in countries in which people have to “opt in” by explicitly stating their willingness to donate hover around 15–20%, compared with over 90% in countries in which people must “opt out” (E. J. Johnson & Goldstein, 2003). The fact that the design process for such choice architectures can be entirely transparent and subject to public and legislative scrutiny lessens any potential ethical concerns. A further advantage of the nudging approach is that its effects are not tied to a specific delivery vehicle, which may fail to reach target audiences: Whereas debiasing requires that the target audience receive the corrective information—a potentially daunting obstacle—the design of choice architectures automatically reaches any person who is making a relevant choice.

We therefore see three situations in which nudging seems particularly applicable. First, when behavior changes need to occur quickly and across entire populations in order to prevent negative consequences, nudging may be the strategy of choice (cf. the Montreal Protocol to rapidly phase out CFCs to protect the ozone layer; e.g., Gareau, 2010). Second, as discussed in the previous section, nudging may offer an alternative to debiasing when ideology is likely to prevent the success of debiasing strategies. Finally, nudging may be the only viable option in situations that involve organized efforts to deliberately misinform people—that is, when the dissemination of misinformation is programmatic (a case we reviewed at the outset of this article, using the examples of misinformation about tobacco smoke and climate change).

In this context, the persistence with which vested interests can pursue misinformation is notable: After decades of denying the link between smoking and lung cancer, the tobacco industry’s hired experts have opened a new line of testimony by arguing in court that even after the U.S. Surgeon General’s 1964 conclusion that tobacco was a major cause of death and injury, there was still “room for responsible disagreement” (Proctor, 2004). Arguably, this position is intended to replace one set of well-orchestrated misinformation—that tobacco does not kill—with another convenient myth—that the tobacco industry did not know it. Spreading doubt by referring to the uncertainty of scientific conclusions—whether about smoking, climate change, or GM foods—is a popular strategy for misinforming the populace (Oreskes & Conway, 2010). For laypeople, the magnitude of that uncertainty matters little, as long as the uncertainty is believed to be meaningful.
In addition to investigating the cognitive mechanisms of misinformation effects, researchers would therefore be well advised to monitor such sociopolitical developments in order to better understand why certain misinformation gains traction and persists in society.

Acknowledgments

The first two authors contributed equally to the paper.

Declaration of Conflicting Interests

The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.

Funding

Preparation of this paper was facilitated by Discovery Grants DP0770666 and DP110101266 from the Australian Research Council and by an Australian Professorial Fellowship and an Australian Postdoctoral Fellowship to the first and second author, respectively.

Notes

1. We use the term “misinformation” here to refer to any piece of information that is initially processed as valid but that is subsequently retracted or corrected. This is in contrast to so-called post-event misinformation, the literature on which has been reviewed extensively elsewhere (e.g., Ayers & Reder, 1998; Loftus, 2005) and has focused on the effects of suggestive and misleading information presented to witnesses after an event.

2. There is ongoing debate about whether the effects of worldview during information processing are more prevalent among conservatives than liberals (e.g., Greenberg & Jonas, 2003; Jost, Glaser, Kruglanski, & Sulloway, 2003a, 2003b). This debate is informative and important but not directly relevant in this context: We are concerned with the existence of worldview-based effects on information processing irrespective of their partisan origin, given that misinformation effects are generic.

3. Two of the authors of this article (Cook & Lewandowsky, 2011) have prepared a practitioner’s guide to debiasing that, in 7 pages, summarizes the facets of the literature that are particularly relevant to practitioners (e.g., scientists and journalists). The booklet is available for free download in several languages (English, Dutch, German, and French as of July 2012) at http://sks.to/debunk and can be considered an “executive summary” of the material in this article for practitioners.