Two between-subjects factors were fully crossed: strength of misinformation (one or three repetitions) and strength of retraction (zero, one, or three repetitions). In addition, a control group that received no mention of volatile materials was tested.

Method

Participants

A total of 161 undergraduates from the University of Western Australia (108 females) participated and were randomly assigned to conditions (N = 23 per condition).

Stimuli

Participants received 17 messages, each printed on a separate page. The 0-MI control condition featured no statements referring to volatile materials and hence no retraction. In the 1- and 3-MI conditions, a statement regarding the presence of volatile materials appeared once (Message 6) or three times (Messages 6, 7, and 8), respectively. In the 3-R conditions, the retraction was repeated three times (Messages 13, 14, and 15), as opposed to only once in the 1-R conditions (Message 14); there was no retraction at all in the 0-R conditions. Repetitions were presented in different contexts (e.g., as a radio transmission from a police investigator/as information passed on to the fire captain/in a public radio announcement). This was done to make the repetitions appear more natural and to enhance their potential impact by increasing contextual variation (e.g., Verkoeijen et al., 2004). Scripts were of equal length in all conditions; filler messages were added where needed.

Procedure

Participants read the messages aloud at their own pace, without backtracking. After an unrelated 10-min distractor task, participants received an open-ended questionnaire, consisting of 10 causal inference questions (e.g., What could have caused the explosions?), 10 fact questions (e.g., What time was the fire eventually put out?), and two manipulation-check questions, targeting awareness of the retraction (e.g., Was any of the information in the story subsequently corrected or altered? And if so, what was it?), always administered in this order.

Results

Analysis focused on three dependent measures: the number of references to misinformation (i.e., the inference score), the accuracy of recall, and acknowledgment of the retraction. References to misinformation (viz. negligently stored volatile materials) were counted only if they were causal and uncontroverted.

Coding procedure

Responses were tallied by a naive scorer following a scoring guide. Inter-rater reliability with a second scorer was high (rs = .97, .82, and .94 for fact-recall, inference, and manipulation-check scores, respectively, based on a sample of 18 questionnaires).
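As a hedged illustration of the reliability index reported above, the following sketch computes a Spearman rank correlation between two scorers' tallies. The scores below are invented for illustration; the original analysis was based on 18 questionnaires.

```python
# Minimal sketch: Spearman rank correlation between two scorers,
# computed as the Pearson correlation of average (tie-corrected) ranks.
# The example tallies are hypothetical, not the study's data.

def ranks(xs):
    """Return 1-based average ranks, assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    rk = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the run of tied values
        avg = (i + j) / 2 + 1           # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            rk[order[k]] = avg
        i = j + 1
    return rk

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

scorer1 = [3, 1, 4, 2, 5, 0, 2, 3]      # hypothetical inference tallies
scorer2 = [3, 1, 5, 2, 4, 0, 2, 3]
print(round(spearman(scorer1, scorer2), 3))  # → 0.976
```

In practice this would be applied separately to the fact-recall, inference, and manipulation-check tallies to obtain the three reported coefficients.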

Inferences

Mean inference scores are shown in Fig. 1. The 0-MI and 0-R conditions provide empirical baselines for interpreting the remaining experimental conditions and are represented by the dotted lines. As expected, the 0-MI control condition yielded few spontaneous references to misinformation (significantly fewer than the 1-MI/3-R condition, which was expected to produce the lowest score of the remaining conditions; all contrasts are given in Table 1).

Fig. 1 Mean number of references to misinformation for all conditions in Experiment 1; error bars show standard errors of the mean. Black rectangles indicate predictions of the sampling model (cf. General Discussion). The depicted predictions are based on 1,000 replications with best-fitting parameter estimates α = 11.66, λ = 0.99, and ϕ = 0.61, and root mean square deviation = 0.32. 0-MI, no misinformation control condition; 1-MI, misinformation presented once; 3-MI, misinformation presented three times; 0-R, no retraction control conditions; 1-R, retraction presented once; 3-R, retraction presented three times

Table 1 Contrasts calculated on inference scores in Experiment 1

A two-way ANOVA on the six experimental conditions yielded significant main effects of strength of misinformation, F(1, 132) = 7.32, p < .01, η2 = .05, and strength of retraction, F(2, 132) = 45.02, p < .001, η2 = .41, which were qualified by a marginally significant interaction, F(2, 132) = 2.74, p = .07, η2 = .04. Planned contrasts (cf. Table 1) demonstrated that, not surprisingly, repeated misinformation encoding led to stronger misinformation effects when there was no or only one retraction (contrasts 4 and 8). After three presentations of misinformation, one retraction reduced reliance on misinformation (contrast 5), and three retractions reduced it further (contrasts 6 and 7), without, however, eliminating the continued influence of misinformation. Surprisingly, the effect of a single exposure to misinformation was reduced equally by one or three retractions; that is, in this case, three retractions failed to reduce the continued influence effect below the level achieved with one retraction (contrasts 1–3 and 9), and this level was significantly above that in the 0-MI control condition (contrast 0).
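As a hedged illustration of the analysis above, the following sketch runs a balanced 2 (misinformation) × 3 (retraction) between-subjects ANOVA on synthetic data. The scores and effect sizes are invented, but the degrees of freedom follow from the design: 23 participants in each of six cells gives error df = 138 − 6 = 132, matching the reported F tests.

```python
import random

# Minimal sketch: balanced two-way between-subjects ANOVA (2 x 3, n = 23
# per cell) on randomly generated "inference scores" for illustration only.

random.seed(1)
levels_mi = [1, 3]            # strength of misinformation
levels_r = [0, 1, 3]          # strength of retraction
n = 23                        # participants per cell

data = {(mi, r): [random.gauss(2.0 + 0.5 * mi - 0.4 * r, 1.0) for _ in range(n)]
        for mi in levels_mi for r in levels_r}

def mean(xs):
    return sum(xs) / len(xs)

N = n * len(levels_mi) * len(levels_r)
grand = mean([x for cell in data.values() for x in cell])

# Marginal and cell means
mi_means = {mi: mean([x for r in levels_r for x in data[(mi, r)]]) for mi in levels_mi}
r_means = {r: mean([x for mi in levels_mi for x in data[(mi, r)]]) for r in levels_r}
cell_means = {k: mean(v) for k, v in data.items()}

# Sums of squares for the two main effects, interaction, and error
ss_mi = n * len(levels_r) * sum((m - grand) ** 2 for m in mi_means.values())
ss_r = n * len(levels_mi) * sum((m - grand) ** 2 for m in r_means.values())
ss_int = n * sum((cell_means[(mi, r)] - mi_means[mi] - r_means[r] + grand) ** 2
                 for mi in levels_mi for r in levels_r)
ss_err = sum((x - cell_means[k]) ** 2 for k, v in data.items() for x in v)

df_mi, df_r = len(levels_mi) - 1, len(levels_r) - 1
df_int = df_mi * df_r
df_err = N - len(levels_mi) * len(levels_r)   # 138 - 6 = 132

F_mi = (ss_mi / df_mi) / (ss_err / df_err)
F_r = (ss_r / df_r) / (ss_err / df_err)
F_int = (ss_int / df_int) / (ss_err / df_err)

print(df_mi, df_r, df_int, df_err)            # → 1 2 2 132
```

With real data one would typically delegate this to a statistics package; the by-hand version simply makes the partitioning of variance behind the reported F ratios explicit.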

Excluding participants who did not acknowledge the retraction in the manipulation-check questions (n = 12, thus leaving between 18 and 21 participants per condition) did not change this pattern of results.

Recall

Mean recall rates varied between .68 (1-MI/1-R) and .78 (3-MI/3-R) across the six experimental conditions. A two-way ANOVA returned no significant effects, Fs < 1.5, ps > .2.

Awareness of retraction

Mean rates of acknowledgment across conditions ranged from .63 to .83. Although 3-R conditions yielded higher rates (.78) than 1-R conditions (.67), a two-way ANOVA yielded no significant effects, Fs < 1.1.

Discussion

Experiment 1 produced several noteworthy findings. First, in line with previous research, we found that even multiple retractions were insufficient to eliminate the continued influence of misinformation completely (cf. Bush, Johnson, & Seifert, 1994; Ecker et al., 2011).

Second, when misinformation was encoded once, there was a low but significant level of continued influence, and this influence was independent of the strength of retraction. In other words, after relatively weak encoding of misinformation, its influence was significant even if the retraction was strong. This corroborates research that has found it difficult to eliminate effects of misinformation, such as that of Ecker et al. (2010), who combined explicit warnings with the provision of a causal alternative but still found significant levels of continued influence after administering this combined manipulation.

Third, when misinformation was encoded three times but retracted only once, a relatively large continued influence effect was observed. Only repeated retractions were able to reduce this effect to the level elicited by one encoding of misinformation. The effectiveness of multiple retractions after strong encoding of misinformation does not support concerns that multiple retractions could enhance continued influence by increasing familiarity of the misinformation (Schwarz et al., 2007; Skurnik et al., 2005; see also Hintzman, 2010). It follows that the so-called backfire effects of retractions (Nyhan & Reifler, 2010; Pickel, 1995) may apply primarily to areas such as political beliefs or judicial settings, in which preexisting attitudes play a more important role for behavior.

The fact that there were no significant differences between conditions in fact recall and awareness of the retraction suggests that the differential pattern of the continued influence effect cannot be attributed to differences in overall memory strength.