It may seem reasonable for on-air talent to fill airtime with speculation and predictions, but it’s more difficult for us to correct that sort of misinformation later on.

In March 2014 Americans exploded with snark in response to CNN’s on-air speculation about the missing Malaysia Airlines flight—speculation that implied black holes or supernatural occurrences played a role. Less than six months later, CNN again became the target of collective ire when it aired a segment comparing Ebola and ISIS—a segment that implied ISIS could intentionally infect agents and then spring them on mass transit systems in the United States. (Spoiler: they can’t.)

Although these segments are absurd, at their core they are not atypical. They are merely the most egregious examples of the prevailing tendency for on-air news personalities to speculate or imply that various events have certain causes or consequences. If not missing airplanes and ISIS, then it’s the momentary absence of Vladimir Putin or the motives of literally anybody who commits a newsworthy murder.

On its face, this speculation seems like reasonable behavior for networks and websites desperate to fill the content void. If you don't know for sure, why not have “experts” talk about what it could be?

Well, psychologists have finally provided a compelling response. A new study led by Patrick Rich of Kent State University finds that it is harder to correct beliefs based on misinformation that has merely been implied than misinformation that has been explicitly stated. To simplify, if Don Lemon says, "We lack a scientific or natural explanation for what happened to the plane," and evidence later comes to light that there was, in fact, a standard scientific explanation, it would actually be harder to correct beliefs about the role of the supernatural than if Lemon had explicitly declared, “A supernatural force caused the plane to disappear.”

Rich and his colleague Maria Zaragoza aren’t exactly sure why information that’s been implied is harder to dislodge than information that’s explicitly stated. One theory is that when there is implication or speculation—rather than an explicitly stated outcome—people have to fill in the blanks. This extra cognitive processing produces more, or stronger, connections between the false information and existing knowledge, with the result that the false information becomes harder to replace. Another theory is that corrective information (“The plane didn’t fly into a black hole”) maps directly onto the explicit statement it corrects (“The plane flew into a black hole”), and thus it’s easy to replace the latter with the former. But when information is only implied, the corrective information doesn’t map onto the false information so smoothly, and thus it’s harder for the replacement to occur.

Rich and Zaragoza’s study used an experimental paradigm in which a home robbery is described and participants report their beliefs about the degree to which the homeowner’s son was involved. In the “implied” condition, participants read information that implicated the son (e.g., he was watching the house while his parents were gone, there were no signs of forced entry, and he had gambling debts). In the “explicit” condition, participants read an additional piece of information stating that police believed the son was a suspect. In a third condition, the control, the son was mentioned but not in a way that implicated him in the crime.

The key manipulation came at the end of the story. Half the participants read one final piece of corrective information—that police eventually confirmed the son was actually out of town when the robbery occurred and therefore could not have committed it. After completing a 20-minute distractor task, participants answered a series of questions about the robbery that gauged their belief in the son’s involvement.

Rich and Zaragoza found that, among participants who saw the corrective information, those in the implied condition maintained a stronger belief in the son’s involvement than participants in the explicit and control conditions. In a follow-up study, information was added to the corrective condition that stated the actual thief was caught, thereby providing an alternative explanation that left no doubt about the son’s innocence. Once again, participants in the implied condition who received the corrective information reported a stronger belief in the son’s involvement than participants in the other conditions. Additional analyses ruled out the possibilities that participants in the implied condition were less likely to believe the corrective information, less able to remember the corrective information, or initially more likely to believe in the son’s involvement (before receiving the corrective information).

Taken together, the studies paint a damning picture of a behavior at the core of how media organizations deliver content. Granted, it’s possible that implied information tends to end up being accurate, thereby helping people hold on to correct information. Nevertheless, the studies show the drawbacks of professional norms that stigmatize only the explicit reporting of false information.

The studies also add to the growing pile of research on our troubles with correcting false beliefs. People are resistant to corrective information that runs counter to their ideological preferences, for example. They may also fail to use the corrective information if it contradicts deep-rooted beliefs, as is the case with intentions to vaccinate. Misinformation can also be difficult to correct if it has been widely repeated because familiar information is processed more easily. This ease causes the information to “feel right,” which makes it more likely to be deemed true (though the practical effect of this familiarity is small).