A combination of established psychological theories of human behavior, applied to an analysis of today’s social media, suggests an uncomfortable reality: human nature plays a significant role in the existence of fake news. We hear a lot about bots and automated networks, but automation is not solely to blame. People are innately wired to respond to repetition. Today those same natural inclinations lead people not just to believe the unbelievable but also propel them to be part of the engine that drives it.

Intro: Repetition Has Always Been Attractive to Humans

People are naturally predisposed to respond to repetition. In the 1960s, advertisers listened to psychologists and began building practices around what they learned about human behavior. The results produced concepts that are still considered core principles of advertising today.

Robert Zajonc’s Mere Exposure Effect showed that people tend to develop a preference for something simply because they are familiar with it. Repetition, or repeated exposure to a stimulus, can make anything (products, images, taglines, claims) more believable because it breeds familiarity. If nothing else, repetition makes things appear important and memorable.

Repeated exposure can also influence a person’s perception of truth. The Illusory Truth Effect holds that people come to believe information is correct after frequent, repeated exposure to it, not after fact-checking or credibility analysis. Still very much in practice today, the Rule of Seven states that potential customers need to hear a pitch at least seven times before they will take action. A line often attributed to Vladimir Lenin holds that “a lie told often enough becomes the truth.” Apparently, “often enough” equals seven.

Computational Repetition and Social Media

Web 1.0 empowered people to publish content without the judgment of editorial gatekeepers. The result was a glut of disparate, unconnected ideas on an essentially unlimited, open platform. The advent of Web 2.0 and social media in the early 2000s connected those ideas and the people who believed in them, and removed the remaining technical barriers to publication. Social media is widely seen as a defining component of Participatory Culture, a social climate in which the public is not just a consumer of media but a producer of it.

Platforms like Twitter and Facebook certainly provide a novel application of Zajonc’s ideas, but the effects appear to be similar. Today human repetition on social media often takes the form of reading and sharing a post containing a concept, link, or photo. Retweeting the content increases exposure to the stimulus; repeated exposure breeds familiarity; people perceive something familiar as true. Social media also lets disseminators of information compress the Rule of Seven dramatically, since a single post can reach seven people in well under an hour; with a bot, it is a matter of fractions of a second. And since the algorithms that govern social feeds are designed to show you what you are interested in, you can expect the same and similar content to keep appearing in your own feed after you repost something.
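The compounding effect of reposting can be pictured with a toy branching model. This is purely illustrative: the audience size and reshare rate below are assumptions, not figures from any study.

```python
# Toy branching model of repeated exposure on a social feed.
# Assumptions (illustrative only): each sharer exposes the post to
# `audience` followers, and each viewer reshares it with
# probability `p_share`.

def exposures_per_round(audience: int, p_share: float, rounds: int) -> list[int]:
    """Return cumulative exposures after each round of resharing."""
    totals = []
    sharers = 1.0          # the original poster
    total = 0.0
    for _ in range(rounds):
        viewers = sharers * audience   # everyone who sees this round
        total += viewers
        sharers = viewers * p_share    # expected number who repost
        totals.append(round(total))
    return totals

# With a modest 100-follower audience and a 5% reshare rate,
# exposures grow geometrically round over round.
print(exposures_per_round(audience=100, p_share=0.05, rounds=4))
# → [100, 600, 3100, 15600]
```

Even with these small, hypothetical numbers, the "seven exposures" of the Rule of Seven is reached almost immediately, which is the point the paragraph above makes about repetition at machine speed.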

Repetition and Fake News

A recent study by researchers at MIT on how news spreads online reveals surprising results about human behavior on social media. The computer scientists’ analysis of the data may provide new insight into deeply contentious issues such as regulation of social media platforms and help us understand the origins and attractiveness of fake news. Putting their results together with a basic understanding of human behavior, and of repetition specifically, reveals why people, not bots, are more likely to spread fake news.

“Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”

People may not be as fast as automated networks, but they can still have an impact. The MIT study also traces the circulation of retweets in “rumor cascades”: a cascade begins with a tweet that asserts a point about a topic and is then repeated in subsequent retweets. Falsehoods also have more staying power, persisting through multiple cascades of retweeting.
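A rumor cascade can be pictured as a tree of retweets rooted at the original tweet, with its reach measured by size (total tweets) and depth (longest retweet chain). The sketch below is a minimal illustration of that structure; the class and field names are hypothetical, not the MIT study’s actual data model.

```python
# Minimal sketch of a "rumor cascade" as a retweet tree.
# Hypothetical structure: each tweet records its retweets, and a
# cascade's depth is the longest chain from the original tweet.

from dataclasses import dataclass, field

@dataclass
class Tweet:
    user: str
    retweets: list["Tweet"] = field(default_factory=list)

    def retweet(self, user: str) -> "Tweet":
        child = Tweet(user)
        self.retweets.append(child)
        return child

def cascade_size(root: Tweet) -> int:
    """Total tweets in the cascade, original included."""
    return 1 + sum(cascade_size(r) for r in root.retweets)

def cascade_depth(root: Tweet) -> int:
    """Longest retweet chain; a lone tweet has depth 1."""
    if not root.retweets:
        return 1
    return 1 + max(cascade_depth(r) for r in root.retweets)

# One original tweet, retweeted twice; one retweet is itself retweeted.
origin = Tweet("alice")
b = origin.retweet("bob")
origin.retweet("carol")
b.retweet("dana")

print(cascade_size(origin), cascade_depth(origin))  # → 4 3
```

In these terms, a falsehood with “staying power” is one whose cascades keep growing in both size and depth long after a comparable true story has stopped being retweeted.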

The notion that anything new (be it a shirt, a car, or information) is more popular applies here too. People want to be popular, and their desire to share something novel trumps their interest in sharing something true. They are more likely to repost novel information than to take the time to fact-check it; they may believe the time it takes to check facts would render the information stale. Social media mavens won’t devote much time to deciding whether they should repost a link that, unbeknownst to them, may be untrue. The study goes on to assert that the truth took “approximately six times as long as falsehood to reach 1,500 people.” Not only are people taking advantage of the power of repetition, they are unknowingly spreading fake news in the process.

Further analysis recently produced by researchers at New York University and Princeton University indicates that people older than 65 share the most fake news. This has nothing to do with political preference: age predicted sharing behavior more consistently than any other trait. The researchers attribute this to a lack of digital literacy and to cognitive decline. Many older people have an almost febrile interest in appearing to stay relevant and will share a post to tell others, “I know what’s going on.”

Repetition…or Wildfire?

The human tendency to believe something repeated via social media, and then to repeat or repost it, fuels the fake news engine. The consequences could be damning. Real-world, time-sensitive events such as natural disasters prompt people to post and repost quickly, without checking information. Humans, not bots, with the simple agenda of wanting to be involved or only wanting to help, often get caught up in the larger problem of disseminating inaccurate information that helps no one. What can we really do? If we want to prevent and eventually eradicate fake news, it makes sense to understand how it develops. Doing so requires attention to human behavior as well as a technical grasp of how machines and algorithms can amplify it.