Fake news and viral hoaxes spread online because of our short attention spans and the deluge of new information that’s constantly pouring into social media, a new study says. But reining in the bots that spit out huge numbers of posts could help curtail the information overload that makes it hard to sort fact from fiction, the study authors say.

The truth doesn’t always prevail online

Social media’s amplification of online misinformation emerged with a vengeance during the 2016 presidential election, although it’s still not clear exactly how much “fake news” influenced the election’s outcome. Still, with the majority of American adults getting their news from social media, understanding what makes false or mistaken ideas spread online has high, real-world stakes.

That’s why a team of researchers from Indiana University, Shanghai Institute of Technology, and Yahoo Research set out to determine why the truth doesn’t always prevail online. They published their findings today in the journal Nature Human Behaviour.

Their first step was to create a simulated, and simplified, social network. In this network, each virtual user — not an actual person — sees a reverse-chronological feed, kind of like the one on Twitter. Then the researchers introduced a few variables: the number of new messages or posts flowing into the user's feed, how much attention that user devotes to scrolling through the posts, and the quality of the ideas or memes in each post. The "quality" variable is a little tricky, because it means different things for different media: for a picture, it could be its beauty; for a claim or a statement, its accuracy.

By tracking 100,000 different posts across 20 different simulations, the researchers learned that, generally, higher-quality ideas — more beautiful photos, or truer statements — are better at spreading through the network. But if the social network is constantly deluged by new posts and users' attention spans aren't infinite (which, in reality, they aren't), the group loses its ability to discriminate between good and bad ideas. Basically, for high-quality posts to win the sharing war on social media, the volume of new information flowing into the network has to be pretty low, and users' attention spans have to be pretty high.
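The dynamics described above can be sketched in a few dozen lines of code. To be clear, this is a minimal toy model, not the authors' actual implementation: the parameter names (`new_meme_rate`, `attention`, `feed_len`) and the fully connected network are illustrative assumptions, chosen only to show how inflow rate and limited attention interact.

```python
import random

def simulate(n_users=50, n_steps=2000, feed_len=10,
             attention=5, new_meme_rate=0.25, rng=None):
    """Toy meme-diffusion model (a hypothetical simplification).

    Each step, a random user either posts a brand-new meme with a random
    quality in [0, 1] (probability `new_meme_rate`) or reshares one of the
    top `attention` memes in their feed, chosen proportionally to quality.
    Every post lands atop every user's feed (a fully connected toy network).
    Returns {meme id: [quality, share_count]}.
    """
    rng = rng or random.Random(42)
    feeds = [[] for _ in range(n_users)]
    memes = {}
    next_id = 0
    for _ in range(n_steps):
        u = rng.randrange(n_users)
        if not feeds[u] or rng.random() < new_meme_rate:
            memes[next_id] = [rng.random(), 0]  # new meme, random quality
            mid = next_id
            next_id += 1
        else:
            window = feeds[u][:attention]       # limited attention span
            weights = [memes[m][0] + 1e-9 for m in window]
            mid = rng.choices(window, weights=weights)[0]
            memes[mid][1] += 1                  # one more share
        for f in feeds:                         # broadcast, newest first
            f.insert(0, mid)
            del f[feed_len:]                    # finite feed length
    return memes

def avg_top_quality(memes, k=20):
    """Average quality of the k most-shared memes."""
    top = sorted(memes.values(), key=lambda m: m[1], reverse=True)[:k]
    return sum(q for q, _ in top) / len(top)

# Probe the tradeoff: a low vs. high inflow of brand-new posts.
for rate in (0.1, 0.9):
    result = simulate(new_meme_rate=rate)
    print(f"inflow rate {rate}: top-shared avg quality "
          f"{avg_top_quality(result):.2f}")
```

In runs of this sketch, raising `new_meme_rate` tends to lower the average quality of the most-shared memes, mirroring the load/attention tradeoff the paper reports, though a model this small is noisy and the exact numbers depend on the seed.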

An online marketplace of ideas “that is incapable of discriminating information on the basis of quality.”

Now, this is a simulation — and it makes some pretty oversimplified assumptions. So to add real-world numbers, the team analyzed Twitter data from 2014 to understand how fast new ideas are added to the network, and how many new posts users tweet versus how many they retweet. From Tumblr, the researchers calculated users’ average scrolling behavior, and how long they lingered on posts. Plugging this info into their model painted a frightening picture of an online marketplace of ideas “that is incapable of discriminating information on the basis of quality,” the authors write.

Again, this is a model, and more studies will need to be done to verify whether people actually do behave this way. But real-world sharing behavior at least initially appears to back up the results: posts determined to be hoaxes by rumor-checking site Emergent.info spread just as virally as posts that fact-checked those hoaxes or shared true information.

Cutting down on information overload could help people distinguish real from fake news on social media, and one way to do that could be for companies like Facebook and Twitter to remove the bots that churn out lots of low-quality posts. But as Facebook’s strategy for dealing with disinformation revealed earlier this year, that’s no small task.