Bots spread a lot of fakery during the 2016 election. But they can also debunk it.

Since the 2016 American election, there has been a lot of speculation about the role that bots played in spreading online misinformation. And now, that role has been quantified.

According to a study published in the journal Nature Communications today, automated Twitter accounts disproportionately amplified misinformation during the last U.S. election. It found that, while bots accounted for only about 6 percent of the Twitter users in the study, they were responsible for 34 percent of all shares of articles from "low-credibility" sources on the platform.

“This study finds that bots significantly contribute to the spread of misinformation online — as well as shows how quickly these messages can spread," said Filippo Menczer, a professor of informatics and computer science at Indiana University and the study’s lead researcher, in a press release sent to Poynter.

Researchers analyzed 14 million tweets and 400,000 articles shared on Twitter between May 2016 and March 2017. To determine whether or not something was a low-credibility source, they drew upon resources from sites like (Poynter-owned) PolitiFact, which has compiled a list of websites that are known to spread false or misleading information online.

Those sources range from satirical sites like The Onion to full-blown fake news sites like USAToday.com.co. That’s a wide gap, but on social platforms like Twitter, the line between misinformation and satire is notoriously fuzzy — and users are divided over when one becomes the other.

To track how bots were amplifying misinformation from these sources, study authors used two tools from IU: Hoaxy and Botometer. The former is a platform that tracks the online spread of claims while the latter is a machine learning algorithm that detects bots on social media.

The study mostly compares distributions of bot scores from Botometer, which scores accounts based on thousands of labeled examples of bot and human behavior. The authors mitigated false positives and negatives by setting a threshold of 2.5 on Botometer’s 5-point scale, the score Menczer said gave their algorithm its greatest accuracy.
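The thresholding step the researchers describe can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: the scoring function, the `shares` data shape, and all names here are assumptions, with only the 2.5-out-of-5 cutoff taken from the study.

```python
# Hypothetical sketch of threshold-based bot classification, assuming a
# Botometer-style score between 0 and 5 per account. The 2.5 cutoff is
# the threshold reported in the study; everything else is illustrative.
def classify_account(bot_score: float, threshold: float = 2.5) -> str:
    """Label an account 'bot' or 'human' from its 0-5 bot score."""
    return "bot" if bot_score >= threshold else "human"

def share_fraction_by_bots(shares: list[tuple[float, int]]) -> float:
    """Fraction of low-credibility shares attributable to bot-labeled accounts.

    `shares` is a list of (bot_score, n_shares) pairs, one per account.
    """
    total = sum(n for _, n in shares)
    bot_total = sum(n for score, n in shares if classify_account(score) == "bot")
    return bot_total / total if total else 0.0
```

Aggregating per-account shares this way is how a small minority of accounts (6 percent) can end up responsible for a third of all low-credibility shares: a few high-scoring accounts with very large share counts dominate the numerator.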


Aside from their role in amplifying the reach of misinformation, bots also play a critical role in getting it off the ground in the first place. According to the study, bots were likely to amplify false tweets right after they were posted, before they went viral. Then users shared them because it looked like a lot of people already had.

"People tend to put greater trust in messages that appear to originate from many people," said co-author Giovanni Luca Ciampaglia, an assistant professor of computer science at the University of South Florida, in the press release. "Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them."

The study suggests Twitter curb the number of automated accounts on its platform to cut down on the amplification of misinformation. The company has made some progress toward this end, suspending more than 70 million accounts in May and June alone. More recently, the company took down a bot network that pushed pro-Saudi views about the disappearance of Jamal Khashoggi and started letting users report potential fake accounts.

Nonetheless, bots are still wreaking havoc on Twitter — and not all of them are used for spreading misinformation. So what should fact-checkers do to combat their role in spreading misinformation?

Tai Nalon has spent the better part of the past year trying to answer that question — and her answer is to beat the bots at their own game.

“I think artificial intelligence is the only way to tackle misinformation, and we have to build bots to tackle misinformation,” said the director of Aos Fatos, a Brazilian fact-checking project. “(Journalists) have to reach the people where they are reading the news. Now in Brazil, they are reading on social media and on WhatsApp. So why not be there and automate processes using the same tools the bad guys use?”

In the lead-up to last month’s election in Brazil, Aos Fatos built a Twitter bot that automatically corrects people who share fake news stories. Called Fátima, the automated account leverages AI to scan Twitter for URLs that match fact checks in Aos Fatos’ database of articles. Then, the bot replies to the Twitter user with a link to the fact check. (Disclosure: Fátima won the International Fact-Checking Network’s flash grant for Brazil.)

🤖 Oi! É falso que Ciro Gomes voltou da Europa e declarou voto em Bolsonaro. Ele já havia rejeitado o candidato. Veja os fatos: https://t.co/a8GF3uaCZU — Fátima (@fatimabot) October 28, 2018

(Translation: "🤖 Hi! It's false that Ciro Gomes returned from Europe and declared his vote for Bolsonaro. He had already rejected the candidate. See the facts.")

Since launching Fátima over the summer, Nalon told Poynter that the bot has scanned more than 12,000 links and tweeted nearly 2,500 responses to a variety of users. Nalon said that’s important because not all tweeters who share misinformation are going to follow fact-checkers or even verified media organizations. Bots like Fátima ensure that all users have access to verified information, regardless of their own information silos.

“I think that technology can scale up our work. Our major challenge is to reach people that don’t have access to fact-checking,” Nalon said. “With Fátima, for instance … every time she tweets a link with an answer to someone, many people go there and like and say things to the people who shared the misinformation.”

Aos Fatos is one of the few fact-checking outlets to build a Twitter bot that automatically corrects misinformation. And Nalon said one of her goals for 2019 is to roll out the tool to more fact-checkers, starting with Chequeado in Argentina.

“What journalists need is to build ways of mediating, and we won't be mediating just by using the tools that Facebook and Twitter give to us. We have to build tools inside Facebook and Twitter and WhatsApp,” Nalon said. “I think that, if we’re raising awareness, we can raise trustworthiness too — and actually hack the way that people see bots.”