Millions of tweets were flying furiously in the final days leading up to the 2016 US presidential election. And in closely fought battleground states that would prove key to Donald Trump’s victory, they were more likely than elsewhere in America to be spreading links to fake news and hyperpoliticized content from Russian sources and WikiLeaks, according to new research published Thursday by Oxford University.

Nationwide during this period, roughly one polarizing story was shared for every story produced by a professional news organization. But fake news on Twitter reached higher-than-average concentrations in 27 states, 12 of which were swing states—including Pennsylvania, Florida, and Michigan, where Trump won by slim margins.

While it’s unclear what effect such content ultimately had on voters, the new study only deepens concerns about how the 2016 election may have been tweaked by nefarious forces on Twitter, Facebook, and other social media. “Many people use these platforms to find news and information that shapes their political identities and voting behavior,” says Samantha Bradshaw, a lead researcher for Oxford’s Computational Propaganda Project, which has been tracking disinformation strategies around the world since 2014. “If bad actors can lower the quality of information, they are diminishing the quality of democracy.”

Efforts by Vladimir Putin’s regime were among the polarizing content captured in the new Oxford study. “We know the Russians have literally invested in social media,” Bradshaw told Mother Jones, referring to reports of Russian-bought Facebook ads as well as sophisticated training of Russian disinformation workers detailed in another recent study by the team. “Swing states would be the ones you would want to target.”

The dubious Twitter content in the new study also contained polarizing YouTube videos—including some produced by the Kremlin-controlled RT network, which were uploaded without any information identifying them as Russian-produced. All the YouTube videos have since been taken down, according to Bradshaw; it’s unclear whether the accounts were deleted by the users, or if YouTube removed the content.

The Oxford researchers captured 22 million tweets from November 1 to November 11 in 2016, and they have been scrutinizing the dataset to better understand the impact of disinformation on the US election. The team has also analyzed propaganda operations in more than two dozen countries, using a combination of reports from trusted media sources and think tanks, and cross-checking that information with experts on the ground. Their recent research has additional revelations about how disinformation works in the social-media age, including from Moscow.

Putin’s big investment in information warfare

In studying Russia’s propaganda efforts targeting both domestic and international populations, the Oxford researchers found evidence of increasing military expenditures on social-media operations since 2014. They also learned of a sophisticated training system for workers employed by Putin’s disinformation apparatus: “They have invested millions of dollars into training staff and setting targets for them,” Bradshaw says. She described a working environment where English training is provided to improve messaging for Western audiences: Supervisors hand out topical talking points to include in coordinated messaging, workers’ content is edited, and output is audited, with rewards given to more productive workers.

The battle to identify bots

One telltale sign of a bot is an account that tweets far more frequently than a typical human—or that tweets at exact intervals, say, every five minutes. Bot-driven accounts may also lack typical profile elements such as a profile picture (see also: the generic Twitter egg) and often don’t engage in replies with other social-media accounts. In addition to spreading fake news, “they can also amplify marginal voices and ideas by inflating the number of likes, shares and retweets they receive, creating an artificial sense of popularity, momentum or relevance,” the Oxford team reported recently.
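The signals described above—high tweet volume, clockwork posting intervals, a missing profile picture, and few replies—could be combined into a crude scoring heuristic. The sketch below is illustrative only; the thresholds and the function itself are assumptions, not the Oxford team’s actual method:

```python
from statistics import pstdev

def bot_score(timestamps, has_profile_image, reply_count, tweet_count):
    """Crude bot-likeness score (0 = human-like, 4 = strongly bot-like).

    timestamps: sorted Unix timestamps of the account's tweets.
    All thresholds are illustrative, not drawn from the study.
    """
    score = 0
    # Signal 1: very high tweet volume (here, > 50 tweets/day on average)
    if timestamps:
        span_days = max((timestamps[-1] - timestamps[0]) / 86400, 1)
        if tweet_count / span_days > 50:
            score += 1
    # Signal 2: suspiciously regular posting intervals (near-clockwork)
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) > 5 and pstdev(gaps) < 30:  # intervals vary by < 30s
        score += 1
    # Signal 3: missing profile picture (the generic "egg" account)
    if not has_profile_image:
        score += 1
    # Signal 4: almost never replies to other accounts
    if tweet_count and reply_count / tweet_count < 0.01:
        score += 1
    return score
```

An account posting every five minutes on the dot, with no avatar and no replies, would score 4; an ordinary user posting irregularly a few times a day would score 0.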

While it’s difficult for researchers to untangle how many Twitter bots are Russian-controlled, they regularly see Russian accounts in the mix: For example, on Twitter, they found accounts following Trump that tweeted most frequently during Russian business hours and switched regularly between English and Cyrillic.
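The two signals the researchers mention—tweeting during Russian business hours and mixing English with Cyrillic—are straightforward to measure. This is a minimal sketch assuming you have each tweet as a UTC timestamp plus its text; the function name and thresholds are hypothetical:

```python
import re
from datetime import datetime, timezone, timedelta

MSK = timezone(timedelta(hours=3))  # Moscow time, UTC+3
CYRILLIC = re.compile(r'[\u0400-\u04FF]')  # Cyrillic Unicode block

def russian_activity_signals(tweets):
    """tweets: list of (utc_timestamp, text) pairs.

    Returns the fraction of tweets posted during Moscow business
    hours and the fraction containing Cyrillic characters.
    Illustrative heuristic only.
    """
    in_msk_hours = 0
    has_cyrillic = 0
    for ts, text in tweets:
        hour = datetime.fromtimestamp(ts, MSK).hour
        if 9 <= hour < 18:          # 9am-6pm Moscow time
            in_msk_hours += 1
        if CYRILLIC.search(text):   # tweet switches into Cyrillic
            has_cyrillic += 1
    n = len(tweets) or 1
    return in_msk_hours / n, has_cyrillic / n
```

High values on both fractions would flag an account for closer human review, not prove anything on their own.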

On Facebook, it’s much more challenging to sort out which content is bot-driven, says Bradshaw. That’s in part because on Facebook, bots typically operate pages or groups, which can be even more opaque than individual accounts.

The presence of bots during the election homestretch

The Oxford researchers also found that bots infiltrated the core conversations among their Twitter data during the election period—and several of their analyses revealed that bots supported Trump much more than Hillary Clinton. A separate research effort by Emilio Ferrara at the University of Southern California, cited in Oxford’s report, determined that about one-fifth of campaign-related tweets during the month before the election likely were generated by bots. Ferrara’s team recorded 4 million tweets during that time period posted by about 400,000 bots.

How Germany fought off the fake-news scourge

In the days before Germany’s September 24 parliamentary election, the Oxford researchers found that political bots were minimally active on Twitter. The bot-driven tweets they did track mostly supported the far-right Alternative für Deutschland (AfD) party, which won 13 percent of the vote and became the first far-right party to enter Parliament in more than 60 years. The research also found that Germans were much less likely to share fake news stories than their American counterparts, sharing links from professional news organizations four times as often as links from sites pushing fake news. The researchers theorize that voters in Germany and other parts of Europe may have been inoculated against bot-driven fake news by the ongoing fallout from 2016. “I would speculate the Russians overplayed their hand in the US elections,” Bradshaw says. “Voters in the US weren’t really prepared, but that was part of the discourse in other countries like Germany.”

But the battle is only beginning. In the hands of bad operators, “the bots get a bit smarter,” Bradshaw says. When those controlling them realize the bots are being tracked, for example, they may adjust how frequently they tweet in order to fly below researchers’ radar. Bradshaw also notes that voice-simulation technology combined with video-simulation technology is making it increasingly possible to create fake news—say, a video showing politicians making statements they never actually made. “In innovations in technology,” she cautions, “the attackers always have the advantage.”