Evidence continues to trickle out about how Russians and potentially other bad actors used Facebook to spread disinformation during the 2016 campaign. The latest piece of information: a new study that found that more than half of the sponsors of Facebook ads that featured divisive political messages ahead of the 2016 election were from “suspicious” groups with little or no paper trail to identify them.

And it gets worse: One in six turned out to be linked to the Internet Research Agency, the Kremlin-linked troll farm responsible for much of Russia’s disinformation campaign in the US.

Earlier this month, Facebook CEO Mark Zuckerberg announced ahead of his congressional testimony that the company would require political advertisers to verify who they are and where they’re from. Facebook also came out in support of the Honest Ads Act, proposed legislation that would regulate online political advertising much in the same way as television, radio, and print.

The study found that ads on hot-button cultural issues, from dark money groups and others who don’t have to report on their spending, were targeted to voters in swing states. The study doesn’t speculate about whether those ads ultimately affected the outcome of the election. But it’s the latest evidence that Facebook political advertising is a murky world where little disclosure is required about who’s paying for the ads shown to voters across the country.

“Suspicious” ad buys on Facebook are shockingly common

Young Mie Kim, a professor of journalism at the University of Wisconsin-Madison and the study’s lead author, and a team of researchers analyzed 5 million paid ads shown to a group of 9,519 individuals who model the US voting-age population in the weeks leading up to the 2016 election, from September 28 to November 8, 2016.

From that pool, they took a random sample of 50,000 paid ads and searched for any discussing eight hot-button political issues: abortion, LGBTQ issues, guns, immigration, nationalism, race, terrorism, and candidate scandals. The researchers examined both whom the ads were coming from and whom they were targeting.

Their findings: Of the 228 groups that bought ads about those issues before Election Day, 121 were identified as “suspicious,” meaning researchers couldn’t find a trace of them in federal records or online. The Facebook pages they ran, for example, were deleted or went inactive after Election Day, and there are no IRS records to identify them. “We were not able to identify who they are,” Kim told me.

Moreover, one-sixth of the suspicious advertisers turned out to be Kremlin-linked groups. The researchers matched them up with Russian-linked ads released by the House Intelligence Committee last fall.

“I expected that we would find some unknown actors in the digital media political campaign landscape, because there are some regulatory loopholes,” Kim said. “The findings are a lot worse than I thought. It is shocking and surprising.”

The suspicious and unregistered ads were targeted at — you guessed it — swing states

The study, released in conjunction with watchdog groups Issue One and the Campaign Legal Center, compares political and issue ads registered with the FEC to ads run by groups that didn’t file with the FEC — suspicious groups, Russian groups, and activist and other dark money groups that didn’t file. They found that the unregistered ads were disproportionately targeted at battleground states, including those that ultimately swung the election to Donald Trump over Hillary Clinton — Pennsylvania and Wisconsin.

Race-related ads were most targeted to North Carolina and Wisconsin, as well as Indiana, Kentucky, and Missouri. Terrorism-related ads were disproportionately served to voters in Michigan, North Carolina, and Wisconsin, as well as New Jersey. Abortion was the only issue whose ads were disproportionately served to Arkansas voters.

Low-income voters — defined as households with incomes of less than $40,000 a year — were targeted with Facebook ads about immigration and race. Middle-income voters — income in the $40,000 to $120,000 range — saw more Facebook ads than average on nationalism.

There were also racial divisions. Eighty-seven percent of immigration ads and 89 percent of nationalism-related ads were targeted to white voters.

“Our results clearly demonstrate that data-driven political campaigns are adopted not just by resourceful political parties … but also relatively low-resourced groups,” the researchers wrote in a discussion of their results.

The study doesn’t delve into whether the ads had any impact on voters’ decisions. In fact, whether Russian-linked ads and disinformation swayed a significant number of voters in the 2016 election remains largely unknown. As New York magazine’s Brian Feldman pointed out last fall, millions of people saw the Russian-linked ads, but we don’t know if what they saw really sank in. But when selling ads, Facebook does brag about its ability to sway voters.

No one knows who’s behind so many of Facebook’s political ads because legally, it’s not required

The study reinforces that the policing of political ads online lags far behind that of television, radio, and print.

Most spending on digital ads that only discuss issues but don’t explicitly advocate for or against a candidate — even if they mention one — isn’t reported to the Federal Election Commission. In this case, groups that had never filed a report with the FEC placed four times as many ads as registered political committees or other groups that had.

Nine percent of the ads the researchers identified should have been subject to disclosure requirements under current law, because they advocated for or against a candidate. Had the same ads run on television or radio, 25 percent of them would have been subject to FEC disclosure requirements.

The Federal Election Commission is contemplating amending its rules for disclaimers on political communications online, including advocacy and fundraising. In late March, it put out two alternative proposals on the matter. Most experts and observers I’ve spoken with, however, don’t have much faith in the FEC’s ability to police online ads under current law.

Sens. Amy Klobuchar (D-MN), Mark Warner (D-VA), and John McCain (R-AZ) last October introduced the Honest Ads Act, which seeks to regulate online political advertising much in the same way television, radio, and print are. Facebook and Twitter in recent weeks have come out in support of it, but the legislation has largely stalled.

“Twitter is pleased to support the Honest Ads Act. Back in the fall we indicated we supported proposals to increase transparency in political ads.” — Twitter Public Policy (@Policy), April 10, 2018

Facebook and other tech platforms could be subject to know-your-customer laws, along the lines of the regulations that require banks to know whose money they’re handling and where it’s coming from and going. And there are a couple of legislative proposals in California that would clamp down on bots.

Facebook has a lot of problems that need fixing

This study’s findings are yet another example of the multitude of problems Facebook faces in policing and understanding its own platform. The social media giant has come under scrutiny not only over Russian use of the platform in its political disinformation campaign but also in the wake of the Cambridge Analytica scandal, which exposed major faults in its data privacy and protection practices.

And the drip, drip, drip continues: Facebook earlier this month suspended dozens more Russian-linked accounts and admitted the Cambridge Analytica breach could have involved the data of 87 million accounts. Brittany Kaiser, a former Cambridge Analytica employee, told British lawmakers on Tuesday that she believes the number of people affected could be “much greater” than 87 million.

European lawmakers have been rather aggressive in taking on Facebook and the other tech giants. The United States has just started to take baby steps, with the Senate and House of Representatives hosting committee hearings with Zuckerberg last week. But US lawmakers seem to have no idea where to start on regulating Facebook — even in the face of mounting evidence, including this suspicious ads study, that the platform has some pretty serious issues.