Facebook and Twitter failed the first big disinformation test of the 2020 elections.

Hours before the 2020 Democratic primary season officially kicked off with Monday night’s Iowa caucuses, a right-wing conspiracy theory claiming voter fraud in Iowa began rippling across conservative Twitter and Facebook communities, all stemming from a “report” by Judicial Watch.

The right-wing activist group alleged that “Eight Iowa counties have more voter registrations than citizens old enough to register.” (Iowa Secretary of State Paul Pate, a conservative Republican who has pushed for strict voter ID laws, quickly debunked the claim.) But over the next several hours, Judicial Watch’s claim about voting irregularities went viral anyway. Tom Fitton, the organization’s president, wrote a tweet about the claim that got 6,000 retweets; one from Fox News’s Sean Hannity earned another 4,000. But it really gained traction when Charlie Kirk—the leader of the youth-oriented, pro-Trump group Turning Point USA—repeated the claims without attribution, accruing over 40,000 retweets.

As the debunked information was spiraling out of control, Facebook and Twitter were doing very little, very slowly, to stymie it.

Facebook began showing warnings to users trying to share the Judicial Watch report and other stories citing the conservative activist group—but only hours after the claims had gone viral. Judicial Watch was also able to buy a Facebook ad promoting a post about the report, which the company left up for several hours.

Twitter told reporters that tweets promoting Judicial Watch’s conspiracy claims, which seemed designed to erode faith in Iowa’s process, did not violate its policies against election-related misinformation because they didn’t “suppress voter turnout or mislead people about when, where, or how to vote.”

While the false claims about voter registration died down slightly with the evening start of the caucuses, the delayed reporting of the results—caused by problems with a mobile app that was supposed to help tally them—spurred a second wave of right-wing election conspiracy theories.

President Donald Trump’s 2020 reelection campaign manager, Brad Parscale, alongside others from the campaign and the Republican National Committee, began saying or suggesting that the delay was evidence the election was rigged. Their baseless claims were also widely shared and retweeted.

After 2016, technology companies had promised to be ready, repeatedly saying that they would work to avoid the mistakes they made during that election. The 2016 cycle was plagued with disinformation, ranging from shady small businesses churning out divisive fake content to rake in ad money, to a Russian state-sponsored operation to influence the election—both of which spread completely unmitigated by tech companies.

Despite such assurances, they again failed what should have been a relatively easy test. The disinformation spread from a handful of visible, high-profile sources. It relied on tactics that could have been anticipated, because they had been deployed before—as when conservative users whipped up conspiracy theories that a 2018 caravan of migrants bound through Mexico for the U.S. was a Soros-funded operation, and that bomb scares targeting liberal elites were a false flag. Facebook and Twitter have had time to observe, learn, and develop ways to respond—but they somehow didn’t.

Both companies say they’ve been working hard to create technical mechanisms to spot and stop nefarious groups trying to manipulate their platforms to spread political misinformation. And in certain related areas, they’ve been relatively effective. Facebook and Twitter, with a few exceptions, appear to be proficient in identifying and taking down foreign and for-profit influence networks.

This suggests that technological solutions to enforce many of their platform rules can work. But the social media giants still haven’t figured out how to meaningfully update the policies that determine which types of content those tools target.

And as it stands, those policies leave massive loopholes that U.S. citizens acting in bad faith can exploit—and often do. Facebook has clearly and publicly been unwilling to scrutinize or close such gaps. During an insipid and circumspect speech at Georgetown in October, Facebook CEO Mark Zuckerberg spent essentially the entirety of his remarks reiterating that his company has no interest in changing the anything-goes aspects of its philosophy or making structural changes that would alter the flow of information on Facebook, even if that means misinformation can spread freely. Jack Dorsey, the CEO of Twitter, has expressed similar sentiments.

There’s likely one main reason for this. Dipayan Ghosh—an academic at the Shorenstein Center on Media, Politics, and Public Policy and a former employee on Facebook’s Washington, D.C.-based privacy and public policy team—says his old colleagues don’t want to rock the boat with conservatives in a way that would threaten the corporate bottom line. “It’s about protecting the company’s interests in the face of conservatives in this country,” he says.

Facebook maintains that its decisions about political content and advertising aren’t motivated by profit because political advertising makes up a very small percentage of its total revenue. That latter claim is true, but Ghosh says it’s misleading: the business worry isn’t that conservatives will stop advertising on Facebook, it’s the “concern that Trump and Republicans could turn on these companies and try to regulate them.”

This would explain why the platforms are now comparatively so much better at handling foreign misinformation: the political repercussions of cracking down on it are far smaller, because both Republicans and Democrats have expressed opposition to foreign political influence on the internet.

Until the social media platforms are willing to call out the Sean Hannitys, the Charlie Kirks, or anyone else involved in U.S. politics who abuses their platforms, they will be making their products available to spread falsehoods targeting residents of their own country. Unless that changes, no matter how good their anti-disinformation technology becomes, what happened on Monday will happen again.