The indictment of 13 Russian nationals for meddling in the 2016 U.S. election has reignited the debate on whether Facebook and other social-media giants are a threat to democracy. While the electoral impact of Russian trolls who amplified anti–Hillary Clinton memes can be disputed, there’s a strong argument that social media has had a toxic effect on American society, driving polarization and creating paranoia. The wildfire spread of conspiracy theories that grieving high school students in Parkland, Florida, are “crisis actors” is only the latest example of how social media can poison public discourse.

The corrosive effect of social media on democratic life has led both French President Emmanuel Macron and Canadian Prime Minister Justin Trudeau to make the same threat to Facebook: self-regulate or be regulated. Last month, Macron proposed a new law to accomplish as much. “When fake news are spread, it will be possible to go to a judge … and if appropriate, have content taken down, user accounts deleted and ultimately websites blocked,” Macron said. “Platforms will have more transparency obligations regarding sponsored content to make public the identity of sponsors and of those who control them, but also limits on the amounts that can be used to sponsor this content.”

Macron’s idea is promising, but falls short. If fake news truly poses a crisis for democracy, then it calls for a radical response. Instead of merely requiring greater transparency of social media and empowering the courts to ban users and websites—the latter a slow and ultimately Sisyphean solution—perhaps governments should outright ban Facebook and other platforms ahead of elections.

A model for this already exists. Many countries have election silence laws, which limit or prohibit political campaigning for periods ranging from election day alone to as early as three days before the vote. What if these laws were applied to social media? What if you weren’t allowed to post anything political on Facebook in the two weeks before an election?

In 2017, Facebook experimented with flagging fake-news items, but abandoned the idea in December. “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs—the opposite effect to what we intended,” according to product manager Tessa Lyons. Even YouTube’s decision to remove the video smearing Marjory Stoneman Douglas High School student David Hogg has a downside because, as CNN’s Brian Stelter argues, “the notion that a giant corporation took down the video plays right into the hands of conspiracy-mongers who say they’re being censored.”