LONDON — Score one for the censors.

In the battle over what limits should be imposed on online free speech, regulators worldwide are on the offensive.

France has proposed banning so-called fake news during the country's future elections, while in Germany, new hate speech rules impose fines of up to €50 million on social media companies that don't delete harmful content within 24 hours of being notified.

The growing push to control what can be published online will again take center stage this week when the European Commission publishes its biannual report Thursday on how Facebook, Google and Twitter are handling the hate speech lurking in social media's darker nooks and crannies. (The likely outcome: EU policymakers will complain that companies aren't doing enough, and threaten them with more regulation.)

Not to be outdone, U.S. lawmakers are also getting in on the action, with Congress expected to rake tech executives over the coals Wednesday for dragging their feet on clamping down on extremist and terrorist material. (Congress already berated Big Tech last year for allowing Russian-backed content to be widely shared online during the 2016 U.S. election.)

Freedom of speech advocates warn of an Orwellian digital dystopia where government apparatchiks dictate what we can read and write on the web. For those worried about online safety, the new rules will force tech companies to finally take responsibility for what is posted on their platforms, which collectively count more users than most countries have citizens.

* * *

Whatever side you're on, these developments offer a glimpse at the future of the internet: one in which more online messages, videos and posts will be deleted because of legislative decrees or, more likely, preemptive censorship by tech companies that fear regulatory reprisal.

Call it the rule of self-preservation. Social media companies talk big about free speech. But if it's a choice between irritating free speech advocates by taking down a few arguably tasteless posts or facing furious politicians angry over online content, there's only going to be one outcome.

Don't just blame politicians for the coming era of online censorship. Let's not forget the role tech companies played in getting us here.

For years, social media companies hid behind claims that they were merely owners of "neutral" platforms, and that neither their technology nor their increasingly large profits made them responsible for what was posted there.

It took extreme events (like Russia's involvement in the 2016 U.S. presidential election and the online bullying of recently arrived refugees in Germany) for Big Tech to admit the blindingly obvious: They are media companies. And that, just as traditional media companies are responsible for what they publish, social media companies are answerable, in the end, for what's posted on their platforms.

To be fair, Facebook, Google and Twitter have taken steps to combat the worst content offenders, using artificial intelligence to automatically block terrorist propaganda and hiring thousands of so-called content moderators to manually check for illegal material online (arguably the worst job in tech). Mark Zuckerberg, Facebook's chief executive, even made "fixing" the social network his New Year's resolution for 2018.

And, to a degree, it's worked: Social media companies removed 59 percent of suspected hate speech across Europe last May compared with just 28 percent in December 2016, according to a recent EU-wide study. But that's like saying a city's fire department successfully put out just 59 percent of all suspected local fires — it's a good start, but nothing to write home about.

* * *

There's a more serious problem, though — one that lawmakers should acknowledge as they develop new censorship laws.

As strong as the case may be for expunging repugnant material, it can be difficult, if not impossible, to decide which social media posts are actually illegal, especially when the definition of illegality varies between countries.

What's legitimate free speech to some represents harmful material to others. The recent (temporary) blocking of Beatrix von Storch, a far-right German politician, on Twitter after she posted an anti-Muslim message is just the latest example of tough decisions social media companies must now make to placate local lawmakers.

In outsourcing the monitoring of speech to tech companies, politicians are doing both Big Tech and their citizens a disservice.

Facebook, Google and Twitter may have more technical prowess and manpower dedicated to dealing with the problem. But these companies — whose quarterly earnings and investors' demands often run counter to governments' content policing plans — should not be the ones having to decide what can be allowed through digital safety nets.

In this new era of global online censorship, tough calls will have to be made between free speech and online safety, and elected officials, not opaque tech companies, must be the ones to judge what content crosses the line. If you're going to censor the web, you better make sure those doing so are accountable to voters.

Mark Scott is chief technology correspondent at POLITICO.