Digital Politics is a column about the global intersection of technology and the world of politics.

Facebook clamped down on online coronavirus rumors and banned ads that promoted the sale of medical face masks. Google flooded people's search results about the growing pandemic with government alerts and removed YouTube videos urging people not to get treated. Twitter highlighted official reports about what to do when showing symptoms and demoted crazy conspiracy theories.

In recent days, the world's largest social media platforms have been pulling out all the stops to combat the wave of false reports, hacking attempts and outright lies that have spread like wildfire about COVID-19.

It hasn't worked.

A quick search across these platforms still brings up reams of misinformation — just as the number of people infected by the disease worldwide passes 120,000, with roughly 4,300 deaths.

For once, this is not a failure of Big Tech to clamp down on sophisticated — and coordinated — online campaigns to spread fake news. So far, there's no evidence that a group of state-backed accounts is actively promoting any of these coronavirus-related falsehoods to the wider public.

Instead, people are sharing rumors, fake stories and half-truths about COVID-19 with each other directly across the likes of Instagram and Twitter as they struggle to understand how best to protect themselves and their families.

That's proving to be a serious problem.

Big Tech and government agencies have created task forces to fight coordinated misinformation campaigns. But they are relatively powerless against this sort of grassroots, user-generated misinformation, which has become the main way falsehoods spread across social media — as fast as the virus itself jumps from country to country.

Tech companies and policymakers are finding that the tens of millions of euros and dollars they have spent to detect, monitor and combat sophisticated digital misinformation campaigns have little effect when regular social media users, and not foreign governments, are the ones spreading falsehoods.

So much time has been spent on tackling coordinated, state-backed campaigns (everything from Russian-backed efforts in the 2016 U.S. presidential election to last year's European Parliament election) that finding ways to stop misinformation created by real-life users from going viral is proving to be a daunting task.

In part, that's because social media companies have staked out their position as defenders of free speech.

Their networks, tech executives like to say, give voice to the little people, allowing communities to connect online in ways that would be impossible in the real world. It's not up to companies, they add, to determine what should, and should not, be put online.

But that strategy — one that has ruffled feathers in both authoritarian and democratic countries — also means it's difficult, if not impossible, to throttle social media posts from individuals, as well as from politicians like Donald Trump, when they promote false claims and outlandish views on COVID-19.

It's not for lack of trying.

Google, whose YouTube platform has been criticized for failing to stop hate speech, climate change denial and other forms of misinformation, banned some coronavirus-related apps from its smartphone store and blocked people looking to make money from the pandemic from buying ads on its digital networks.

Facebook, too, has tweaked its algorithms to promote official accounts to its 2.4 billion users and removed false content about the coronavirus in ways the social networking giant previously said it never would. Watch out for lawmakers pushing the companies to apply these tactics to other hot-button content issues when the coronavirus pandemic eventually ebbs away.

But despite the resources these companies are throwing at the problem, it's hard to stop misinformation spreading when users — people who have been urged by these companies over years to share every detail of their daily lives online — are now the engine for how these false claims are spread.

Social networks, by their very nature, are social. So with people from Brussels to Boston now fretting about how to respond to the growing threat of COVID-19, it's no surprise that Big Tech has few levers to pull against rising misinformation when it is users, not state-backed, coordinated campaigns, driving these online falsehoods.

After years of creating these digital echo chambers, it's almost impossible now to switch them off.

Mark Scott is chief technology correspondent at POLITICO.
