Authorities in India and Sri Lanka temporarily shut down mobile networks or blocked social media apps during riots and protests, claiming that the measures were necessary to halt the flow of disinformation and incitement to violence. In March, online rumors that Muslims were trying to sterilize Sinhalese Buddhists in Sri Lanka led a group of Buddhist men to beat a Muslim man and set fire to his shop. In the ensuing weeks, extremists used Facebook to implore followers to “rape without leaving an iota behind” and “kill all Muslims, don’t even save an infant.” Authorities reacted by blocking four social media platforms that they said were amplifying hate speech.

India leads the world in the number of internet shutdowns, with over 100 reported incidents in 2018 alone. Users in the state of Tamil Nadu shared on WhatsApp a video that appeared to show a child being kidnapped by a masked motorcyclist, along with an audio message warning that 200 “Hindi-speaking” child kidnappers were entering the state. The video was in fact taken from a public-service announcement against child kidnapping in Karachi, Pakistan. Mobs killed at least two people and physically assaulted several others who were mistaken for kidnappers.

Shutdowns are a blunt instrument for interrupting the spread of disinformation online. By cutting off service during such incidents, governments often deny entire cities and provinces access to communication tools at a time when residents may need them the most, whether to dispel rumors, check in with family members, or avoid dangerous areas. In practice, shutdowns serve as a substitute for more effective policies that could counter online manipulation without disproportionately restricting freedom of expression and access to information.

Outsourcing Censorship to Social Media Companies

Even in democracies with a high level of digital literacy, it is often hard to distinguish between trusted sources from one’s own community and information created by a fake-news factory in Macedonia, a troll army in Russia, or an intelligence unit in Iran. Policymakers have focused their ire on tech companies for failing to keep fraudulent content off their platforms, or conversely, for taking down posts or curating news in a way that seems to privilege certain political leanings. US president Donald Trump—who popularized the term “fake news” as a smear for outlets that report critically on his policies—claimed in August that Google search results for the term “Trump News” are “rigged” to promote negative articles. Such controversies demonstrate the challenges faced by tech companies that are compelled to make difficult decisions about what constitutes appropriate speech. The task is especially fraught given that they lack the transparency, accountability, and public input associated with governmental or judicial decision-making in a democracy.

Some democracies have increased companies’ legal liability for third-party content appearing on their platforms, hoping that this will force them to police illegal speech. The European Union is currently mulling rules that would require social media companies to remove content that violates the laws of its 28 member states. The initiative gained momentum after Germany’s Network Enforcement Act (known as NetzDG) came into force last October, obliging social media platforms with over two million local users to monitor and remove “obviously illegal content” or face fines of up to €50 million. Dozens of different German laws contain provisions limiting certain forms of expression, from defamation of religion to depictions of violence. It is left to the companies to interpret these statutes and take action against users, without any due process or prior approval from a court. Imposing similar requirements on tech companies across the EU would likely multiply the confusion and missteps, unduly harming freedom of expression.

Protections against intermediary liability are also eroding on the other side of the Atlantic. There is ongoing pressure in the United States to rescind the “safe harbor” protections in Section 230 of the Communications Decency Act. Without the provision, companies could be held liable for illegal activity on their platforms even when they make good-faith efforts to remove banned content, encouraging them to err on the side of censorship rather than protect legitimate expression.

The Promise of Broad Collaboration to Counter Disinformation

More constructive solutions arise out of collaboration among civil society groups, governments, and tech companies. Italian lawmakers have partnered with journalists and tech firms to pilot a nationwide curriculum on spotting online manipulation. In the United States, several states have passed or proposed laws to expand media literacy programs in local schools. These civic education initiatives include efforts to teach students to evaluate the credibility of online media sources and identify disinformation. Many of the laws require state education officials to engage with media literacy organizations in developing their curricula, and are based on model legislation backed by civil society experts. WhatsApp, which is owned by Facebook, is working with seven organizations in India to draft a digital literacy training program for its users.

Social media companies are also working with civil society to identify disinformation on their platforms. Facebook’s collaboration with the Atlantic Council’s Digital Forensic Research Lab (DFRLab) in the United States led to the discovery of fake accounts controlled by entities in Russia and Iran. Comprova, an initiative by the nonprofit First Draft and the Brazilian Association of Investigative Journalists (ABRAJI), brings together 24 Brazilian news outlets to identify and counter disinformation ahead of the country’s elections. The project marks the first time a journalists’ association has been granted access to WhatsApp’s business API (application programming interface), improving the group’s ability to reach audiences on the platform. Through a partnership with Facebook, the Argentine organization Chequeado runs a bot that automatically matches media claims on the network with fact-checking research.

These examples of cooperation show how government, civil society, and tech companies can each play a productive role in protecting the digital sphere from manipulation. Governments should use caution when asking the private sector to perform a task that they are unwilling or unable to perform themselves: proactively assessing the legality of billions of online posts would require massive additional resources and would constitute a worrying intrusion of the state into social media, where the line between public and private communication is often blurred. But forcing private companies to take on the same task, without proper safeguards, can likewise damage individual rights, reducing transparency and due process while allowing public officials to shift the blame for any abuses.