Changes envisaged are far-reaching, making intermediaries accountable for content on their platforms and no longer eligible for safe harbour protection

Almost everyone concerned with keeping the digital landscape free and secure has written to the Indian government. Watchdogs of Internet freedom, security experts, encryption specialists, academics and even human rights organisations have all been petitioning Ravi Shankar Prasad, who heads the Ministry of Electronics and Information Technology (MeitY), against the Information Technology [Intermediaries Guidelines (Amendment) Rules] 2018.

The changes envisaged are far-reaching, making the intermediaries (service providers) accountable for the content on their platforms and no longer eligible for safe harbour protection (see table ‘Amendments are lethal to free speech’ below).

The amended guidelines impose these obligations on intermediaries (any service) with more than five million users in India. Such companies will have to take down questionable content within 24 hours and ensure the traceability of messages. User data will have to be provided within 72 hours of a government request. Intermediaries will also have to mandatorily deploy automated filters to proactively weed out unlawful or noxious content. Besides, they will have to preserve their records for at least 180 days to aid government investigators.

Champions of free Internet and rights organisations worry that the amendments would facilitate unchecked surveillance and seriously undermine the right to privacy. A global coalition of civil society organisations and security experts has warned Prasad that the change will undermine the fundamental right to privacy of users without addressing the problems that the ministry seeks to resolve.

“These not only violate Indian constitutional standards regarding fundamental rights and international human rights law, but also chill free expression and access to information,” the coalition said.

Government brakes on frivolous content

The central criticism against the proposed amendment is that it contravenes a landmark Supreme Court judgement. In the 2015 Shreya Singhal writ case on online freedom of speech, the court clearly stated that online content could be removed from intermediary platforms only by government or court order. This protected the platforms from liability and served as a brake on frivolous or agenda-driven take-down demands.

Software Freedom Law Centre, Delhi, a legal services organisation that works to protect freedom in the digital world, says the amended guidelines' requirement that platforms use automated filters to remove illegal content runs afoul of the law and current jurisprudence.

Tech giants and security experts have joined the free speech lobby in opposing the liability regime on account of its technical problems. They complain that many of the proposals, such as the use of automated tools to proactively identify and remove unlawful content, would be impossible to implement and ought to be dropped.

In their latest missive to Prasad, security professionals pointed out that services using end-to-end encryption cannot provide the level of monitoring required by the Indian government: “Whether it’s through putting a ‘backdoor’ in an encryption protocol, storing cryptographic keys in escrow, adding silent users to group messages, or some other method, there is no way to create ‘exceptional access’ for some without weakening the security of the system for all.”

India is not the only country to go in for such measures. Germany has enforced a strict law, the Network Enforcement Act (NetzDG), which requires rapid removal of hate speech and other toxic content, failing which platforms face fines of up to €50 million. NetzDG, described as “the most ambitious attempt by a Western state to hold social media platforms responsible” for dangerous online speech, is serving as the template for other nations — first Russia and now India have copied sections of the German law.

Australia’s crackdown on social media companies began earlier, in 2015, and entails huge fines for intermediaries, while the UK has put in place stringent rules to combat specified illegal content, such as child abuse and terrorism.

Then why are critics assailing the proposed rules in India? Because the MeitY proposals go well beyond the laws in these countries without including the procedural safeguards or the checks and balances that NetzDG and the EU’s draft Terrorist Content Regulation provide. The German law also puts strong emphasis on transparency from the platforms.

Another reason is that these democracies have appointed strong regulators to oversee the implementation of the privatised law enforcement. Australia has an eSafety Commissioner while the UK has selected Ofcom, the telecom regulator, as the Internet watchdog and is equipping it with the necessary powers to enforce the new “duty of care” laws.

The unease with the Indian guidelines is that they envisage a pervasive surveillance regime predicated on close cooperation between intermediaries and unspecified government agencies. And they come at a time when the digital landscape is already bleak for a large chunk of India’s roughly 500 million Internet users. Access has been cut off repeatedly and for long periods on account of political turbulence in different parts of the country, making India notorious for record shutdowns in recent years.

The frequent shutdowns in India, according to trackers, started in 2015 and have had severe economic repercussions. Brookings Institution, a Washington-based think tank, estimates that Internet shutdowns cost countries about $2.4 billion between July 1, 2015, and June 30, 2016, with the maximum losses incurred by India ($968 million). This figure was eclipsed in 2019, when shutdowns cost India over $1.3 billion, the third-highest loss after Iraq’s and Sudan’s, according to Internet research firm Top10VPN. The latter two are war-torn nations.

The intermediary liability regime will add another layer of opacity to an already bleak digital landscape. Public opinion on the measure is largely uninformed although there is some support for the proposed rules, usually from victims of hate attacks.

The overwhelming question that policy makers have to answer is this: Will filtering end targeted online attacks and disinformation campaigns against the most vulnerable communities in India? American academic Samuel Woolley, an expert on digital misinformation, told The Economist in an interview last month: “Computational propaganda campaigns are now a core strategy for political campaigns the world over. The political polarisation we see online mirrors the state of things offline.”

Amendments are lethal to free speech

Traceability will undermine security for all users and lead to surveillance
Intermediaries must ensure “traceability” of messages by providing information on the originator and receivers of messages. Platforms will have to break end-to-end encryption or install a backdoor, making all users vulnerable. It is an attack on the fundamental right to privacy.

Automated filtering technology will result in censorship and choke free speech
Intermediaries must proactively monitor and delete “unlawful content” through automated tools. These will facilitate pre-censorship by suppressing speech before it becomes public. Existing filters used by social media platforms are already notorious for taking down harmless content. It is against Supreme Court orders.

Takedown of content within short timelines is a major challenge to free speech
Intermediaries must take down illegal content within 24 hours and share information with the government within 72 hours. This is not enough time to analyse requests, seek clarifications or pursue remedies. It will create a perverse incentive to take down content and share user data without due process.

Data retention is antithetical to privacy
Intermediaries must preserve content requested by law enforcement for at least 180 days. This contradicts the principle of “storage limitation” recommended by the Srikrishna Committee.

This was first published in Down To Earth's print edition (dated 1-15 March, 2020)