14 years is a very long time in Internet history.

On December 14th, 2004, Avnish Bajaj, now a co-founder at VC firm Matrix Partners, was arrested and sent into judicial custody, without bail, until December 24 that year. Bajaj had gone to Delhi to meet the police and help with an investigation into an attempt to sell a copy of the infamous DPS MMS clip via Baazee.com. Baazee.com, which Bajaj had founded and sold to eBay, was the precursor to eBay.in, and allowed users to buy and sell physical and digital products. The seller had put up a listing on Baazee for the MMS clip, offering to email it to buyers once payment was made. Upon being informed about the clip, Baazee had removed it, and was assisting the police with the investigation.

Bajaj’s arrest was a significant event in Internet policy in India.

The importance of Safe Harbor for Internet businesses and users

When the IT Act was amended in 2008, despite its flaws, it brought in “safe harbor” for “intermediaries”. Intermediaries – which include social networks, messaging platforms, e-commerce marketplaces, video sharing sites, blogs (when it comes to comments that you leave on them), payment companies that enable transactions, and domain registrars – are seen merely as entities that allow the sharing of information, and not as “publishers” in the traditional sense of the word. Just as you shouldn’t be held liable for my comments or my video, platforms are protected from liability for how users use them. In the same vein, marketplaces are not responsible for the actions of sellers, and, most importantly, ISPs are not responsible for what you access. This limitation of liability, known as “intermediary liability protection”, ensures that platforms can enable billions of users to communicate, message, publish, sell, and interact.

Safe harbor is fundamental to the growth of the Internet.

Why safe harbor was strengthened in 2015

For safe harbor, intermediaries, as per Section 79 of the IT Act, had to follow certain “due diligence” requirements. However, this wasn’t without its challenges: among the provisions of Section 79 was the requirement that, in order to avoid liability, service providers take down certain content once the fact that it is of a certain type is brought to their “actual knowledge”. Intermediaries were acting on frivolous takedown notices, and some chose to take down content rather than risk liability (read research). Notices targeted content conforming to vaguely defined terms like “grossly harmful”, “obscene”, “racially, ethnically objectionable”, “disparaging”, or even legal content such as “blasphemous”, “pornographic”, “paedophilic”, “libellous”, “invasive of another’s privacy”, “hateful”, “relating or encouraging money laundering or gambling”, or “otherwise unlawful in any manner whatever”.

Section 79 was eventually read down by the Supreme Court as a part of the Shreya Singhal judgment in 2015, which defined “actual knowledge” as a court order and/or a notification by the government or its agency, in conformity with the reasonable restrictions on free speech under Article 19(2) of the Indian constitution. While this didn’t incorporate a recourse for the entity whose content would be taken down, it was still an improvement. What the judgment acknowledged was that the medium/enabler of free speech – the platforms – needs to be protected in order to enable free speech.

Issues with amendments to the IT Rules

There is an imminent threat to these protections in India right now: the due diligence requirements are being expanded, and will end up bringing in potential liability for many platforms, while also resulting in increased censorship of content. Citing the impact of fake news, the Government of India is now looking to expand the due diligence requirements under Section 79. A few points:

1. They don’t just affect content and fake news: Given the government’s framing of these changes – in its announcement, and subsequently in news reports based on a limited understanding of the issues – the assumption is that they will only impact WhatsApp. This is incorrect: the requirements, whether of proactive monitoring of “unlawful information” using automated tools, or of registration in India for platforms with more than 5 million Indian users, will also impact advertising networks, payment gateways, Wikipedia, Github, Pastebin, Stackoverflow, and several others.

Think about it: what will Wikipedia do?

2. Proactive censorship will have a disproportionate impact on free speech: The rules call for “proactively identifying and removing or disabling public access to unlawful information or content”.

Impact:

More than pre-censorship: These changes go even beyond what Kapil Sibal had proposed in 2011 – pre-censorship of social media content – because they cover all sorts of information, including code. ISPs essentially would not be able to function. They might just end up blocking parts of the Internet, to avoid the risk of unlawful content being accessed via India.

Disproportionate censorship: This will force platforms to overcompensate so as not to take on liability: they will be more likely to take down content, products, code and information than risk liability. We saw that in 2009-2011, and there is evidence to that end from Rishabh Dara’s research.

Breaking encryption: The requirement for proactive monitoring will require breaking encryption in order to proactively identify content, and checking all the content and information that goes through the pipes.

AI is incapable of dealing with it: Artificial Intelligence and Machine Learning have evolved, but they’re still not good enough to accurately identify human context and judge whether something is illegal. We’re not at Minority Report levels of advancement yet, and we’re seeing that in how platforms are using AI to take down content, and messing up. We really can’t leave the job of judgment to anyone but qualified judges here.

3. Section 79 is an exemption section, not an enabling provision: via amendments to this section, the government of India is trying to bring in provisions like traceability of users on platforms, proactive monitoring of content (effectively surveillance), mandatory assistance to government agencies, and informing users of terms and conditions once a month – which it really doesn’t have the remit to do. Section 79 is meant to ensure that platforms do basic due diligence, and nothing more. To make it an enabling provision, they’ll have to amend the IT Act.

4. Traceability will break end-to-end encryption: Forcing traceability upon platforms significantly impacts privacy: end-to-end encryption will have to be broken to bring in traceability, and this ends up making users of end-to-end encryption more vulnerable. This is a disproportionate requirement, framed specifically with WhatsApp in mind. It is also deeply problematic because the Internet also enables marginalised communities to communicate, interact, publish content, and maybe find love: before Section 377 was read down, imagine how these rules would have impacted apps like Grindr.

Addressing encryption is essential, and the government has experimented with it in the past. But this is a backdoor means of doing the same thing, and clearly beyond the remit of Section 79. We need surveillance reform and an encryption policy. The government should start a separate process, instead of trying to dump everything into Section 79.

5. The 50 lakh user limit is vague and will cover everyone: India has around 350 million Internet users, and over 500 million Internet connections. In this context, 50 lakh, or 5 million, is 1.43% of India’s Internet user base. Think of the number of apps that have 5 million Indian users: every moderately large advertising network and every single ad exchange probably does. Each of these, under the amendments to the rules, will be required to:

(i) be a company incorporated under the Companies Act, 1956 or the Companies Act, 2013;
(ii) have a permanent registered office in India with a physical address; and
(iii) appoint in India a nodal person of contact and an alternate senior designated functionary, for 24×7 coordination with law enforcement agencies and officers to ensure compliance to their orders/requisitions made in accordance with provisions of law or rules.

How many apps in the world will do this? What will the government do – block apps that don’t set up an office in India? Or will the apps decide not to take on the liability, and block out the Indian market themselves? The government probably wants to define significant intermediaries – and by that, I mean it wants to address WhatsApp. This is an underhanded, backdoor means of addressing the challenges posed by significant intermediaries.

Secondly, what is not clear here is what the government means by 50 lakh users – does it mean daily active users, monthly active users, or registered users?
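The 1.43% figure above is simple arithmetic on the numbers cited in this piece (a 50 lakh threshold against roughly 350 million users), and can be checked in a couple of lines:

```python
# Sanity check of the threshold arithmetic, using the figures cited above:
# ~350 million Internet users in India, and a proposed threshold of
# 50 lakh (5 million) users per platform.
INDIA_INTERNET_USERS = 350_000_000
THRESHOLD_USERS = 5_000_000  # 50 lakh

share = THRESHOLD_USERS / INDIA_INTERNET_USERS * 100
print(f"{share:.2f}% of India's Internet user base")  # prints: 1.43% of India's Internet user base
```

In other words, the threshold sits at well under 2% of the user base, which is why so many services would clear it.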

6. It will also impact curated content providers: One fallacious assumption is that the word “intermediary” covers only platforms like, say, YouTube, and not curated content platforms – in a way creating a distinction between different types of service providers. My assessment is that this is incorrect: back in 2011, the initial version of the IT Rules even included definitions for “bloggers”, before the final version focused on just “intermediaries”. The rationale was that while content on a blog might be the liability of the blogger, user generated activity was the blogger’s responsibility only once it was brought to their actual knowledge. Thus, if content providers enable any user generated activity – for example, Netflix might enable integration of community reviews to give viewers more context on shows – safe harbor protections would apply to them. A dilution of safe harbor affects even them.

P.S.: We are holding discussions on the amendments to safe harbor in Bangalore (on the 25th of January) and Delhi (on the 7th of February). Apply to attend here.

Most importantly, please participate in the IT Rules consultation. Details here.