by David Rizk, EFF Legal Intern

As the UN Special Rapporteur on Freedom of Opinion and Expression recently noted: "[Internet] Intermediaries play a fundamental role in enabling Internet users to enjoy their right to freedom of expression and access to information. Given their unprecedented influence over how and what is circulated on the Internet, States have increasingly sought to exert control over them and to hold them legally liable for failing to prevent access to content deemed to be illegal."

In countries across the world, we're witnessing escalating efforts to turn Internet intermediaries into chokepoints for online free expression. Internet intermediaries—Internet Service Providers (ISPs), online service providers like Twitter and Google, and even Internet cafes—are increasingly subject to legal demands by private citizens and governments worldwide for allegedly infringing or illegal content to be removed, filtered or blocked, and for mandatory collection and disclosure of Internet users' personal data. At the same time, whether Internet intermediaries have liability for content posted by their users, and in what circumstances, remains unsettled in most of the world.

This is especially true in the developing world, where Internet usage is rapidly growing. In leading developing countries like Brazil and India, policymakers are only now beginning to confront the issue and to enact rules that specify what steps intermediaries must take to avoid liability for user-generated content that is allegedly obscene, infringing, defamatory, or otherwise illegal. If local law enforcement authorities and international rights holders' associations have their way, intermediaries will be saddled with strict obligations to take down (or, worse, monitor) content and to retain user data for investigatory purposes, turning litigation-averse intermediaries into de facto censors.

These risks are very much in evidence in India, which is currently clarifying and refining its own standard of liability for Internet intermediaries. After a particularly notorious case holding the managing director of a popular online marketplace, Baazee.com, personally liable for a user's offer to sell an obscene video, the Indian Parliament amended the Information Technology Act in 2008, ostensibly to curb the liability of intermediaries for user content. Taking the EU E-Commerce Directive as its model, the Act extends safe harbor protection to services that 1) are merely transmission conduits, 2) temporarily cache content, or 3) host content and exercise "due diligence" in complying with the Act and other government regulations. Unfortunately, the scope of safe harbor immunity remains unclear, with some courts suggesting that secondary liability for copyright infringement is not precluded.

On February 7, 2011, the government released proposed administrative regulations to further clarify hosts' obligations to perform "due diligence." Disappointingly, the drafting process was not open or inclusive. The government called for comments on the rules by the end of February, a particularly short period, and when newspapers and civil liberties advocates criticized the rules, the government responded with a defensive press release asserting: "these due diligence practices are the best practices followed internationally by well-known mega corporations operating on the Internet." One lesson for other developing democracies is that process matters, not just for legitimacy, but also for producing satisfactory substantive provisions.

The rules came into force quietly in April. Their overbroad scope poses the greatest problem. They require intermediaries to adopt terms of service that prohibit users from hosting, displaying, publishing, sending or sharing any proscribed content, including not just obscene or infringing material, but also any material that threatens national "unity" or "integrity" or "public order," or that is "grossly offensive or menacing in nature," "disparaging," or "otherwise unlawful in any manner whatever." Such a broad standard lacks clear limits on what kinds of content may be taken down and invites abuse.

To make matters worse, the takedown procedure is swift and harsh. Once an intermediary discovers, or is notified of, proscribed content, it must "disable" the content within 36 hours, effectively precluding any investigation of a complaint's legitimacy. Under the rules, intermediaries are also authorized to immediately terminate access or usage rights. They must also preserve related user records for 90 days for investigatory purposes. This is not a flexible, discretionary standard: under the rules, intermediaries must "strictly follow the provisions of the Act or any other laws."

Unlike under the United States' 1998 Digital Millennium Copyright Act (DMCA), users in India have no recourse if their content is wrongfully removed. Nor are there any safeguards against abuse: the rules do not require that the party lodging a complaint hold any rights in the content, or even have a good faith basis for believing that the content is illegal.

It is not difficult to see where this leads. Even before the 2008 amendments, Indian law enforcement authorities were apparently prepared to cut side deals with intermediaries to more swiftly delete content and gain greater access to user data, and not always to great crime-fighting effect. At the same time, technology companies in India are struggling to keep up with law enforcement demands. As for users, the rules will likely create a process parallel to the judiciary for dealing with legal disputes: if you're an entity in India that wants to censor a viewpoint online, why file a copyright or defamation suit in India's judicial system, which suffers from a 30 million case backlog, when you can have the material permanently removed within 36 hours, without any further process, simply by dropping an email to Google? The stakes for citizens' free expression in India are very high indeed. We hope that reports that the Indian government is considering revising the regulations to provide greater liability protections for Internet intermediaries prove true.