Users around the world have been outraged by the European Commission's proposal to require websites to enter into Shadow Regulation agreements with copyright holders concerning the automatic filtering of user-generated content. This proposal, which some are calling RoboCopyright and others Europe's #CensorshipMachine, would require many Internet platforms to integrate content scanning software into their websites to alert copyright holders whenever the software detects their content being uploaded by a user, without any consideration of context.

People are right to be mad. This is going to result in the wrongful blocking of non-infringing content, such as the fair use dancing baby video. But that's only the start of it. The European proposal may also require images and text—not just video—to be automatically blocked on copyright grounds. Because automated scanning technologies are unable to evaluate the applicability of copyright exceptions, such as fair use or quotation, this could mean no more image macros, and no more reposting of song lyrics or excerpts from news articles to social media.

Once these scanning technologies are in place, it will also become far easier for repressive regimes around the world to demand that Internet platforms scan and filter content for purposes completely unrelated to copyright enforcement—such as suppressing political dissent or enforcing anti-LGBT laws. Even when used as originally intended, these automated tools are notoriously ineffective, often catching things they shouldn't and failing to catch things they are meant to. These are among the reasons why this new automatic censorship mechanism would be vulnerable to legal challenge under Europe's Charter of Fundamental Rights, as we explained in our last post on this topic.

A Filtering Mandate Infringes the Manila Principles on Intermediary Liability

Two years ago, well before the current European proposal was placed on the table, EFF and our partners launched the Manila Principles on Intermediary Liability. Despite not being a legal instrument, the Manila Principles have been tremendously influential. They have been endorsed by over 100 other organizations and referenced in international documents, such as reports by United Nations rapporteurs and the Organization for Security and Co-operation in Europe (OSCE), along with the Global Commission on Internet Governance's One Internet report.

According to the Manila Principles (emphasis added):

Intermediaries should be shielded from liability for third-party content

Any rules governing intermediary liability must be provided by laws, which must be precise, clear, and accessible.

Intermediaries should be immune from liability for third-party content in circumstances where they have not been involved in modifying that content.

Intermediaries must not be held liable for failing to restrict lawful content.

Intermediaries must never be made strictly liable for hosting unlawful third-party content, nor should they ever be required to monitor content proactively as part of an intermediary liability regime.

Forcing Internet platforms (i.e., intermediaries) into private deals with copyright holders to automatically scan and filter user content is, effectively, a requirement to proactively monitor user content. Since sanctions would apply to intermediaries who refuse to enter into such deals, this amounts to an abridgment of the safe harbor protections that intermediaries otherwise enjoy under European law. This directly contravenes not only the Manila Principles but also Europe's own E-Commerce Directive.

The Manila Principles don't ban proactive monitoring obligations for the sake of the Internet intermediaries; the ban exists to protect users. When an Internet platform is required to vet user-generated content, it has an incentive to do so in the cheapest manner possible, to ensure that its service remains viable. This means relying on error-prone automatic systems that place copyright holders in the position of Chief Censors of the Internet. The proposal also provides no recourse for users in the inevitable cases where automated scanning goes wrong.

That doesn't mean there should be no way to flag copyright-infringing content online. Most popular platforms already have systems in place that allow their users to flag content—whether for copyright infringement or for violations of terms of service or community standards. In Europe, the United States, and many other countries, the law also requires platform operators to address infringement notices from copyright owners; even this system is the subject of considerable abuse by automated takedown tools. We can expect to see far more abuse when automated copyright bots are also put in charge of vetting the content that users upload.

Europe's mandatory filtering plans would give far too much power to copyright holders and create onerous new barriers for Internet platforms that seek to operate in Europe. The automated upload filters would become magnets for abuse—not only by copyright holders, but also governments and others seeking to inhibit what users create and share online.

If you're in Europe, you can rise up and take action using the write-in tool below, put together by the activists over at OpenMedia. This tool will allow you to send Members of the European Parliament your views on this repressive proposal, in order to help ensure that it never becomes law.