Tuesday, June 19th, 2018 (10:36 am)

The European Parliament will tomorrow vote on a reformed Copyright Directive, which among other things contains provisions requiring online platforms to install automated upload filters that many fear could result in significant “over-blocking” of lawful content (censorship), as well as restrictions on news linking.

The proposed “content recognition technologies” appear to form part of the proposed Article 13, which makes intermediaries (instead of just end-users) liable for uploads by their users and seems to circumvent the existing E-Commerce Directive. It would essentially require businesses to implement automated filters that scan for, and then block, copyrighted videos, photos, music, text or code in user-submitted content.

Article 13.1

Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate. The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant, adequate reporting on the recognition and use of the works and other subject-matter.

On the surface this no doubt sounds like a wonderful idea to some, and we can’t blame rights holders for seeking to defend their property while also ensuring a fair return for the creators of such content. The problem is that such automated filters tend to be dumb tools that act aggressively in order to protect the operator against assuming any legal liability for something a user has shared (a natural consequence of the proposed law).
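To illustrate why such filters tend to over-block, here is a minimal sketch (purely hypothetical; the database, names and matching logic below are our own invention, not how any real system such as YouTube’s Content ID actually works) of a fingerprint-matching upload filter. Because a match is judged purely on content, the filter cannot distinguish piracy from quotation, parody or licensed use:

```python
import hashlib

# Hypothetical fingerprint database supplied by rights holders:
# maps the hash of a protected work (or fragment) to its owner.
RIGHTSHOLDER_DB = {
    hashlib.sha256(b"full text of a copyrighted song lyric").hexdigest(): "BigMusicCo",
}

def naive_upload_filter(upload: bytes) -> str:
    """Block any upload containing a fragment that matches a registered fingerprint.

    The filter has no notion of licensing, quotation, parody or fair
    dealing -- a match is a match, so it errs on the side of blocking.
    """
    for fragment in upload.split(b"\n"):
        digest = hashlib.sha256(fragment).hexdigest()
        if digest in RIGHTSHOLDER_DB:
            return f"BLOCKED (matches work owned by {RIGHTSHOLDER_DB[digest]})"
    return "ALLOWED"

# A journalist quoting the lyric in a review is blocked just the same as a
# wholesale pirate copy -- the filter cannot tell the two apart.
review = b"My review of the new single:\nfull text of a copyrighted song lyric"
print(naive_upload_filter(review))  # prints: BLOCKED (matches work owned by BigMusicCo)
```

Real recognition systems use fuzzy perceptual fingerprints rather than exact hashes, but the underlying limitation is the same: the decision is made on what the content *is*, not on whether the uploader had a right to post it.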

The challenge is already evident in the millions of copyright takedown notices that rights holders issue against Google and other websites today, which are themselves often auto-generated. Many such requests end up being rejected because they accidentally target legal content or commentary, including material uploaded by the creators themselves (e.g. last year Reddit rejected 81% of notices and WordPress 78%).

Over the weekend the YouTube channel of a major French political party, Front National, was also taken offline by a decision of the company’s automated filtering system, which is a touch ironic since the far-right party is also known to be a supporter of the aforementioned copyright reform. The party later described YouTube’s decision as “completely false … arbitrary, political, and unilateral.”

Automated filters simply don’t care whether or not you have permission to upload something, and they fail to understand context, including your right to freedom of speech or the creation of internet memes. As a coalition of 70 Internet and computing luminaries (e.g. Vint Cerf, Sir Tim Berners-Lee and Jimmy Wales) recently put it in an open letter to the EU:

Open Letter by 70 Internet Pioneers (EFF)

By requiring Internet platforms to perform automatic filtering all of the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users. Europe has been served well by the balanced liability model established under the Ecommerce Directive, under which those who upload content to the Internet bear the principal responsibility for its legality, while platforms are responsible to take action to remove such content once its illegality has been brought to their attention. By inverting this liability model and essentially making platforms directly responsible for ensuring the legality of content in the first instance, the business models and investments of platforms large and small will be impacted. The damage that this may do to the free and open Internet as we know it is hard to predict, but in our opinions could be substantial. In particular, far from only affecting large American Internet platforms (who can well afford the costs of compliance), the burden of Article 13 will fall most heavily on their competitors, including European startups and SMEs. The cost of putting in place the necessary automatic filtering technologies will be expensive and burdensome, and yet those technologies have still not developed to a point where their reliability can be guaranteed.

The last point, about the cost and complexity of introducing such a system for smaller businesses, is particularly important (it’s easy for the big players but virtually unworkable for many smaller firms). Admittedly the proposed law does pay some lip service to this issue by vaguely requiring the measures to be “appropriate and proportionate,” but that hardly qualifies as a well-structured safeguard.

Unfortunately Article 13 isn’t the only sting in the tail; we also have to worry about the scope of Article 11 (individual member states can choose how it is implemented), which could result in a situation where you can’t link to a news story or use a snippet of news content without first paying the source for permission. This would make starting a news website incredibly difficult, potentially even impossible, and the administration alone in such a vast online world is the stuff of nightmares.

More to the point, such a change could discourage the sharing of news between people and websites (i.e. less traffic for news sites, not more, and thus a shrinking business). Bigger players, such as Google and Facebook, would of course be able to use their clout to negotiate favourable rates, but smaller organisations would have no such leverage.

A tax on linking may also enable some sites or organisations to effectively silence their critics via the backdoor, resulting in yet more censorship. This is just plain crazy. On the other hand, Article 11 might still allow links to a site’s domain, just not to the news article itself (unless you pay), which is silly.

The small bit of good news is that there is a lot of opposition to these measures from consumer groups, digital rights activists and others. You can even voice your opposition via the https://saveyourinternet.eu website. But the rules may yet pass into law.

The EP is due to vote on all this between the 20th and 21st June. The result could then become law during early July or, if the summer recess intervenes, by late September 2018. Anybody thinking that Brexit will stop this should be cautious because, regardless of what the EU decides tomorrow, the UK may have to implement it before we leave, and in any case the current Government supports many of these measures.

We hope that politicians wake up to the very real dangers of what they could pass tomorrow, ideally before it’s too late.

UPDATE 20th June 2018

We’re saddened to report that, despite significant opposition, the European Parliament’s Committee on Legal Affairs (JURI) has this morning approved Articles 11 (link tax) and 13 (upload filter) of the Digital Single Market (DSM) copyright proposals. A deeply depressing day for the internet and those of us who use it.

The new directive will now go to a full plenary vote before the European Parliament during early July, but this is often little more than a rubber-stamping process, conducted by those who do not appear to understand the full ramifications of what they are about to introduce. Unfortunately we’ve also seen this before, with some of the more technically unworkable aspects of GDPR.