Do we finally have the technology to stop internet piracy for good? Publishers and content creators have long been trying to find a “secret weapon” that will win the war against digital pirates. With recent advances in machine learning and pattern-matching algorithms, is that goal now finally within reach? And perhaps a better question might be: should we use these technologies? What will be the consequences if we do?

This is not an abstract question. At the heart of the issue is the EU’s Directive on Copyright in the Digital Single Market, which was narrowly approved by the European Parliament’s Legal Affairs Committee on 20 June 2018. That doesn’t mean the Directive will now automatically be adopted into law; the next step is for the entire European Parliament to vote on the proposed legislation on 5 July. If that vote is unsuccessful, then the law may still be amended.

One of the most controversial elements of the Directive is Article 13, which requires online platforms to implement “effective and proportionate measures” to prevent copyrighted content from being uploaded to their services by users. Failure to do so could make the platforms themselves liable. Critics worry that, in practice, this will force platforms to preemptively filter everything users upload in order to screen out copyrighted material. This could result (according to activists) in everything from grumpy cat memes to embarrassing footage of politicians becoming effectively unpublishable on social media and other networks.

Curious to know more about the EU Copyright Directive? We’ve put together some facts and figures in the infographic below (click for a bigger version). What do our readers think? We had a comment sent in from Matej, who thinks that the EU Copyright Directive is so flawed that it essentially amounts to what he calls “censorship”. He argues that all uploaded content would need to be monitored. Would that inevitably lead to false positives, or to moderators erring on the side of caution and deleting non-infringing material?

To get a reaction, we spoke to Cory Doctorow, author, digital rights activist, and co-editor of the blog Boing Boing. What would he say to Matej’s point? Did he think the use of the word “censorship” was accurate?

The intended consequence here is to create a system where software works out whether or not something is likely to infringe copyright, and does it to the best of software’s ability, and the intended outcome of that is that we will sacrifice the principle of ‘fair dealing’ in order to allow software to do this. Because software can’t look at a picture and know that the reason it’s a partial match for another picture is because it’s a critical commentary or parody that reuses the picture in a way that’s consistent with fair dealing, or whether it’s just a slavish reproduction or an inadequate reproduction where the reason there are differences is because there is some degradation in the copying process. And since software can’t tell the difference, it’s going to err on the side of caution. This is a rule, after all, designed to make companies prevent infringing material from ever seeing the light of day. It’s not about responding to complaints, but rather about preemptive removal. So, the intended consequence is that fair dealing is the collateral damage in the war on copyright infringement.

There’s a really significant problem with that, and it may not be obvious to people unfamiliar with the ins-and-outs of copyright law. Because copyright is, at its core, a state-regulated monopoly over expression. Certain words, or images, or tunes, or other forms of expressive speech are put under the monopolistic control of the people who created them. And I’m one of those creators. I like having a monopoly over my words. But I acknowledge that monopolies of expression are, by definition, dangerous to expression. So, the parliaments of the world, since the first days of the first copyright, have always carved out or created an escape or safety valve for free expression. And that’s called ‘fair dealing’. That’s the right to do things not with the permission of the copyright holder, but without it, and even against the wishes of the copyright holder.
Because, for reasons that should be obvious to everyone, giving a person who is in line for some criticism a veto over who can make that criticism is not good public policy, and will not produce robust critical views. So, that’s the intended consequence.

So, that would be the Directive doing what it’s supposed to do. By the sounds of it, however, there are also some unintended consequences that Cory Doctorow is predicting?

The unintended consequence is that these systems are not very good at making partial matches. Our software just can’t make a match and tell you with certainty whether this is a substantive duplicate of that. And, as a consequence, a large number of materials that are not copyrighted, or whose creators wish to have them shared, or that don’t represent an infringement in some other way, will be blocked. We know that the rate of error in the best machine learning systems is in the 10-15% range, and in the median machine learning system is more like 40%. And when we’re talking about billions of tweets, and billions of Facebook updates, and hundreds of millions of videos, and billions of photos, then an error rate of even 1% would fill the Bodleian library with materials that should be allowed, that will not be allowed, and it would fill the Bodleian every day. That is the unintended, but absolutely foreseeable, consequence; when you fish with a great big tuna net, you catch a lot of dolphins in it.
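Doctorow’s argument here is a back-of-the-envelope calculation, and it can be sanity-checked with a few lines of arithmetic. The upload volumes below are illustrative assumptions (rough orders of magnitude, not measured platform figures), and the 1% error rate is the deliberately generous floor he mentions:

```python
# Scale check: even a small false-positive rate, applied to web-scale
# upload volumes, wrongly blocks a huge amount of legitimate material
# every single day. All volume figures are illustrative assumptions.

daily_uploads = {
    "tweets": 500_000_000,           # assumed order of magnitude per day
    "facebook_updates": 1_000_000_000,
    "photos": 2_000_000_000,
    "videos": 5_000_000,
}

false_positive_rate = 0.01  # a generous 1%, far below the 10-40% quoted

total = sum(daily_uploads.values())
wrongly_blocked = int(total * false_positive_rate)

print(f"Assumed uploads per day: {total:,}")
print(f"Wrongly blocked per day at 1% error: {wrongly_blocked:,}")
```

Under these assumptions the filters would misfire on tens of millions of items daily, which is the point of the Bodleian comparison: the absolute number of false positives is enormous even when the error *rate* sounds small.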

But does this really justify the label ‘censorship’? It sounds like there might be accidental false positives, but surely censorship implies intent. Or are there other knock-on effects of the Directive that Doctorow is worried about?

There is also a malicious consequence, which is again, I think, a foreseeable one, but which is in some ways different from the unintended consequence. The rule requires companies to allow for mass-scale copyright claims… and those claims have to go live straight away, because the last thing you’d want, if you were, say, Disney, is to release a hit cartoon and then find that during the 72-hour window when you expect to make all of your money, the copyright filters haven’t kicked in yet. The intersection of a system that allows for mass uploads and that also permits no delay is a system that will be very easy to abuse. So, perhaps someone who’s just a prankster could upload all of the works of Shakespeare and put them in the database of copyrighted works under their own name, and nobody would be allowed to quote Shakespeare on any of the European servers until people who work for those companies could comb through billions of entries to locate the thousands of false ones, which is a process that might take months, or even years, to unwind. Plus, the person who’s making these copyright submissions can use bots to make and replace the submissions at speed, while the companies sorting through the submissions have to use human intelligence to comb back through them and find the malicious ones.

But even worse, and even scarier, is the possibility that people will implement tactical censorship at specific moments in our public discourse. So, in the run-up to the Turkish elections prior to the most recent ones, high-ranking government officials closely associated with Erdoğan were recorded soliciting bribes, and videos containing that audio track were uploaded to YouTube. This prompted the country to block YouTube and to take other actions to restrict access to those videos.
But, in future, if there is a specific document, file, video, or other material that is incredibly relevant to a pending event like a referendum or election, people who don’t want that material in the public eye during a key moment could selectively lay claim to a copyright over it, and do so everywhere all at once using relatively easy-to-write bots that would appear to come from countries all over the world and would not seem to be acting in concert. They could simply silence key materials at key moments. We are at the beginning of the age of information warfare, and we are crafting a super-weapon that will require virtually no skill and virtually no resources to wield, in the form of these censorship machines. So, I think that ‘censorship’ is the right word, for all of those reasons.

We also had a comment from Michalis, who argues that copyright is an essential motivation in the creation of innovative ideas. Can we at least agree that there needs to be some form of strong copyright protection in order to encourage creativity and support creators?

To get a response, we put Michalis’ comment to Maud Sacquet, Senior Manager of Public Policy at the Computer and Communications Industry Association. What would she say?

Balanced copyright rules are indeed key for innovation and creation. However, the proposed copyright reform is not fit for the digital age – it is backward-looking. It requires, for example, the implementation of mandatory filters for content uploaded by users on open platforms. This could create censorship on a large scale. This is why we urge MEPs to contest the proposal and to support instead balanced copyright rules which respect online rights and support Europe’s digital economy.

Next up, we had a comment from Mykolas, who takes completely the opposite approach. He says he doesn’t really believe in the idea behind copyright, and is basically able to get anything he wants for free online. What would Maud Sacquet say to him?

People are rightly frustrated with copyright’s complexity, but throwing out all incentives for creativity isn’t the solution. The best solution to piracy is a robust marketplace of competitive, legitimate alternatives and balanced copyright rules. However (and as explained above) we oppose this copyright proposal because it is neither balanced nor fit for the digital age. Content uploaded by European Internet users could be taken down on a large scale, despite being legal, because filters cannot identify copyright exceptions such as parody or quotation.

Finally, we asked Cory Doctorow what he would recommend people actually do in response to his warnings. Here’s what he had to say:

Saveyourinternet.eu is the place to go to contact your MEP, and you should contact your MEP! I know in the UK, oftentimes that means contacting a UKIP or other far-right MEP, or sometimes a Labour MEP. And, weirdly, both of them are on the wrong side of this. This is an area where I think they’re vulnerable, because you can talk to them about how their future, the future of saying things that are unpopular in the halls of power, depends on there being a diversity of places to speak. When you consolidate speech into a tiny number of hands then you end up reducing the places where these heterodox ideas can be discussed. So, that’s where I would urge you to start. I’d urge you to keep your eyes on the prize here. There are European elections coming up [in 2019], and there is no MEP in Europe, except for maybe the Pirate Party, whose campaign is about copyright. None of them are hoping to get their job back by being a copyright campaigner. So every one of them will be willing to jettison copyright if it’s a choice between them and the continued work in the job that they’re doing now. And if we show them that they’re vulnerable on copyright, that copyright is a thing that could cost them their jobs, this is a golden opportunity to get them to back away from it.

Is copyright reform criminalising an entire generation? Will new EU copyright laws censor the internet? Or will they protect content creators and ensure they are paid fairly for their work? Let us know your thoughts and comments in the form below and we’ll take them to policymakers and experts for their reactions!

IMAGE CREDITS: (c) BigStock – Javier Brosch; PORTRAIT CREDITS: Maud Sacquet (cc) Flickr – Lisbon Council; Cory Doctorow (cc) Wikimedia – Salimfadhley

Editorially independent content supported by: Google. See our FAQ for more details.