On June 20, the European Parliament's Legal Affairs Committee (JURI) will vote on the new Copyright Directive and decide whether it will include the controversial "Article 13" (automated censorship of anything an algorithm identifies as a copyright violation) and "Article 11" (no linking to news stories without paid permission from the site).

These proposals will make starting new internet companies effectively impossible: Google, Facebook, Twitter, Apple, and the other US giants will be able to negotiate favourable rates and build out the infrastructure to comply with these proposals, but no one else will. The EU's regional tech success stories (say Seznam.cz, a successful Czech search competitor to Google) don't have $60-100 million lying around to build out their filters, and they lack the leverage to extract favourable linking licenses from news sites.

If Articles 11 and 13 pass, American companies will be in charge of Europe's conversations, deciding which photos and tweets and videos can be seen by the public, and who may speak.

The MEP Julia Reda has written up the state of play on the vote, and it's very bad. Both left- and right-wing parties have backed the proposal, including (incredibly) France's Front National, whose YouTube channel was just deleted by exactly the kind of copyright filter they're about to vote to universalise.

So far, the focus in the debate has been on the intended consequences of the proposals: the idea that a certain amount of free expression and competition must be sacrificed to enable rightsholders to force Google and Facebook to share their profits.

But the unintended — and utterly foreseeable — consequences are even more important. Article 11's link tax allows news sites to decide who gets to link to them, meaning that they can exclude their critics. With election cycles dominated by hoaxes and fake news, the right of a news publisher to decide who gets to criticise it is carte blanche to lie and spin.

Article 13's copyright filters are even more vulnerable to attack: the proposals contain no penalties for false claims of copyright ownership, but they do mandate that the filters must accept copyright claims in bulk, allowing rightsholders to upload millions of works at once in order to claim their copyright and prevent anyone from posting them.

That opens the doors to all kinds of attacks. The obvious one is that trolls might sow mischief by uploading millions of works they don't hold the copyright to, in order to prevent others from quoting them: the works of Shakespeare, say, or everything ever posted to Wikipedia, or my novels, or your family photos.

More insidious is the possibility of targeted strikes during crises: stock-market manipulators could use bots to claim copyright over news about a company, suppressing its sharing on social media; political actors could suppress key articles during referendums or elections; corrupt governments could use arm's-length trolls to falsely claim ownership of footage of human rights abuses.

It's asymmetric warfare: falsely claiming a copyright will be easy (because the rightsholders who want this system will not tolerate jumping through hoops to make their claims) and instant (because rightsholders won't tolerate delays while their new releases are being shared online at their moment of peak popularity). Removing a false claim, by contrast, will require that a human at an internet giant looks at it, sleuths out the true ownership of the work, and adjusts the database, and does so for millions of works at once. Bots will be able to pollute the copyright databases far faster than humans could possibly clean them.

I spoke with Wired UK's KG Orphanides about this, and their excellent article on the proposal is the best explanation I've seen of the uses of these copyright filters to create unstoppable disinformation campaigns.

Doctorow highlighted the potential for unanticipated abuse of any automated copyright filtering system to make false copyright claims, engage in targeted harassment and even silence public discourse at sensitive times.

"Because the directive does not provide penalties for abuse – and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public display – it will be trivial to claim copyright over key works at key moments or use bots to claim copyrights on whole corpuses. The nature of automated systems, particularly if powerful rightsholders insist that they default to initially blocking potentially copyrighted material and then releasing it if a complaint is made, would make it easy for griefers to use copyright claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum or, more generally, public domain content such as the entirety of Wikipedia or the complete works of Shakespeare.

"Making these claims will be MUCH easier than sorting them out – bots can use cloud providers all over the world to file claims, while companies like Automattic (WordPress) or Twitter, or even projects like Wikipedia, would have to marshal vast armies to sort through the claims and remove the bad ones – and if they get it wrong and remove a legit copyright claim, they face unbelievable copyright liability."

The EU's bizarre war on memes is totally unwinnable [KG Orphanides/Wired UK]

EU censorship machines and link tax laws are nearing the finish line [Julia Reda]