Top Internet Companies Agree To Vague Notice & Takedown Rules For 'Hate Speech' In The EU

from the who-defines-what-hate-speech-is dept

Upon receipt of a valid removal notification, the IT Companies to review such requests against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA, with dedicated teams reviewing requests.



The IT Companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.



In addition to the above, the IT Companies to educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines. The use of the notification system could be used as a tool to do this.

Today, on 31 May, European Digital Rights (EDRi) and Access Now delivered a joint statement on the EU Commission’s “EU Internet Forum”, announcing our decision not to take part in future discussions and confirming that we do not have confidence in the ill-considered “code of conduct” that was agreed.

In short, the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism.


It's easy to say that "hate speech" is bad and that we, as a society, shouldn't tolerate it. But reality is a lot more complicated than that, which is why we're concerned about various attempts to ban or stifle "hate speech." In the US, contrary to what many believe, "hate" speech is still protected speech under the First Amendment. In Europe, that's often not the case, and hate speech bans are more common. But, as we've noted, while it seems like a no-brainer to be against hate speech, the vagueness in what counts as "hate speech" allows that term to be expanded over and over again, such that laws against hate speech are now regularly used for government censorship of things the public says that the government doesn't like.

So consider me quite concerned about the news out of the EU that the EU Commission has convinced all the big internet platform companies -- Google, Facebook, Twitter and Microsoft -- to agree to remove "hate speech" within 24 hours. In other words, it sounds a lot like these companies have agreed to a DMCA-like notice-and-takedown regime for handling "hate speech." Let's be clear here: this will be abused. That's what happens when you give individuals the ability to remove content from platforms. Obviously, these companies are private companies and can set whatever policies they want on keeping up or removing content, but when they come to an agreement with the EU Commission about what they'll remove and how quickly, reasonable concerns should be raised about how this will work in practice, what definitions will be used to determine "hate speech," what kinds of appeals processes there will be, and more. None of that is particularly clear.

And, of course, very few people will raise these issues upfront, because no one wants to be seen as being in favor of hate speech. And that's the real problem. It's easy to create rules for censorship by saying it's just about "hate speech," since almost no one will stand up and complain about that. But that opens the door to all sorts of abuse -- both in how "hate speech" is defined and in how the companies will actually handle the implementation.

Two major human rights groups -- EDRi and Access Now -- have already withdrawn from the EU Commission forum discussing all of this, in protest of how these rules were put together. Their main concern was that the whole thing was set up directly between the EU Commission and the internet companies behind closed doors -- and when you're talking about issues that impact human rights and freedom of expression, that needs to be done openly and transparently.

I recognize why many people may cheer on this move, thinking that it's a way to stop "bad stuff" from happening online, but beware the actual consequences of setting up an opaque process with a vague standard for pressuring platforms to censor content based on notices from angry people. If you don't think this will be abused in dangerous ways, you haven't been paying attention to the last two decades on the internet.

Filed Under: censorship, eu, eu commission, hate speech, internet, notice and takedown, platforms

Companies: facebook, google, microsoft, twitter, youtube