This Will Backfire: Google/Facebook Using Copyright Tools To Remove 'Extremist' Content

from the slippery-slippery-slope dept

They've been pressured to do this for a while, but according to a Reuters report over the weekend, both Google and Facebook have started using some of their automation tools to automatically remove "extremist" content. Both are apparently using modifications to their copyright takedown technology:

The technology was originally developed to identify and remove copyright-protected content on video sites. It looks for "hashes," a type of unique digital fingerprint that internet companies automatically assign to specific videos, allowing all content with matching fingerprints to be removed rapidly.

In other words, the companies aren't (yet) using these tools to automatically determine what's "extremist" and block it; rather, they're just keeping already-flagged content from being reposted. Of course, we're all quite familiar with how badly this can fail in the copyright context, and it's quite likely the same thing will happen here as well. Remember, in the past, under pressure from a US Senator, YouTube took down a Syrian watchdog's channel, confusing its videos with extremist content. And, hell, the same day this was reported, a reporter on Twitter noted that her own Facebook account was suspended because she posted a picture of a friend of hers who had been killed in Syria:

Are you f kidding me???

Banned for 7 days for posting the photo of my friend who was killed in Syria. pic.twitter.com/J89wDCFR3H — Rana H. (@RanaHarbi) June 26, 2016

And that's a big part of the issue here: context totally matters. One person's extremist content may be quite informative or useful in another context.

Yes, I know there's a big push for "countering violent extremism" online these days. And the government, in particular, has been putting lots of pressure on the big tech companies to "do something." But I'm curious what anyone thinks this is actually accomplishing. The people who want to see these videos will still see them. It still seems like a fairly exaggerated threat to think that someone just watching some YouTube videos will suddenly decide to join ISIS. And if that is the case, a much better response is counterspeech: put up other videos that rebut the claims in the "extremist" videos, rather than blocking them across the board.

Of course, if videos are being matched via ContentID, even someone offering commentary on a video to debunk its claims may suddenly find their own videos being taken down as well. I can't see how that's at all helpful.
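To make the fingerprint-matching mechanism concrete, here is a minimal sketch of the naive, exact-hash version of what the quoted excerpt describes. All names here are hypothetical, and this deliberately uses a plain SHA-256 of the file bytes; real systems like ContentID use perceptual hashes designed to survive re-encoding, cropping, and similar edits, which a byte-exact hash cannot do.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the video's 'hash' (naive exact match)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of fingerprints from previously removed videos.
blocked_hashes = {fingerprint(b"previously-removed clip")}

def should_block(upload: bytes) -> bool:
    """Reject an upload whose fingerprint matches a previously blocked video."""
    return fingerprint(upload) in blocked_hashes

# An exact re-upload matches; a one-byte change slips through entirely.
print(should_block(b"previously-removed clip"))   # True
print(should_block(b"previously-removed clip!"))  # False
```

The brittleness shown in the last two lines is exactly why such systems lean on fuzzier perceptual matching, and also why — as the article argues — they cannot see context: a match is a match whether the uploader is a propagandist, a journalist, or someone debunking the original video.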

Filed Under: censorship, contentid, copyright, isis, platforms, radical extremism, videos

Companies: facebook, google