Content Moderation At Scale Is Impossible: The Case Of YouTube And 'Hacking' Videos

from the how-do-you-deal-with-this? dept

Last week there was a bit of an uproar about YouTube supposedly implementing a "new" policy banning "hacking" videos on its platform. It came to light when Kody Kinzie of Hacker Interchange tweeted about YouTube blocking an educational video he had made about launching fireworks via WiFi:

We made a video about launching fireworks over Wi-Fi for the 4th of July only to find out @YouTube gave us a strike because we teach about hacking, so we can't upload it. YouTube now bans: "Instructional hacking and phishing: Showing users how to bypass secure computer systems" — Kody (@KodyKinzie) July 2, 2019

Kinzie noted that YouTube's rules on "Harmful or dangerous content" now listed the following as an example of what kind of content not to post:

Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.

This resulted in some quite reasonable anger at what appeared to be a pretty dumb policy. Marcus "MalwareTech" Hutchins published a detailed blog post explaining why the change was problematic, noting that it simply reinforces the misleading idea that all "hacking is bad."

Computer science/security professor J. Alex Halderman chimed in as well, to highlight how important it is for security experts to learn how attackers think and function:

I've taught college-level computer security at @UMich for 10 years, and the most important thing we teach our students is how attackers operate. YouTube's new policy will do nothing to stop bad guys, but it will definitely make it harder for the public to learn about security. https://t.co/1wvB63c5aB — J. Alex Halderman (@jhalderm) July 3, 2019

Of course, some noted that while this change to YouTube's description of "dangerous content" appeared to date back to April, there were complaints about YouTube targeting "hacking" videos last year as well.

Eventually, YouTube responded to all of this and noted a few things. First, and most importantly, the removal of Kinzie's videos was a mistake, and the videos have been restored. Second, this wasn't a "new" policy, but rather just the company adding some "examples" to an existing policy.

This raises a few different points. Some will say that since this was just another moderation mistake, it's a non-story, but it remains an important illustration of the impossibility of content moderation at scale. You can certainly understand why someone might decide that videos explaining how to "bypass secure computer systems or steal user credentials and personal data" would be bad and potentially dangerous -- and you can understand the thinking that says "ban it." And, on top of that, you can see how a less sophisticated reviewer might fail to distinguish between "bypassing secure computer systems" and a fun hacking project like "launching fireworks over WiFi."

But it also demonstrates that there are different needs for different users -- and having a single, centralized organization making all the decisions about what's "good" and what's "bad" is inherently a problem. Going back to Hutchins' and Halderman's points above, even if the Kinzie video was taken down by mistake, and even if the policy is really aimed only at nefarious hacking techniques, there is still value in security researchers and professionals being able to keep on top of what more nefarious hackers are up to.

This is not all that different from the debate over "terrorist content" online -- where many are demanding that it be taken down immediately. And, conceptually, you can understand why. But when we look at the actual impact of that decision, we find that removing such content appears to make it harder to stop actual terrorist activity, because that activity is now harder to track.

There is no easy solution here. Some people seem to think there must be some magic wand to wave that says, "leave up the bad content for good people with good intentions to use to stop that bad behavior, but block it from the bad people who want to do bad things." But that's not really possible. Yet, if we're increasingly demanding that these centralized platforms rid the world of "bad" content, at the very least we owe it to ourselves to examine whether that set of decisions has negative consequences -- perhaps even worse than just letting the content stay up.


Filed Under: content moderation, content moderation at scale, hacking, hacking videos

Companies: youtube