Google Moderation Team Decides My Piece About The Impossible Nature Of Content Moderation Is 'Dangerous Or Derogatory'

from the thanks-for-proving-my-point dept

Well, well. A few weeks back I had a big post all about the impossibility of moderating large content platforms at scale. It got a fair bit of attention and has kicked off multiple discussions that continue to this day. However, earlier this week, Google's ad content moderation team apparently decided to help prove my point about the impossibility of moderating content at scale when... it determined that the post was somehow "dangerous or derogatory."

If you can't read that, it says that Google has restricted serving ads on that page because it has determined that the content is "dangerous or derogatory." And then it has a list of possible ways in which the content is either "dangerous or derogatory."

Dangerous or derogatory content

As stated in our program policies, Google ads may not be placed on pages that contain content that: Threatens or advocates for harm on oneself or others;

Harasses, intimidates or bullies an individual or groups of individuals;

Incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalization.

Huh. I've gone back and read the post again, and I don't see how it can possibly fall into any of those categories. Now, if I were a conspiracy theory nutcase, I'd perhaps argue that this was somehow Google trying to "silence" me for calling out its awful moderation practices. Of course, the reality is almost certainly a lot more mundane. Just as the post describes, doing this kind of content moderation at scale is impossible to do well. That doesn't mean they can't do better -- they can (and the post has some suggestions). But, at this kind of scale, tons of mistakes are going to be made. Even if it's just a fraction of a percent of content that is wrongly "moderated," at that scale it still means millions of pieces of legitimate content incorrectly flagged. It's not a conspiracy to silence me (or anyone). It's just the nature of how impossible this task is.

This is also not the first or second time Google's weird morality police have dinged us over posts that clearly do not violate any of their policies. At this point, we get these kinds of notices every few months, and we appeal, and the appeal always gets rejected without explanation. I'm just writing about this one because it's so... fitting.

The fact is these kinds of things happen all the time. Hell, there was a similar story just a week ago, concerning Google refusing to put ads on a trailer for the documentary film The Cleaners... a film all about the impossibility of content moderation at scale. Coincidentally, I had been invited to a screening of The Cleaners a week earlier, and it's a truly fantastic documentary. It does an amazing job not just highlighting the people who sit in cubicles in the Philippines deciding what content to leave up and what to take down, but also laying out the impossibility of that task, helping people understand the very subjective nature of these decisions, and showing how much gray area is left in the eye of the beholder (in this case, relatively low-wage contract employees in the Philippines).

So those are two examples of moderators (obviously incorrectly) deciding to moderate content that is itself about the impossibility of moderating content well. While that serves to reinforce just how impossible this kind of moderation is, it was pretty obviously done without intent or political bias. It's just that when someone has five seconds to make a decision, and has to skim a ton of content without context, they're going to make mistakes. Lots of them.

Now, let's see if this post gets moderated too...

Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community. Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis. While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: adwords, content moderation, dangerous, derogatory

Companies: google