If it feels like certain high-profile YouTubers get way more lenience when it comes to content moderation than everyone else does, that's apparently because they really do, according to a new report.

The Washington Post spoke with almost a dozen former and current YouTube content moderators, who told the paper that the gargantuan video platform "made exceptions" for popular creators who push content boundaries.

“Our responsibility was never to the creators or to the users,” one former moderator told the Post. “It was to the advertisers.”

The employees told the Post in interviews that YouTube's internal guidelines for how to rate videos are confusing and hard to follow. Workers are also "typically given unrealistic quotas by the outsourcing companies of reviewing 120 videos a day," the Post reports, which makes it difficult to scrutinize longer videos without skipping over content that may turn out to be problematic. (A YouTube spokesperson told the Post it does not give moderators quotas.)

The flashpoint

The difference status makes became clear in the first days of 2018, moderators said, when Logan Paul drew criticism for posing with and apparently mocking the body of a dead man in a video. Two weeks passed before Google took any public action, removing Paul from the Google Preferred advertising program.

A few weeks later, Paul Tasered a dead rat in another video, again violating YouTube's guidelines against violent and graphic content. That video earned him a two-week suspension during which ads were disabled entirely on his videos, a penalty criticized as both insufficient and inconsistent.

Many employees inside the company were just as unhappy with the situation as outside observers were. The decision not to ban Paul permanently from the platform “felt like a slap in the face,” a moderator told the Post. “You’re told you have specific policies for monetization that are extremely strict. And then Logan Paul broke one of their biggest policies and it became like it never happened.”

YouTube told the Post that it does indeed maintain two sets of content standards, explaining that it holds content eligible for advertising to a higher bar than content that is merely allowed on the platform. That appears to stem partly from the fallout of the Paul incidents, after which YouTube said it would vet content in its Google Preferred program more rigorously.

Widespread issues

YouTube's issues with navigating the moderation and monetization of extreme and graphic content are not new. The company has a years-long track record of scandals that have cost it ad revenue, and each new crisis makes consistent, universal standards seem harder for it to maintain.

In 2018, the company banned professional conspiracy theorist Alex Jones from its platform, following moves from Twitter and Facebook to do the same. Jones was in violation of YouTube's policies against hate speech and harassment, the company said at the time. And it took until June of this year for YouTube to ban videos that promote Nazi ideology and other forms of white supremacist hate speech, again following intense pressure from both inside and outside the company.

The workers on the front lines of content moderation are particularly disempowered to push for change or clarity because they, like their peers at most major platforms, are not Google employees but rather work for third-party outsourcing firms. The toll these jobs take on their workers is now extremely well documented, both domestically and abroad. One former moderator filed a lawsuit against Facebook in 2018, alleging the work left her with severe post-traumatic stress disorder.

One YouTube moderator told the Post that ultimately the bottom line is, well, the bottom line. “The picture we get from YouTube is that the company has to make money," they said. "So what we think should be crossing a line, to them isn’t crossing one.”