YouTube has long served as a platform for bad actors to launch massive campaigns of targeted harassment against individuals. After years of professing an inability to curb such behavior, YouTube is finally updating its policies to reflect the ways bad actors actually tend to behave, and the site is promising stiffer consequences for harassers.

Content that "maliciously insults" someone based on their membership in a legally protected class, such as race, gender, or sexual orientation, is now against the rules, YouTube said in a blog post today. "Veiled or implied" threats, of the sort that tend to rile up an online mob to go harass someone, are also now prohibited.

"Something we've heard from our creators is that harassment sometimes takes the shape of a pattern of repeated behavior across multiple videos or comments," YouTube added, catching up to what targets of coordinated online abuse campaigns have been saying for the better part of a decade. As such, the pattern of behavior will now be something the platform takes into account.

Accounts that repeatedly "brush up against" YouTube's new and improved harassment policy may face financial harm for doing so, the company now says:

We're tightening our policies for the YouTube Partner Program (YPP) to get even tougher on those who engage in harassing behavior and to ensure we reward only trusted creators. Channels that repeatedly brush up against our harassment policy will be suspended from YPP, eliminating their ability to make money on YouTube. We may also remove content from channels if they repeatedly harass someone.

Why now?

The change was spurred in part by pushback to YouTube's handling of a harassment campaign against journalist Carlos Maza earlier this year. Maza, who is Latino and openly gay, became a target of conservative personality Steven Crowder, who repeatedly hurled homophobic and racist invective at Maza in videos shared with his millions of subscribers.

YouTube investigated the reports against Crowder. On June 4, the company told Maza in a series of tweets that while Crowder's behavior was "hurtful," it did not violate YouTube's policies. One day later, after widespread negative reactions to that statement, YouTube amended its stance and demonetized Crowder, preventing him from earning ad revenue on his YouTube videos.

At the same time, YouTube updated its hate speech policy to ban neo-Nazi material and similar white supremacist content. That policy update also prohibited "truther"-style denialist content, such as videos claiming the Holocaust or the 2012 mass shooting at Sandy Hook Elementary School never happened.

That update, however, had an extremely rocky rollout. As soon as the policy launched, a journalist who makes documentary films chronicling hate movements had content removed from YouTube, and his channel was demonetized.

Equally applied?

YouTube said in its statement that the new rules apply to everyone, "from private individuals, to YouTube creators, to public officials." Whether that actually translates into practice, however, is anyone's guess.

Historically speaking, YouTube has not been great about applying its already existing policies evenly across the board. Content moderators working on behalf of YouTube have reported that the company deliberately exempts certain high-profile creators from enforcement. But the problem goes well beyond high-profile YouTube "influencers." If anything trips the company up, it's likely to be the claim that "public officials" are also subject to its policies.

Other social media platforms, including Facebook and Twitter, are having a hard time enforcing rules against inflammatory, racist, threatening, or otherwise policy-breaking content on their sites when it comes from politicians, especially but not exclusively US President Donald Trump.

Trump's political rallies, during which he often makes disparaging remarks about a person or group of people, are livestreamed on YouTube, as are other videos in which he does something that would theoretically break YouTube's new terms of service—such as mocking a reporter with a disability. One wonders what, if anything, YouTube will do about videos of this type, which Facebook's and Twitter's policies would leave in place.