While YouTube, which has over 1.8 billion daily users, has long prohibited videos that promote violence or hatred against people based on their age, religion, gender, immigration status, sexual orientation and other protected categories, the new hate speech policy will go further. The policies will specifically ban videos “alleging that a group is superior in order to justify discrimination, segregation, or exclusion,” based on those categories. That would include groups that “glorify Nazi ideology,” the company said in its announcement, because such beliefs were “inherently discriminatory.”

YouTube said the change in policy will result in the removal of thousands of channels.

Previously, the company had drawn a fine line between “hate” and “superiority,” choosing to limit the spread of white supremacy videos by not recommending them and not allowing advertising on them, but not removing them unless they expressly promoted violence. That approach allowed many videos to slip through the cracks. In 2018, The Washington Post reported that users on social media sites popular with hate groups, such as Gab.ai and 4chan, linked to YouTube more often than any other site.

Separately on Wednesday, YouTube said it wouldn’t allow the account spreading the homophobic and racist videos against the Vox reporter to earn advertising revenue anymore.

The company will also remove content denying that well-documented violent events took place, like the Holocaust or the school shooting at Sandy Hook Elementary School. Victims of those events will also be considered protected under the company’s new policies, as will people in protected castes in India, where certain castes are routinely subject to discrimination.

For years, advocates have asked YouTube to remove Holocaust denial and other white supremacist content.

“Online hate and extremism pose a significant threat, weaponizing bigotry against marginalized communities, silencing voices through intimidation and acting as recruiting tools for hateful fringe groups,” said Jonathan Greenblatt, CEO of the advocacy group the Anti-Defamation League, which has been working with tech companies including YouTube to counter hate speech. “While this is an important step forward, this move alone is insufficient and must be followed by many more changes from YouTube and other tech companies to adequately counter the scourge of online hate and extremism.”

Wednesday’s decision by YouTube to wipe videos that distort or deny major world events that involve violence goes a step further than rivals Facebook and Twitter, which allow hoaxes on their platforms as long as they don’t promote or lead to violence. In Facebook’s case, the company includes a tab with additional reporting that debunks the false narrative. Twitter maintains that false information will be swiftly corrected by its user base of journalists and other fact-checkers.

YouTube also provides additional information about videos involving breaking news and some conspiratorial content, by including videos that it considers authoritative, either because they come from mainstream news organizations or other vetted websites, on top of videos uploaded by everyday users. Wednesday’s announcement will also expand the effort to provide authoritative information about so-called borderline content.

Critics have said that providing a way for users to navigate to more accurate information does not go far enough to quell the problems caused by hoaxes. The issue came to a head recently, when Facebook refused to take down a video that distorted the speech of House Speaker Nancy Pelosi, making her look drunk, even as the video spread virally across the platform.

YouTube took down the video immediately because it has long prohibited altering videos with the intent to deceive the public. (YouTube allows other hoaxes on the platform so long as they don’t promote violence or alter a video clip.) Twitter also kept the video up.

Facebook bans white supremacy, white nationalism, and other hateful ideologies, but Chief Executive Mark Zuckerberg has also defended leaving up content that could derive from that ideology, such as denying the Holocaust.

Silicon Valley companies have historically resisted playing an editorial role when it comes to user-generated content. Now, as tech companies come under greater scrutiny, they are more willing to take an active stance, whether through artificial intelligence detection, human monitoring, or the promotion of sources of information the companies deem to be authoritative. But the steps the firms have taken have largely been incremental and often reactive, as in the case of YouTube’s decision on Wednesday.

After an outcry from advertisers in 2017, YouTube updated its policies to ban ads from appearing alongside content that is hateful, promotes discrimination, or disparages or humiliates protected groups, and moved to limit recommendations. The company claims that this step reduced the spread of those videos by 80 percent.

Critics charged that the company had higher standards for protecting advertisers than for the public, and that hateful videos were allowed to stay up on the site and spread widely even though they had lost ad revenue.
