Google knows there's a lot of extremist and hate-filled content on YouTube, and the company is now doing more to stop those videos from gaining traction. In a blog post yesterday, Google laid out four new steps it will take to combat extremist videos on YouTube, and most of them expand on existing systems for identifying, flagging, demonetizing, and essentially hiding hate-filled videos.

The most nebulous of the four measures is the third listed in the blog post, which states that Google and YouTube will take a "tougher stance" on videos that don't clearly violate YouTube's policies. The blog post describes these videos as containing "inflammatory religious or supremacist content"; those videos may not fall under YouTube's definition of hate speech, but they'll now be targeted in a similar way.

"In the future these will appear behind an interstitial warning and they will not be monetized, recommended or eligible for comments or user endorsements," Kent Walker, general counsel for Google, wrote in the blog post. "That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints."

This measure is pertinent considering the recent terror attacks in London. According to The New York Times, Khuram Shazad Butt, one of the perpetrators in the London Bridge attack, may have been influenced by YouTube video sermons by a Michigan-based Islamic cleric named Ahmad Musa Jibril.

A report by the International Center for the Study of Radicalization and Political Violence cites Jibril as one of the most prominent "new spiritual authorities" for foreign fighters and states he "does not explicitly call to violent jihad, but supports individual foreign fighters and justifies the Syrian conflict in highly emotive terms." This is exactly the gray area that Google and YouTube are targeting with the new measure.

More people, more machine learning

How are Google and YouTube taking this tougher stance against gray-area videos? The other measures outlined in the blog post explain further. Google will devote more technology to identifying extremist videos, expanding its machine learning systems so they can more reliably tell extremist content apart from everything else.

This is particularly important for any YouTube channel that covers news and current events; until now, those channels have often been demonetized simply because they discuss sensitive, violent, and extremist events in the course of reporting. The blog post states that the company will now use "advanced machine learning research to train new 'content classifiers' to help us more quickly identify and remove extremist and terrorism-related content."
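Google hasn't published how its classifiers work, but training "content classifiers" on labeled examples is standard supervised learning. A minimal, hypothetical sketch using scikit-learn follows; the toy data and labels are invented, and a production system would use far richer signals than text:

```python
# Hypothetical sketch: Google has not disclosed its classifier internals.
# This illustrates the general technique -- training a text classifier on
# human-labeled examples -- using scikit-learn with invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: video metadata labeled by human reviewers. A real
# system would also look at audio, frames, and upload patterns.
texts = [
    "breaking news report on yesterday's attack",
    "analysis of the conflict by our correspondent",
    "join us brothers and take up arms against the enemy",
    "martyrdom is the only path, prepare for battle",
]
labels = [0, 0, 1, 1]  # 0 = news/commentary, 1 = extremist

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(texts, labels)

# Score a new upload; borderline scores would go to human review.
print(classifier.predict_proba(["live coverage of the attack aftermath"]))
```

The toy data also shows why news channels keep getting caught in the net: coverage of an attack and advocacy of one share much of the same vocabulary, which is precisely the distinction the new classifiers are supposed to learn.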

Google will also expand the YouTube Trusted Flagger program, a network of organizations and individuals with the power to flag videos containing offensive content. Google will add 50 expert NGOs to the 63 organizations already in the program and support them with additional grants. Folded into the Trusted Flagger program is YouTube Heroes, which initially raised eyebrows among creators because it lets any YouTube user apply to flag offensive content on the platform. Some creators have expressed concern that user biases could cause their videos to be unnecessarily flagged or demonetized. However, creators can initiate a review of any demonetized video if they believe it doesn't contain offensive material or content that violates YouTube's policies.

The final measure from Google is an expansion of its counter-radicalization efforts on YouTube. This is arguably the most active and purposeful of the four steps, as it builds on the "redirect method" originally developed by Google Ideas, the think tank now known as Jigsaw. Launched last year, the redirect method uses targeted advertising to steer potential ISIS recruits away from radicalization videos. Ads are placed next to search results for keywords and phrases deemed ISIS-related, and clicking one brings the user to YouTube channels full of videos debunking ISIS teachings. The theory is that potential recruits will be dissuaded from learning about or joining the terrorist organization after watching those videos.
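Jigsaw hasn't published its keyword lists or playlists, but the mechanics amount to keying counter-narrative ads to flagged search terms. A hypothetical sketch of that matching step; the keywords, ad copy, and URL below are all invented:

```python
# Hypothetical sketch of the "redirect method" mechanics. The real keyword
# lists and counter-narrative playlists are curated by Jigsaw and its
# partners; everything below is invented for illustration.

# Search terms judged to signal interest in ISIS propaganda.
RADICALIZATION_KEYWORDS = {"join isis", "hijrah to the caliphate"}

# Curated counter-narrative content (placeholder ad copy and URL).
COUNTER_NARRATIVE_AD = {
    "headline": "Hear from people who lived under ISIS",
    "target_url": "https://youtube.com/playlist?list=EXAMPLE",
}

def ad_for_query(query: str):
    """Return a counter-narrative ad if the search query matches a
    flagged keyword, mirroring how targeted ads are keyed to searches."""
    normalized = query.lower()
    if any(keyword in normalized for keyword in RADICALIZATION_KEYWORDS):
        return COUNTER_NARRATIVE_AD
    return None  # ordinary queries get ordinary ad placements

print(ad_for_query("how to join ISIS"))
```

The design is deliberately indirect: rather than blocking the search, it puts a competing message in front of the searcher at the moment of interest.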

The "redirect method" is a new initiative that's barely one year old, but Google believes it has had a positive effect. While Google can't quantify how many potential ISIS recruits have abandoned their terrorism-related research, the company does claim "potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content."

Internet companies are under immense pressure to do something about hate speech on their platforms. Facebook is taking a similar approach to Google's, pledging to use AI and more human moderators to root out extremist content, while Twitter unabashedly suspends accounts that promote terrorism. Google's new measures are undoubtedly another response to the advertiser exodus earlier this year, in which many companies pulled advertising from YouTube after their ads were found running against hate-filled videos. Until now, YouTube has focused on giving advertisers more tools to control where their ads appear and on more clearly defining what "extremist" and "offensive" content means for creators. But there is no longer any ambiguity about what Google and YouTube will do going forward: they will not only come down hard on offensive content, but also do more to ensure extremist content can't be easily found on YouTube.