It’s difficult to assess how effective YouTube’s policies will be, as the company didn’t specify how it plans to identify the offending videos, enforce the new rules, or punish offenders. As the Crowder incident highlighted, YouTube has been inconsistent in enforcing its existing community guidelines.

“The devil is in the enforcement—well known white supremacists and hateful content creators remain on the platform even after this policy announcement,” said Henry Fernandez, senior fellow at the Center for American Progress and member of Change the Terms, a coalition of civil rights groups, in a statement. “In order to end hateful activities on their platform, we urge YouTube to also develop adequate means to monitor and enforce these new important terms.”

Rebecca Lewis, an online extremism researcher at Data & Society who has written extensively about YouTube, is skeptical. “It is extremely difficult not to see the new YouTube policies in part as a way to change a negative PR narrative after refusing to address the harassment faced by [the Vox journalist],” said Lewis on Twitter. “The platforms have become very good at issuing PR statements about proposed changes that don't ultimately have much effect.”


As of Wednesday afternoon, white nationalists James Allsup and Jared George, who runs a channel called "The Golden One," said YouTube had prevented ads from appearing near their videos, but not banned them. The YouTube channels of David Duke, Richard Spencer, Lauren Southern, and many other white supremacist figures remain on the site.

YouTube did not respond to multiple requests for comment.

The ban will reportedly affect a broad swath of some of the most popular conspiracy and bigoted content posted to the site, which has long been a source of controversy for YouTube. Videos claiming that Jews secretly control the world—which are common on the site, and make up the backbone of numerous virulent conspiracy theories such as QAnon—will be removed, a YouTube spokesperson told The New York Times. The same goes for those that claim women are intellectually inferior to men—a popular claim among misogyny-driven groups like the incel community or MGTOW—and videos that espouse white supremacy.

Many of the groups affected by YouTube’s announcement gained traction online in part from the platform’s recommendation algorithm, which critics say plunged users deeper into extremist rabbit holes by serving up an increasingly polarizing stream of fringe content. An analysis of more than 60 popular far-right YouTubers conducted by Lewis, the Data & Society researcher, last fall concluded that the platform was “built to incentivize” the growth of polarizing political influencers like those whose videos will likely be affected by this change.

“YouTube monetizes influence for everyone, regardless of how harmful their belief systems are,” Lewis wrote in the report. “The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online—and in many cases, to generate advertising revenue—as long as it does not explicitly include slurs. YouTube also profits directly from features like Super Chat”—a feature that allows users to pay to pin a comment to live streams—“which often incentivizes ‘shocking’ content.”

Notably, YouTube says its efforts to stem the spread of hate speech will go beyond increased moderation. The company says it will expand a system it tested in January that limits recommendations for what it calls “borderline content,” which doesn’t violate its community guidelines but has been determined to be harmful.

YouTube says it will also begin promoting and recommending “authoritative” content from trusted sources like news outlets and other experts to users who interact with potentially problematic content. “For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the ‘watch next’ panel,” YouTube said.

The company also noted that channels that repeatedly brush up against YouTube’s new hate speech policies won’t be able to run ads or use other monetization features like Super Chat.

Though the new rules are technically effective immediately, YouTube says that enforcement might be delayed as it adjusts its moderation efforts. The service said it will “be gradually expanding coverage over the next several months.”

“Context matters,” YouTube noted in a blog post on the announcement, “so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events.”
