The company also said channels that “repeatedly brush up against our hate speech policies” but don’t violate them outright would be removed from YouTube’s advertising program, which allows channel owners to share in the advertising revenue their videos generate.

In addition to tightening its hate speech rules, YouTube announced that it would tweak its recommendation algorithm, the automated software that shows users videos based on their interests and past viewing habits. This algorithm is responsible for more than 70 percent of overall time spent on YouTube and has been a major engine for the platform’s growth. But it has also drawn accusations of leading users down rabbit holes filled with extreme and divisive content in an attempt to keep them watching and drive up the site’s usage numbers.

“If the hate and intolerance and supremacy is a match, then YouTube is lighter fluid,” said Rashad Robinson, president of the civil rights nonprofit Color of Change. “YouTube and other platforms have been quite slow to address the structure they’ve created to incentivize hate.”

In response to the criticism, YouTube announced in January that it would recommend fewer objectionable videos, such as those with conspiracy theories about the Sept. 11, 2001, terrorist attacks and vaccine misinformation, a category it called “borderline content.” The YouTube spokesman said on Tuesday that the algorithm changes had resulted in a 50 percent drop in recommendations to such videos in the United States. He declined to share specific data about which videos YouTube considered “borderline.”

“Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward,” the company’s blog post said.

Other social media companies have faced criticism for allowing white supremacist content. Facebook recently banned a slew of accounts, including those of Paul Joseph Watson, a contributor to Infowars, and Laura Loomer, a far-right activist. Twitter bars violent extremist groups but allows some of their members to maintain personal accounts — for instance, the Ku Klux Klan was barred from Twitter in August, while its former leader David Duke remains on the service.

Twitter is studying whether the removal of content is effective in stemming the tide of radicalization online. A Twitter spokesman declined to comment on the study.