Twitter announced on Tuesday that it was expanding efforts to protect its users from abuse and harassment, including measures to stop banned users from creating new accounts and a new “safe search” feature.

In July, Twitter banned the conservative provocateur Milo Yiannopoulos, an editor of the rightwing site Breitbart News, for “participating in or inciting targeted abuse of individuals”. Twitter subsequently suspended the accounts of other prominent figureheads of the “alt-right” fringe movement, an amorphous mix of racism, white nationalism, xenophobia and anti-feminism.

Twitter has been under fire for failing to address hate and abuse on the site since its founding a decade ago. Its reputation as a free speech haven has come into conflict with efforts to protect users.

The crackdown is not limited to far-right extremists. In August, Twitter said it had suspended about 360,000 accounts over the previous year for violating its policies banning the promotion of terrorism and violent extremism. But the company said the changes announced on Tuesday were “unrelated to that and focused on abuse and harassment”.

Twitter also said it was creating a “safe search” feature that removed tweets with potentially sensitive content and tweets from blocked and muted accounts from search results. The tweets will still exist on Twitter if people look for them, but they will not appear in general search results.

Twitter is also making some replies less visible so that only the most relevant conversations surface.

Recently, other internet companies have also taken steps to curb abusive behavior and ban users who violate rules against promoting hate. Reddit banned a forum for white nationalists from its social news website last Wednesday. A message at the link for the “r/altright” subreddit attributed its ban to an impermissible “proliferation of personal and confidential information”.

And last week, the crowdfunding website GoFundMe removed a campaign for a conservative author and self-described “researcher” on the internet conspiracy theory known as “pizzagate”, which alleged with no evidence that Democrats were running a child sex ring out of a Washington DC pizza shop. Brittany Pettibone had launched her GoFundMe campaign for a video podcast about “traditional values that once made Western Civilization great”, including “love of one’s own culture, race and country”.

A GoFundMe spokesman, Bobby Whithorne, said in an email that Pettibone’s campaign was removed because it violated the company’s terms of service, which include rules against promoting hate, violence, harassment, discrimination, terrorism or “intolerance of any kind”. Pettibone, who declined to be interviewed, tweeted that GoFundMe had not specified how her campaign violated its terms of service.

Hate speech and promoting violence have long been barred under the terms of service of internet and social media companies such as Twitter and Facebook. But in the months leading up to the contentious presidential election, the emergence of the “alt-right” and high-profile trolling campaigns like one targeting the Ghostbusters star Leslie Jones thrust the issue to the forefront.

In November, for instance, AppNexus announced that it had removed Breitbart News from its online advertising network, saying the news outlet had violated its policy against hate speech.

Jennifer Grygiel, an assistant professor of communications at Syracuse University, said Twitter still relied too heavily on its users to root out and report abusive material.

“I have a simple fix: just hire a lot more humans,” Grygiel said.

Leaders of the Anti-Defamation League and the Southern Poverty Law Center (SPLC) say they frequently communicate with online companies to flag users spreading hate on their sites.

“This is a game that never seems to end,” said the SPLC’s Mark Potok. “It’s a bit of a whack-a-mole thing.”