Under intense criticism for their delayed reaction to disinformation and hateful content after the 2016 presidential election, technology companies have started to take a more proactive approach to both problems. Still, spokeswomen for Google and Facebook said the companies report white supremacist content to law enforcement only when it poses an imminent threat to life, or when they are complying with valid legal requests.

In May, Facebook evicted seven of its most controversial users, including Alex Jones, the conspiracy theorist and founder of Infowars, and Laura Loomer, a far-right activist.

But critics say that is not enough. “White supremacy, at least at Facebook, was seen as a political ideology that one could hold,” said Jessie Daniels, a sociology professor at the City University of New York and the author of a forthcoming book on white supremacy. “It’s only recently that they’ve said they recognize white supremacy as an ideology of violence.”

Ms. Daniels said there was an important lesson in what happened to Milo Yiannopoulos, the right-wing provocateur, after he was banned from using Twitter and Facebook.

“Milo has receded from view since that happened. I really think that’s an argument in favor of this strategy,” she said. “He lost a book deal. He’s bankrupt. It showed ‘deplatforming’ is a useful tool and we need to find more ways to adopt it in the U.S.”

Ms. Daniels and others say the companies’ own algorithms for identifying far-right extremist content are insufficient to tackle the threat. The companies often rely on reports from users, and in many cases on names and content that surface in the media, to decide which posts and accounts should be taken down.

“We’ve reached this position where these companies have scaled beyond their capacity for safety. We don’t know what the next steps should be,” Ms. Daniels said. “We’re in a pretty significant bind now that we have a very large tech industry we all depend on, and we don’t feel we can trust them to keep us safe.”