In an interview, two of the researchers who led the study told me that although they had only examined Reddit, their findings might be applicable to social networks like Facebook and Twitter, which tend to enforce their rules against individuals, rather than groups. They also tend to issue bans in a defensive, case-by-case manner, often in response to user-generated reports of bad behavior.

But the results of the study suggest that proactively shutting down nodes where hateful activity is concentrated may be more effective.

“Banning places where people congregate to engage in certain behaviors makes it harder for them to do so,” said Eshwar Chandrasekharan, a doctoral student at Georgia Tech and the study’s lead author.

Eric Gilbert, an associate professor at the University of Michigan and one of the researchers involved in the study, said that Reddit’s approach worked because it had a clear set of targets. “They didn’t ban people,” he said. “They didn’t ban words. They banned the spaces where those words were likely to be written down.”

This is, of course, a small case study — two Reddit forums out of millions of online spaces where antisocial behavior occurs — and methods for quantifying hate speech are still imperfect. (For example, if one user chastised another for using a racist slur and repeated the slur in the process, the study’s approach would have flagged that exchange as hate speech.) The study also did not account for users who left Reddit altogether, some of whom may have continued to use hate speech elsewhere online.

Other online platforms have had success with a broad-based approach to moderating hate speech. Discord, a private chat app, banned several large right-wing political chat rooms this year after some of the speech there turned hateful and violent. The bans did not entirely end hate speech on Discord, but they broke up these communities and made it harder for trolls to find and talk with one another.

There is no guarantee that a similar approach would work on a larger network. And there are risks to employing aggressive moderation tactics. Some platforms, such as YouTube, have been criticized when their hate speech filters wrongly targeted videos posted by lesbian, gay, bisexual and transgender creators. When Twitter banned a number of alt-right activists en masse last year, it prompted a right-wing backlash. And Facebook’s security chief, who said last month that the social network shuts down more than a million accounts every day, has also said that policing hate speech more aggressively would increase the number of “false positives,” or posts wrongly flagged as offensive.