When I spoke to James over the phone about the process, he described his aggressive behavior as a kind of dissociation: a moment of weakness in which he stopped seeing those on the other end of the thread as real people. “My frustration expressed itself as insult diarrhea with no regard to whether I was being reasonable,” he said. He noted that he had been back in the community for two months, was more conscious of his interactions and had yet to break the rules.

James isn’t convinced the process could work for everyone. He argued that mediation was effective for his specific personality type. “It’s the element of shame,” he said. “I’m somebody who feels guilt being confronted and it allowed me to see I was the one at fault.” Ms. Blackwell and Mr. Loewinger’s mixed results suggest success is far from guaranteed. Online, mediators have to deal with pseudonymous individuals, trolls and pranksters with no desire to reform. Even those dealing in good faith might bristle at having to apologize or confront their victims. Given the nature of online harassment and bullying, the restorative justice approach is full of pitfalls. Forcing targeted minorities or vulnerable users to confront abusers, for one, could increase trauma or put undue burden on victims.

Most daunting is the issue of scale. There’s simply no way to replicate the amount of time and effort involved with Ms. Blackwell and Mr. Loewinger’s experiment across the web. “It’s like trying to moderate a wild river,” an r/Christianity moderator said in the chat logs. “It’s only getting worse, too. I can’t even begin to evaluate all of this stuff.” The ceaseless torrent of posts and comments is why tech platforms are increasingly turning to algorithms and artificial intelligence to solve the problem.

But successful moderation — the kind that not only keeps a community from collapsing under the weight of its own toxicity but also creates a healthy forum — requires a human touch. Even skilled moderators assume a huge psychological burden; many working for Facebook and YouTube are outside contractors, subjected daily to torrents of psychologically traumatizing content and almost always without proper resources. Even in small communities, keeping the peace requires a herculean effort. A recent New Yorker article described the job of two human moderators of a midsize tech-news message board as an act of “relentless patience and good faith.”

This reality makes Ms. Blackwell and Mr. Loewinger’s experiment equal parts compelling and dispiriting. Mr. Loewinger remains optimistic. “It’s easy to write off all people who exhibit jerk-ish behavior online as pathological trolls,” he told me. “Dislodging that assumption might hold the key to a less toxic web. The James case demonstrated to me that people are open to reflecting on what they’ve done, especially when treated with dignity.” Ms. Blackwell argued that having reformed users back in the community actually makes the forums healthier. “We will never effectively reduce online harassment unless we address the underlying motivations for participating in abusive behavior, and having reformed violators go on to model prosocial norms is an incredible bonus,” she said.

But if reform depends on shame and dignity, two qualities in short supply on the internet, it’s hard not to feel that all is already lost. Still, the pair’s earnestness is refreshing. And at its core there’s a lesson: If fast, scalable algorithmic solutions gave us the broken system we’ve got, it’s stripped-down patience and humanity that have the best chance of pulling us out.
