Several state legislators have recently proposed cyberbullying measures. At the federal level, Representative Linda Sánchez, a Democrat from California, has introduced the Megan Meier Cyberbullying Prevention Act, which would make it a federal crime to send any communications with intent to cause “substantial emotional distress.” In June, Lori Drew pleaded not guilty to charges that she violated federal fraud laws by creating a false identity “to torment, harass, humiliate and embarrass” another user, and by violating MySpace’s terms of service. But hardly anyone bothers to read terms of service, and millions create false identities. “While Drew’s conduct is immoral, it is a very big stretch to call it illegal,” wrote the online-privacy expert Prof. Daniel J. Solove on the blog Concurring Opinions.

Many trolling practices, like prank-calling the Hendersons and intimidating Kathy Sierra, violate existing laws against harassment and threats. The difficulty is tracking down the perpetrators. In order to prosecute, investigators must subpoena sites and Internet service providers to learn the original author’s IP address, and from there, his legal identity. Local police departments generally don’t have the means to follow this digital trail, and federal investigators have their hands full with spam, terrorism, fraud and child pornography. But even if we had the resources to aggressively prosecute trolls, would we want to? Are we ready for an Internet where law enforcement keeps watch over every vituperative blog and backbiting comments section, ready to spring at the first hint of violence? Probably not. All vigorous debates shade into trolling at the perimeter; it is next to impossible to excise the trolling without snuffing out the debate.

If we can’t prosecute the trolling out of online anonymity, might there be some way to mitigate it with technology? One solution that has proved effective is “disemvoweling”: having message-board administrators remove the vowels from trollish comments, which gives trolls the visibility they crave while muddying their message. A broader answer is persistent pseudonymity, a system of nicknames that stay the same across multiple sites. This could reduce anonymity’s excesses while preserving its benefits for whistle-blowers and overseas dissenters. Ultimately, as Fortuny suggests, trolling will stop only when its audience stops taking trolls seriously. “People know to be deeply skeptical of what they read on the front of a supermarket tabloid,” says Dan Gillmor, who directs the Center for Citizen Media. “It should be even more so with anonymous comments. They shouldn’t start off with a credibility rating of, say, 0. It should be more like negative-30.”
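(For the curious: disemvoweling is simple enough to sketch in a few lines. This is a minimal illustration, not any particular forum's implementation, which may handle accented characters or markup differently.)

```python
import re

def disemvowel(comment: str) -> str:
    """Strip the vowels from a trollish comment: the post stays
    visible, but its message is muddied and easy to skim past."""
    return re.sub(r"[aeiouAEIOU]", "", comment)

print(disemvowel("You are all sheep!"))  # -> "Y r ll shp!"
```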

Of course, none of these methods will be fail-safe as long as individuals like Fortuny construe human welfare the way they do. As we discussed the epilepsy hack, I asked Fortuny whether a person is obliged to give food to a starving stranger. No, Fortuny argued; no one is entitled to our sympathy or empathy. We can choose to give or withhold them as we see fit. “I can’t push you into the fire,” he explained, “but I can look at you while you’re burning in the fire and not be required to help.” Weeks later, after talking to his friend Zach, Fortuny began considering the deeper emotional forces that drove him to troll. The theory of the green hair, he said, “allows me to find people who do stupid things and turn them around. Zach asked if I thought I could turn my parents around. I almost broke down. The idea of them learning from their mistakes and becoming people that I could actually be proud of . . . it was overwhelming.” He continued: “It’s not that I do this because I hate them. I do this because I’m trying to save them.”

Weeks before my visit with Fortuny, I had lunch with “moot,” the young man who founded 4chan. After running the site under his pseudonym for five years, he recently revealed his legal name to be Christopher Poole. At lunch, Poole was quick to distance himself from the excesses of /b/. “Ultimately the power lies in the community to dictate its own standards,” he said. “All we do is provide a general framework.” He was optimistic about Robot9000, a new 4chan board with a combination of human and machine moderation. Users who make “unoriginal” or “low content” posts are banned from Robot9000 for periods that lengthen with each offense.
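(The article doesn't spell out how Robot9000 decides a post is "unoriginal," so the following is only a toy sketch of the idea: every post is checked against everything posted before, and repeat offenders sit out bans that grow with each offense. The hash-based duplicate check and the doubling ban schedule are assumptions for illustration.)

```python
import hashlib
from datetime import timedelta

class Robot9000Sketch:
    """Toy model of originality-enforcing moderation. Hypothetical
    parameters: the real system's rules are not described in detail."""

    def __init__(self):
        self.seen = set()    # fingerprints of every post so far
        self.offenses = {}   # user -> count of unoriginal posts

    def submit(self, user: str, post: str):
        # Fingerprint the normalized post text to detect repeats.
        digest = hashlib.sha256(post.strip().lower().encode()).hexdigest()
        if digest in self.seen:
            self.offenses[user] = self.offenses.get(user, 0) + 1
            # Ban length doubles with each offense: 2, 4, 8 ... minutes.
            return timedelta(minutes=2 ** self.offenses[user])
        self.seen.add(digest)
        return None  # original post: accepted

board = Robot9000Sketch()
print(board.submit("anon1", "lurk moar"))  # None -- accepted
print(board.submit("anon2", "lurk moar"))  # 0:02:00 -- banned for repeating
```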

The posts on Robot9000 one morning were indeed far more substantive than those on /b/. With the cyborg moderation system silencing the trolls, 4chan had begun to display signs of linearity, coherence, a sense of collective enterprise. It was, in other words, robust. The anonymous hordes swapped lists of albums and novels; some had pretty good taste. Somebody tried to start a chess game: “I’ll start, e2 to e4,” which quickly devolved into riffage with moves like “Return to Sender,” “From Here to Infinity,” “Death to America” and a predictably indecent checkmate maneuver.

Shortly after 8 a.m., someone asked this:

“What makes a bad person? Or a good person? How do you know if you’re a bad person?”

Which prompted this:

“A good person is someone who follows the rules. A bad person is someone who doesn’t.”