I’m not much of a video gamer (I spend my free time writing!), but I’ve been following the work of Anita Sarkeesian, a media critic whose non-profit group Feminist Frequency critiques sexism in popular culture, with a special focus on video games. Her series Tropes vs. Women in Video Games points out lazy, sexist clichés that game designers resort to repeatedly: damsels in distress, women in refrigerators, scenes of sexual violence just to make the game world darker and “grittier”, and so on.

But what’s astonishing is the onslaught of hatred and violent threats that’s been directed against Sarkeesian by knuckle-dragging misogynists in response to what is, honestly, a fairly mild critique. When she was raising money for her series, one angry sexist made a video game where players could (virtually) punch an image of her face. Most recently, she’s received death threats so violent and specific, she had to temporarily leave her own home.

Whether they know it or not, the trolls and misogynists who engage in these tactics have already lost the debate. By resorting to crude harassment and thuggish threats, just as religious fundamentalists do, they’re admitting that they have no valid argument – that they can’t rationally defend their regressive beliefs. Like fanatics of all kinds, all they can do is raise the cost of criticizing them by launching a barrage of verbal and emotional abuse at anyone who points out the ways they’re wrong, trying to silence their critics through fear rather than reason. They make up for their intellectual vapidity with sheer volume, using the same harassing tactics on platform after platform.

Jezebel, a feminist news and opinion site that’s part of Gawker Media, was besieged by persistent trolls using anonymous accounts to post violent pornography in its comment sections. Despite repeated complaints from Jezebel’s writers (who also have to moderate comments on their posts), Gawker’s management ignored the problem until the writers united to pen a joint open letter of protest, which finally shamed them into introducing some fixes.

Twitter is another site with a chronic harassment problem, exacerbated by its open design that allows anyone to tweet at anyone else without restriction. This means that most prominent Twitter users, especially women, face a constant trickle (sometimes rising to a flood) of rape threats, violent misogyny and other abuse from no-name trolls. The only tools Twitter has offered are the ability to block and report harassing accounts, but both options are utterly useless, since harassers can create new accounts and be back within minutes. The single-minded obsession of the worst actors is well over the line into sociopathy: for example, the feminist writer Imani Gandy has written about one hateful individual who has been creating multiple throwaway accounts every day for two years just to hurl racist and misogynist abuse at her.

Given its history of welcoming some of the worst people on the internet, it’s no surprise that Reddit, too, is a hive for this problem, as in the recent coordinated troll attacks on a subreddit for black women’s issues and another for rape survivors. Again, Reddit’s design aids and abets the bad actors, as moderators have no option but to individually ban accounts posting this garbage, which does nothing to stop the same individuals from creating new accounts and coming right back to resume their harassment.

I suggest that this should be dubbed the Mabus Problem, after a particularly infamous and persistent user of this tactic who waged a one-man campaign of harassment and death threats against prominent atheists and skeptics for years, using literally thousands of throwaway e-mail, bulletin board and social media accounts. He was finally shut down after a massive protest campaign prodded the police to arrest him, but there’s no way this solution can scale to encompass the volume of anonymous harassment that happens every day on the internet. (As many targets of harassment can attest, getting the police to act is a Sisyphean struggle, even in cases of clearly criminal behavior.)

What’s so frustrating is that, at least from a technological standpoint, this is a fixable problem. Trolling is really just another form of spam, and I can’t remember the last time, for example, that a spam e-mail leaked into my Gmail inbox. There’s absolutely no reason, other than inertia and indifference to the problem, that social media sites can’t put similar safeguards in place. Of course, most technology companies are owned and staffed by white men who simply don’t experience online harassment to anywhere near the degree that women and minorities routinely do, so they have little incentive to care about it. Often, they’re oblivious even to the existence of the problem.

Stepping into the gap where social media companies have failed to act, there are third-party projects like Block Bot and Block Together, which allow blacklists to be shared by many users, but this is at best a partial solution. A truly comprehensive solution would require changes by the social media companies themselves. I can think of several simple and effective countermeasures against trollish harassment:

Encourage user accounts to accumulate history. Sites like Twitter, for example, could have a “filter out all tweets from accounts less than X days old” setting. Of course, trolls could still create and “age” throwaway accounts, but it would deprive them of immediate satisfaction, which I’m willing to bet would be a major disincentive to the typical immature internet misogynist. Alternatively, there could be a built-in waiting period, from several hours to several days, before a newly created account could be used. Again, the idea is to deprive trolls of the immediate gratification of getting to resume their harassment instantly when they’re blocked.
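As a rough sketch of what such a filter could look like (in Python, with a made-up post format and a seven-day threshold chosen purely for illustration):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical "minimum account age" setting; the field names and the
# seven-day threshold are illustrative, not any real platform's API.
MIN_ACCOUNT_AGE = timedelta(days=7)

def is_visible(post, now=None):
    """Hide posts whose author's account is younger than the threshold."""
    now = now or datetime.now(timezone.utc)
    return now - post["author_created_at"] >= MIN_ACCOUNT_AGE

now = datetime.now(timezone.utc)
posts = [
    {"text": "hello", "author_created_at": now - timedelta(days=30)},
    {"text": "abuse", "author_created_at": now - timedelta(hours=2)},
]
visible = [p["text"] for p in posts if is_visible(p)]
# Only the post from the 30-day-old account survives the filter.
```

The point isn’t the exact threshold; it’s that a two-hour-old throwaway account simply never reaches its target.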

Block by IP address. A solution so simple it’s incredible more social media sites haven’t implemented it: allow a user to block not only a given account, but any other account posting from the same IP address. Since these addresses are relatively static, it would become much harder to create usable throwaway accounts for harassment.
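Here’s the idea in miniature, with entirely hypothetical accounts and addresses:

```python
# Sketch of IP-based block expansion: every account sharing an address
# with an already-blocked account gets blocked too. All data is invented.
account_ips = {
    "troll_account_1": "203.0.113.7",
    "troll_account_2": "203.0.113.7",   # throwaway from the same address
    "regular_user":    "198.51.100.42",
}

def expand_blocks_by_ip(blocked, ip_map):
    """Return the block list extended to all accounts sharing an IP with it."""
    blocked_ips = {ip_map[a] for a in blocked if a in ip_map}
    return blocked | {a for a, ip in ip_map.items() if ip in blocked_ips}

blocked = expand_blocks_by_ip({"troll_account_1"}, account_ips)
# "troll_account_2" is swept up automatically; "regular_user" is unaffected.
```

A real implementation would live server-side, of course, since users never see other users’ IP addresses.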

Banned-word lists for auto-blocking. Rather than manual, reactive blocking, which requires targets of harassment and abuse to absorb it all, why not make proactive blocking possible? Allow a user to define a list of banned words; anyone who directs a post at you using one of those words is automatically blocked. It isn’t difficult to imagine what sorts of words and phrases would go on these lists.
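A bare-bones version of such a filter might look like this (with placeholder words standing in for the slurs and threats a real list would contain):

```python
import re

# Sketch of user-defined proactive blocking; "badword1"/"badword2" are
# placeholders for whatever terms a user never wants to see.
banned_words = {"badword1", "badword2"}

def should_autoblock(message, banned):
    """True if the message contains any banned word (sender gets auto-blocked)."""
    tokens = re.findall(r"[a-z0-9']+", message.lower())
    return any(token in banned for token in tokens)

print(should_autoblock("hey badword1, log off", banned_words))  # True
print(should_autoblock("have a nice day", banned_words))        # False
```

Crude keyword matching like this is easy to evade with creative misspellings, which is exactly why the statistical approach below is the stronger option.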

Finally, the most effective solution would be true Bayesian filtering. This is the approach Gmail’s spam filter takes: algorithms that “learn” what’s bad by analyzing messages users have flagged (here, comments flagged as offensive) and building up a statistical model of the terms that appear most often in them. If harassers shift their tactics, the filter shifts with them.
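To make the idea concrete, here’s a toy naive Bayes filter along those lines. The training data is obviously invented, and a real system would train on millions of flagged comments rather than four sentences:

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class BayesFilter:
    """Minimal naive Bayes classifier in the spirit of a spam filter,
    trained on comments flagged as abusive vs. ones left alone."""

    def __init__(self):
        self.counts = {"abusive": Counter(), "ok": Counter()}
        self.totals = {"abusive": 0, "ok": 0}

    def train(self, text, label):
        for word in tokenize(text):
            self.counts[label][word] += 1
            self.totals[label] += 1

    def score(self, text):
        """Log-odds that the text is abusive; positive means likely abusive."""
        log_odds = 0.0
        for word in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the estimate.
            p_abusive = (self.counts["abusive"][word] + 1) / (self.totals["abusive"] + 2)
            p_ok = (self.counts["ok"][word] + 1) / (self.totals["ok"] + 2)
            log_odds += math.log(p_abusive / p_ok)
        return log_odds

f = BayesFilter()
f.train("you should die you worthless troll", "abusive")
f.train("die in a fire", "abusive")
f.train("great post thanks for writing", "ok")
f.train("thanks for the thoughtful article", "ok")

print(f.score("you worthless troll"))   # positive: likely abusive
print(f.score("thanks for the post"))   # negative: likely fine
```

The appeal is that nobody has to hand-curate a word list: the model retrains itself as users keep flagging, which is what lets it follow harassers when they change vocabulary.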

If social media companies truly made it a priority to stop abuse and harassment of their users, they could put any of these measures in place in short order. What’s keeping them from doing it?