I’ve started and restarted writing this post so many times. I’ve spent a week agonizing over it. It’s so hard to write. There’s so much I want to say, and I don’t know how to say most of it or how to thread it together sensibly or how to draw any useful conclusions.

The hardest part is that I want to give examples — and I have loads in mind — but I don’t want to pick on anyone in particular. It’d be sort of counter-productive, given the subject matter.

I considered abandoning it altogether and just compressing the thoughts into an example-free tweetstorm, which sounded much easier. But it occurred to me I could just do that in vim and call it a blog post, and maybe it would end up less wordy besides. So here goes the somewhat compressed version. Ahem.

Something is very wrong with Internet discourse. It rapidly devolves into bitterness, spite, and hostility, and this is only becoming more frequent. I don’t know where this is leading us or what to do about it. I don’t know dick about sociology, but here are my perceptions anyway.

I’m a programmer, so my inclination is to blame computers for every problem. I think I’m onto something this time, though.

Human beings really, really like to cluster themselves into groups. This often happens naturally, due to physical barriers. We’re segregated into countries and cities (usually by geography); we know our coworkers better than we know people who work elsewhere; we recognize our neighbors. For artificial groups, we create our own barriers — churches for particular religions, clubhouses, other kinds of meeting spaces. Even people of the same political persuasion tend to visit the same restaurants.

Most violent conflict in human history boils down to: two opposing groups tried to occupy the same space.

Now, consider how this maps to the Internet.

We used to have services like LiveJournal, which put a big emphasis on group functionality. You could define a space, make it about a thing, and let people congregate in it. If someone was a dick in another space, who cared? They’re way over there. The separation was even greater with self-hosted forums and the like, which were on different domains with different logins and everything.

Fast forward to today, where two of the largest social platforms are Twitter and Tumblr. They’re both very powerful when used correctly, and have helped connect a lot of people who might never have crossed paths otherwise. But neither one has any real tools for formally defining groups. Tumblr has group blogs, but they’re little more than a way to share an account between a few people, and they’re so rarely used that many built-in themes don’t even support them correctly. Twitter has, what, lists? Neither site even supports comments — any response you write is brought into your space, rather than existing only within the original space.

The only distance between you and any other account among hundreds of millions is a click. Or not even that, if someone you follow retweets or reblogs them. Everyone is immediately adjacent to everyone else.

This has never really happened before, and we suck at dealing with it.

We think of ourselves as reasonable, and act like the way we handle most disputes is rational and sensible — it’s only those other people who are doing it wrong.

We’re completely full of it. Most of what we do serves only to emphasize group boundaries, and hopefully impress our peers so we win social points.

Insults are the easiest, even if they don’t quite sound like insults. Maybe the other party is “crazy” and not worth listening to. Or maybe you’re into social justice and don’t like stigmatizing mental illness, so you’ll use “bigot” instead. Ostensibly a very different meaning, but most of the time it serves exactly the same goal.

We might say the other person has a “personal vendetta” or is a “sheep” or is part of a “hivemind” or otherwise question their motives. Someone who’s “closed minded” is not even worth arguing with, so don’t try.

Many of these words sound like they ought to be serious warnings of some kind, so we feel productive for using them, but they’re basically a first resort for a whole lot of discussion online.

Augh, look at this horrible thing this person said! Let me retweet it so all my followers can see how horrible they are. I’m providing a public service, after all. Be sure to read the rest of their tweets so you get the full impact.

Then we can all bond over how much we hate the same person.

There are some groups infamous for doing this, but I’ve seen it plenty outside of them. People I follow have done it. I’ve done it. Chances are you’ve done it. Ever dot-replied to a well-known person just to make a crack at their expense? Why?

When someone evokes a visceral reaction in us, we have a moment of dissonance. Why do we feel this strongly over a single tweet by a complete stranger?

Ah, there’s only one way to resolve this conflict. This person must truly be a Bad Person, because only a Bad Person would make us feel this way.

So we go hunting for “evidence” of how bad this person truly is. They deserve to have their life sifted through, after all, because they’re a Bad Person. And the more we find, the more of a Bad Person they are, so the more they deserve the ever-deeper digging.

Whenever this happens, there’s a shared sense that something noble is being accomplished, but I don’t know what it’s supposed to be. Perhaps that illusion of warning, of saving others from all the terrible things this person has done that only affect you because you went looking. But most of the time it seems that only the people who were doing the digging in the first place actually care about the results.

But everyone has skeletons in the closet, and it’s the nature of this game that the players want to exaggerate or speculate or outright invent offenses. The more dirt piles up, the more it’s about the size of the pile rather than the substance, and the less it matters whether each new thing is actually true.

It’s particularly bad for people who create and publish things on the Internet, because that’s a great way to leave a trail of stuff behind. The very people who make the Internet worth using are the same people who tend to attract the most vicious cruelty. We’ve grown up accustomed to creators who are millionaire actors on the covers of magazines, and the sludge that adorns every supermarket checkstand has lulled us into thinking that stalking their lives for others’ entertainment is perfectly fine. Now it’s much easier for smaller creators to have a tiny platform of their own, but at the same time it’s easier to rationalize treating them the same way. Everyone’s a public figure.

There’s a tension here, because people who have a lot of influence should be held more responsible for what they say. But I don’t think this is how you do it.

This all sort of culminates in a strange game we play with outsiders. Sometimes it’s brief and mild; sometimes it’s an ongoing and horrific campaign. It’s always played the same way.

First you pick a target, someone you already don’t like for some frivolous reason. You and your peers keep an eye on them, and find a way to “catch” them out on breaking a rule with everything they do. The trick is that the rules don’t have to make any sense, and you don’t have to follow them yourself, as long as the other players buy it.

Maybe you condemn the target for reporting someone for personal reasons, then hunt for a reason to report the target. Or call the target’s silence evidence of their guilt, then later call their refutations evidence of their guilt. Or blame someone else’s actions on the target, then use your own complaint as evidence that the target is controlling and manipulative. Or do your best to upset the target while scoffing at how mean they are to other people. Or you accuse the target of transphobia while referring to them with incorrect pronouns. Or call out the target for being sexist and racist, then assume they’re a white guy when you call them fat and ugly. Or claim the target doxxed someone, then look up their address because they did it first.

The problem isn’t hypocrisy; hypocrisy is a boring crime, one of which everyone is guilty, one which has no interesting bearing on a discussion. The problem is that the game is designed as though there were a fixed set of reasonable rules that the target is willfully breaking, when in fact the rules are constantly changing to adapt to whatever the target does. The only real rule is that the target loses and everyone else gets to high-five over how much they’re winning.

This is the pinnacle of human interaction, as provided by our latest and greatest technology.

Formal groups gave us something else: the ability to truly eject someone from a space, to enforce those barriers we need to distinguish groups. Without them, it’s much more difficult to curate the experience of a group of like-minded people. Now all we can do is set up barriers around ourselves individually.

Even Reddit, which is built entirely around groups, has this problem: anyone can vote or comment in any other group. There was a lot of recent outcry from moderators of popular subreddits, complaining that the moderation tools are woefully inadequate — it’s difficult to curate the groups.

This isn’t to say that group moderation is a panacea. I’ve seen quite a few small communities gradually unravel because the moderators erred on the side of not banning people who were clearly violating the spirit but not the letter of the rules. As jerks gradually leaked in, the people who didn’t want to be around jerks gradually drifted away, and the community was left as nothing more than a sludgy residue.

Unfortunately, that problem has persisted on a much grander scale, because now the only people with the ability to eject jerks are the platforms themselves: Twitter and Tumblr. And they really really don’t want to eject anyone, because they’re in the business of being as big as possible and staying that way, so they can promise investors they’ll somehow make money off of us.

Okay, that’s a bit cynical of me. There’s also some concern about free speech, much as there is in those smaller communities. Permanently removing someone from your space feels like a very extreme thing to do, and you don’t want to be the one to do it.

Yet as we’re waffling over the free speech of some jerk, people who don’t want to be around jerks are leaving anyway. What about their free speech? The effect is still that someone has left permanently. Or never showed up at all. There are several platforms infamous for their hostility, which I don’t want to invite upon myself, so I largely avoid them. So those places are losing out on my ideas, and my pushback against the hostility. Yet those places are some of the proudest about how much they value free speech.

This is a tension that crops up all too frequently and gets discussed all too rarely. Free speech is supposed to be about preserving the opportunity for all ideas to be expressed, but we take it so literally that we wring our hands when it comes to preventing anyone from saying any words in any way. We’re content to accept invisible forces, but get up in arms when visible forces try to compensate.

I can’t entirely blame this on computers. Computers exposed us to this brave new world, but ultimately it comes down to our inability to deal with it.

Negative input weighs on us much more heavily than positive. Maybe that’s to keep us in line when we really do break the rules of the group.

Except… it’s also much easier to act negatively towards a complete stranger than to act positively towards someone we respect.

And… something about our sense of when we ought to act is thrown off by these extremely large numbers. Much of our negative behavior is ignorable or even reasonable on an individual level. But sometimes we wind up in bizarre situations where hundreds or thousands of people are all committing very minor acts of cruelty towards the same person. From that person’s point of view it’s an avalanche, yet every person who contributed feels justified.

Consider, too: the very nature of Twitter and Tumblr makes it very easy to spread eye-catching information as far and fast as possible. Sometimes we spread important or fascinating things… sometimes we spread juicy gossip. And what happens if it turns out to be false? How many of the people who spread it originally are going to spread a retraction, or even become aware of it?

If the information was something negative about a person, how many of them will apologize for contributing to harming that person’s reputation? I’ve never heard this even considered. It’s hard to even comprehend. No one member of an anonymous, ad-hoc mob is expected to take responsibility for what that mob does, even though there would never have been a mob if they hadn’t all participated. This isn’t something we’ve had to think about much until now, when everyone is everyone else’s neighbor and we can all hear what’s being whispered over the fence.

I guess those were the important bits. I’m left feeling that I didn’t quite express what I wanted, but I can only rewrite this so many times.

I don’t know what to say about all this. People are complicated. I do this stuff too, and I keep trying to cut down on it, but these are some tough habits to break.

Maybe keep in mind that people are complicated. If you’re going to pick on something, pick on actions, not people. Picking on actions can set an example for others to follow. Picking on people encourages other people to go “well, at least I’m not that person”.

Meatspace strangers don’t act like this to each other, at least not nearly so often in my experience. Maybe because we chastise our peers when they jump straight to hostility. Maybe we need some more of that here. It’s curiously missing, like we fear being abandoned by all our friends if we tell them that one thing they did wasn’t cool.

That mean-spirited, irrational, destructive, and totally wrong person on the other end is still a human being. Don’t be a dick. Maybe leave them be, go create something amazing, and share it with the world instead.