I want to talk about software design. Specifically, I want to talk about how to design your products to resist the effects of evil.

I need to open this entry with a trigger warning. It isn’t possible to talk about defending against harassment without being exposed to it.

That said, here we go.

I strongly believe that I have a duty to try to prevent harm from coming to those who choose to use the things I design. This means that I need to think about the bad parts of the system, which often isn’t very pleasant.

I want to talk about Anita Sarkeesian and the horrible things that have been happening to her over the past few years, but first I feel like I need to establish some street cred.

Back in 2011, several employees of the Wikimedia Foundation were put up on the site’s yearly fundraising banners. I was one of them, and a very successful banner candidate at that. I’ve written about this experience before, but I wasn’t very expansive about its darker side.

Whenever my banner went up for a test run, I could literally feel the internet turn its attention to me like the fucking Eye of Sauron. Hundreds of tweets, LinkedIn views, Facebook posts. Pow, pow, pow. Lots of it was fun and exciting. Some of it was . . . not.

It’s a bit of a bummer to be told by random strangers that you look like a pedophile. Especially when they don’t know anything about you.

Back to Anita.

I don’t want to write about Gamergate or the state of the art of misogyny on the internet, but I need to provide some context.

Anita Sarkeesian is a feminist game critic. She produces a series of educational videos about how sexism pervades the game industry. She does not, in any way, call for censorship or banning of topics or anything like that. She really only says, “just be aware of what’s happening here and maybe try to do better.”

For these statements, she has been continually bombarded with harassment through every possible means available to trolls on the internet.

In early 2015, she posted a blog entry detailing a single week’s worth of harassment. Scrolling through it reveals a seemingly inexhaustible stream of sewage and hatred. Some of it is ironically self-aware.

Let’s scroll through a minuscule amount of Anita’s harassment.

I dare you to click over and scroll through the full list. See if you can get through Monday.

Anita gets thousands of times more hatred than I ever did. I almost buckled under the weight of the sewage directed at me. I can’t imagine how strong she must be to keep going.

I’ve not been very scientific in my investigations, but it appears that only about half of these accounts have been suspended or blocked. Not that such action matters much: these shit-goblins simply create a new anonymous account and let the good times roll again.

This is the face of evil. Beelzebub with the thousand eyes and mouths.

It’s a true failure on Twitter’s part. One they have acknowledged in public but (at the time of this writing) have done nothing to address.

When you design a product without understanding how it will be used for evil, you are designing poorly.

On Trolls

Let’s take a moment to understand the basic mindset of internet trolls. As near as I can tell, there are three primary motivations, and any one troll will be acting on one of them at any given time.

Understanding these things will help you defend your users against them.

To Defeat the System

These trolls want to break the system just to break it, for the lulz or for the thrill of it. The desire to defeat systems (hacking or cracking them) is a deep part of the hacker psyche. They aren’t necessarily motivated by evil, but they often open the door for others who are.

These people will find holes in your systems, purely for the satisfaction of finding them. But once they’ve found them, they nearly always share those holes with others.

To Subvert the System

Trolls who subvert a system intend to use it against the spirit of the system. This is often for laughs but sometimes it has very, very dark results.

In 2009, Christopher Poole, the founder of 4chan, was voted the world’s most influential person in Time magazine’s online poll, beating out Barack Obama, because 4chan’s users figured out how to game the voting software. This year’s Hugo Awards have been hijacked because someone figured out how to bend the rules in their favor. No big deal, right? No one is getting hurt, right?

Some horrible people use Secret to disseminate revenge porn and child pornography. Secret’s not a great way to do bulk distribution, though. Embedding zip archives of this stuff into SVG files and uploading them to a site like Flickr or the Wikimedia Commons may be. So might large attachments in unsent emails on any one of a thousand free-to-use web mailers.

To Weaponize the System

This is when your system or design is being used against you or another person in a hostile, damaging manner. This nearly always happens because of “Not Thinking It Through”.

This may not always happen directly in your product, mind. Data leakage may lead to someone being doxxed on another site, which may then lead to a swatting. Or worse.

Consider the proud young parent posting photos of their child at play to Facebook with open privacy settings. Is there anything in those photos that would let a predator identify the location?

Mitigation Strategies

How can you prevent your product or design or system from being abused? How can you deal with it?

Well, there’s no silver bullet here. There are several strategies you can employ, though. Many won’t apply to your product, and you will probably need to use more than one, each at a differing degree of strength or opacity.

Some of these strategies suck, but I’ll include them for completeness’ sake.

Ignore Everything

Just do jack shit about it.

This is the worst strategy. You can do it, and some companies appear to remain successful while doing so. This is the way car companies handle recalls: only deal with the problem when there’s sufficient blood on the pavement to affect the bottom line.

I personally find this to be odious and unethical.

Shut it Down

Just prevent anyone from doing it at all. This typically means shutting down your application entirely. It’s often a last-resort solution.

PostSecret had a short-lived application that allowed users to post their own photos and captions. It was pulled when people started posting porn and gore, because there were no features to limit this and there was insufficient moderation to work at scale.

This is not a good mitigation strategy because everyone loses.

Troll Personas

This is a strategy for understanding your weaknesses. Many design teams create personas for the users they want to serve, the customers they want to have. Good personas are often an excellent tool for helping to understand the business needs of your product or market. These personas are almost universally nice, however, and they always assume good faith.

I say to you thus: you must always make at least one “troll” persona. You must learn to think like your enemy. Think about their motivations and how they will subvert your product to serve them. What, say, does a bored teenager with a grudge and a dozen throwaway accounts do with your newest feature?

Limit Feature Strength

This means reducing or intentionally crippling your product’s features in order to protect your users.

Years ago I worked on what was intended to be a social and games site for children. The client wanted a chat system and, obviously, we needed to make sure that foul language wasn’t a part of it.

It would be easy to write a series of regular expressions so that the chat catches and censors Carlin’s magic seven and all their variations. It’s not so easy to catch “Hello, little girl, what time do you get out of school?” or “I am going to put you in a wood chipper.”
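Here’s a minimal sketch of that naive filter. The word list and helper are mine, standing in for the real thing:

```python
import re

# Placeholder list; the real one covered Carlin's magic seven plus
# a long tail of misspellings and l33t-speak variants.
BANNED = ["darn", "heck"]

# \W* between letters also catches spaced-out evasions like "h e c k".
PATTERNS = [
    re.compile(r"\b" + r"\W*".join(word), re.IGNORECASE)
    for word in BANNED
]

def censor(message: str) -> str:
    """Replace any banned word with asterisks of the same length."""
    for pattern in PATTERNS:
        message = pattern.sub(lambda m: "*" * len(m.group()), message)
    return message

print(censor("Oh, heck!"))  # -> "Oh, ****!"
# The genuinely dangerous message sails through untouched:
print(censor("Hello, little girl, what time do you get out of school?"))
```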

This is why Nintendo’s chat systems only allow you to pick from canned statements.

Banning Wrongdoers

Very simple: have a very strong code of conduct and brook exactly zero violations. You must be merciless. You must not allow rules-lawyering. Identify bad actors and get rid of them.

Wikipedia has some editors who are simply horrible, toxic individuals. The way they conduct themselves and talk to newcomers drives those new users away forever. They are allowed to remain because there is always some bullshit reason why the latest round of bad behavior is “okay”.

This is the type of behavior that creates gender gaps.

Educate Users

You can educate users about the bad things that could potentially happen and the steps they can take to reduce their risk.

The biggest problem here is that no one wants to read a bunch of snooze-fest documentation. I didn’t join Facebook to have to take a class about it. Sometimes you can put up interstitial dialogs (like an end-user license agreement), but are you ever really sure that the user understands them?

Does the proud parent really understand that the photo of their daughter’s recital they just uploaded is geo-tagged? Did they think about the fact that they took it at the school? Do they really understand what “Friends of friends can see this” means?
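One way to make this education land is to deliver it at the exact moment of risk. Here’s a sketch of that idea, assuming Pillow for EXIF reading; the upload flow and the warning’s wording are invented:

```python
from PIL import Image

GPSINFO_TAG = 0x8825  # standard EXIF tag pointing at the GPS data

def has_geotag(path: str) -> bool:
    """True if the image carries embedded GPS coordinates."""
    return GPSINFO_TAG in Image.open(path).getexif()

def upload_photo(path: str) -> None:
    if has_geotag(path):
        # Interrupt with a concrete, plain-language warning instead
        # of burying the risk in documentation nobody reads.
        print("This photo records where it was taken. "
              "Strip the location data before posting?")
    # ... hand the file off to the real upload here ...
```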

Deny Anonymity

Simply prevent people from posting to or using the service completely anonymously. Allowing pseudonymity is fine and even great (and recommended). Just make sure that there is a way to tie any activity back to a specific user.
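In data-model terms, this just means every public action carries a private, stable identity underneath the pseudonym. A minimal sketch (all of these names are mine, not any particular product’s):

```python
from dataclasses import dataclass

@dataclass
class Account:
    account_id: int      # stable internal identity, never shown publicly
    verified_email: str  # private: the thread back to an actual person
    display_name: str    # the pseudonym everyone else sees

@dataclass
class Post:
    post_id: int
    account_id: int      # every action stays attributable internally
    body: str

# Readers only ever see display_name. When abuse reports come in,
# moderators can walk account_id back to verified_email.
```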

Purely anonymous culture is fairly toxic, so you don’t want that anywhere near you. There’s a reason moot stepped down from running 4chan. But you don’t want to force “real names”, either, because that will probably open you up to other harms (like dead-naming transgender people).

Access Control Systems

Give users control over who can contact them and how. This nearly always requires both whitelists and blacklists working alongside a sensible default setting.
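The core check itself is tiny; all the hard work lives in the UI that manages the lists. A sketch, with invented names:

```python
from enum import Enum

class Default(Enum):
    OPEN = "open"      # anyone not explicitly blocked may contact me
    CLOSED = "closed"  # only people explicitly allowed may contact me

def may_contact(sender: str, allowed: set[str], blocked: set[str],
                default: Default) -> bool:
    """Explicit lists always win; the default decides everyone else."""
    if sender in blocked:
        return False
    if sender in allowed:
        return True
    return default is Default.OPEN
```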

LiveJournal does this very well: my private posts are only readable by those I’ve marked as “friends”, and I can even write elaborate rules for posting only to groups, or to specific people.

Facebook has this kind of fine-grained control, too, but it falls apart very quickly. There are too many options and degrees of visibility, and the lack of any serious group support makes managing access difficult.

It should be terribly easy to add someone to a block list. Press-and-hold on a tweet and I can block its author in one tap. Blocking someone on Secret, however, requires me to first read the offending secret (which usually contains a photo of gore or revenge porn), then report it, and only then block the user.

Shadow Reputation Systems

This is a great method but it requires a lot of research and technology. You’ll need to instrument everything in your product and identify several patterns of behavior used by your bad actors.

When your system sees someone engaging in these behaviors, you silently and secretly drop them into a penalty bucket. This is called shadow-banning or hell-banning.

For example, say your product is one that allows your users to rent out extra rooms in their apartments for short-term stays. If a new user joins your site and their first several actions are to browse exclusively female profiles, you might be able to determine that they aren’t really there for the rooms but instead to creep on women. The system could then silently prevent any messages they send from arriving at their targets, and they themselves may never appear in search results.

In order for shadow-bans to work, you cannot allow anonymous access to your site. Everything must sit behind a log-in wall. If a banned user could log out and see that their comments are invisible to everyone else, they would know that they’ve been shadow-banned.
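Here’s a crude sketch of the room-rental example above. Every name, field, and threshold is invented for illustration; a real system would score many signals, not just one:

```python
CREEP_THRESHOLD = 10  # first N profile views, all aimed at one gender

def looks_like_a_creep(profile_views: list[dict]) -> bool:
    """Flag accounts whose earliest browsing targets women
    exclusively, rather than rooms."""
    first = profile_views[:CREEP_THRESHOLD]
    return (len(first) == CREEP_THRESHOLD
            and all(v["host_gender"] == "female" for v in first))

def send_message(sender_id: str, recipient_inbox: list, body: str,
                 shadow_banned: set[str]) -> None:
    if sender_id in shadow_banned:
        # Accept the message so everything looks normal to the sender,
        # but quietly never deliver it.
        return
    recipient_inbox.append((sender_id, body))
```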

Ask Questions

When all is said and done, when you’ve set your ideas to paper, you have to sit down and ask yourself a very specific question:

How could this feature be exploited to harm someone?

Now, replace the word “could” with the word “will.”

How will this feature be exploited to harm someone?

You have to ask that question. You have to be unflinching about the answers, too.

Because if you don’t, someone else will.