There are a lot of reasons to hate Facebook.

They collect data on you, and sell it. They do it even when you’re not logged in to Facebook. They do it even if you don’t have a Facebook account. They manipulate their users with political bias. Their advertising algorithms promote virtual redlining, which is about the clearest possible example of “systemic racism” I can think of, given that it’s a system that requires no actual racist individuals to perpetuate it. Their users are more vulnerable to identity theft. They lie to their advertisers, allegedly. They lie to Congress, allegedly. The Russia thing is popular these days.

That’s a lot of reasons. One might argue that many of those aren’t intentional, they’re just a function of the impossibility of managing and operating a platform as big as theirs. But you must admit, that’s way more reasons than MySpace had when it tanked.

But even all those reasons, in aggregate, are barely a scratch compared to the magnitude of the next one. To understand it, we need to spend a little time parsing ubernerd horror fiction.

Grokking the Scissor

On Halloween 2018, a thoroughly fictional, anonymous, and highly distressed reader sent Scott Alexander (of Slate Star Codex) a whale of a tale to put on his blog. (Scott wrote the story himself, of course.) The narrator is a software developer at an advertising software company. His firm had cooked up the idea of using deep learning algorithms to discover the most controversial statement possible.

You should really go read this thing and come back. It’s fabulous. But if you’re time constrained, I’ll give you the gist in quotes.

[…] We trained a network to predict upvotes of Reddit posts based on their titles. Any predictive network doubles as a generative network. If you teach a neural net to recognize dogs, you can run it in reverse to get dog pictures. If you train a network to predict Reddit upvotes, you can run it in reverse to generate titles it predicts will be highly upvoted. We tried this and it was pretty funny. I don’t remember the exact wording, but for /r/politics it was something like “Donald Trump is no longer the president. All transgender people are the president.” For r/technology it was about Elon Musk saving Net Neutrality. You can also generate titles that will get maximum downvotes, but this is boring: it will just say things that sound like spam about penis pills.

Reddit has a feature where you can sort posts by controversial. You can see the algorithm here, but tl;dr it multiplies magnitude of total votes (upvotes + downvotes) by balance (upvote:downvote ratio or vice versa, whichever is smaller) to highlight posts that provoke disagreement. Controversy sells, so we trained our network to predict this too.

The project went to this new-ish Indian woman with a long name who went by Shiri, and she couldn’t get it to work, so our boss Brad sent me to help. Shiri had tested the network on the big 1.7 billion comment archive, and it had produced controversial-sounding hypothetical scenarios about US politics. So far so good.
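For the curious, the “magnitude times balance” scheme the narrator describes can be sketched in a few lines. This follows the description in the quote, not necessarily Reddit’s actual production code:

```python
def controversy(ups: int, downs: int) -> float:
    """Magnitude of engagement, weighted by how evenly it splits."""
    if ups <= 0 or downs <= 0:
        return 0.0  # one-sided posts aren't controversial at all
    magnitude = ups + downs
    balance = min(ups / downs, downs / ups)  # 1.0 = a perfect split
    return magnitude * balance

# A big, evenly split post scores far above an equally big
# but one-sided post:
controversy(500, 480)  # nearly the full magnitude
controversy(980, 20)   # a tiny fraction of it
```

The key move is that neither raw popularity nor raw division alone scores high; the formula rewards posts that are big *and* split the room.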

Let’s pause the story for a moment, and talk about artificial neural networks, or ANNs. I have no training in these things save one grad school class a decade and a half ago, and I’ve never used them since. I’m no genius, but I can present a muggle’s view of how they work.

Image Credit: (link)

You have nodes. The data in the nodes on the left-hand side is input, like college football statistics. Just slap them in. Then there are layers of hidden nodes, each of which houses a mathematical function, with several variables you can monkey with like twisting a knob. Each operates on its input, turning it into a different number, and passes that number on to the next node. Maybe another hidden node, maybe the “output layer,” which is the answer you’re looking for, like the score of a college football game. Sometimes the node structure is more tangled, with loops and skipped layers, but the idea is the same.

You input the statistics for the 2017 Alabama Georgia game, and it gives you the wrong score, so you go back into each of the nodes and monkey with those knobs until it gives you the right score. Then you stick in the statistics for the 2014 Clemson Syracuse game and it gives you the wrong score, so you tinker with the knobs again until it can spit out the score for both games reliably.

But since computers think very fast, you feed it every college football statistic from the last 50 years, as well as every result, in computational batches. You write another program to tweak the knobs for you, and you hit ‘go.’ Let it optimize for a week and it’s learned to predict college football scores, within some confidence interval. If Vegas hasn’t done this yet, call me.
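That knob-twisting loop is, at heart, gradient descent. Here is a deliberately tiny sketch: one input, one knob, no hidden layers, with a made-up “statistic predicts score” relationship standing in for fifty years of football data:

```python
# Toy "network": one input node, one knob (weight), no hidden layers.
# The true relationship the knob has to discover: score = 3 * stat.
data = [(x, 3.0 * x) for x in range(1, 20)]

w = 0.0      # the knob, starting in a neutral position
lr = 0.001   # how hard to twist the knob per mistake

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x            # forward pass: make a prediction
        error = y_pred - y_true   # how wrong was it?
        w -= lr * error * x       # twist the knob to shrink the error

# After enough passes, the knob settles at (very nearly) 3.0.
```

A real network has thousands to billions of these knobs and nonlinear functions between the layers, but the training loop is the same shape: predict, measure the error, twist, repeat.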

The Shiri’s Scissor story is about building a network of these little nodes that takes the topic or title of a Reddit conversation, runs it through layers of mathematical functions, and estimates how controversial the statement would be. Then, as the story goes, they run it backwards to generate perfectly controversial statements.

I’m not sure the ‘backwards’ thing really works, particularly in this context, but it’s a story so we suspend our disbelief and move on.
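To be fair, the “reverse” trick is a real technique, sometimes called activation maximization: freeze the weights and climb the gradient on the input instead. Here is a toy sketch with a fixed one-weight “predictor” (nothing here generates text; it only shows the mechanics):

```python
import math

# A frozen toy "controversy predictor": one fixed weight and a sigmoid.
W = 0.7

def predict(x: float) -> float:
    return 1 / (1 + math.exp(-W * x))  # bigger x -> higher predicted score

# "Run it in reverse": hold W fixed and nudge the *input* uphill.
x = 0.0
for _ in range(100):
    eps = 1e-5
    grad = (predict(x + eps) - predict(x - eps)) / (2 * eps)  # gradient w.r.t. x
    x += 0.5 * grad  # gradient ascent on the input, not the weights

# x has drifted toward an input that maximizes the predicted score.
```

With images this yields the eyeball-dog nightmares; with text the input space is discrete, which is a big part of why the story’s clean English sentences are the fictional leap.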

[…] Shiri’s problem was that she’d been testing the controversy-network on our [corporate] subreddit, and it would just spit out vacuously true or vacuously false statements. No controversy, no room for disagreement. The statement we were looking at that day was about a design choice in our code. I won’t tell you the specifics, but imagine you took every bad and wrong decision in the world, hard-coded them in the ugliest possible way, and then handed it to the end user with a big middle finger. Shiri’s Scissor spit out, as maximally controversial, the statement that we should design our product that way. We’d spent ten minutes arguing about exactly where the bug was, when Shiri said something about how she didn’t understand why the program was generating obviously true statements.

Here, the story gets creepy, because nobody in the room thought the idea was at all controversial until they started sharing their views of it with each other. When they did, they got into the conversational equivalent of an Asian Land War, which ended with the termination of Shiri’s employment, as well as that of another coder named David, who sided with her in the argument.

Only after that did they realize that yes, indeed, Shiri’s Scissor (as they called it) had worked.

On them.

They had inadvertently discovered their first “Scissor Statement.” A statement so controversial it was guaranteed to tear a social group apart. Their social group. And they could train the AI on any data pool.

They call up DARPA and tell them they have a superweapon. Shiri and David sue the company for wrongful termination. The company destabilizes Mozambique for the Army demo. The CEO gets into a fistfight with David, and the company is destroyed by a single internal leak of that one original Scissor Statement, the one about how they should write their code. The employees rip each other apart over an AI-derived statement of maximum controversy. The story continues,

[…] We got off easy. That’s the takeaway I want to give here. We were unreasonably overwhelmingly lucky. If Shiri and I had started out by arguing about one of the US statements, we could have destroyed the country. If a giant like Google had developed Shiri’s Scissor, it would have destroyed Google. If the Scissor statement we generated hadn’t just been about a very specific piece of advertising software — if it had been about the tech industry in general, or business in general — we could have destroyed the economy.

The narrator gets a new job doing something unrelated, sits on what he knows, and hopes the whole thing has blown over.

[…] Then came the Kavanaugh hearings. Something about them gave me a sense of deja vu. The week of his testimony, I figured it out. Shiri had told me that when she ran the Scissor on the site in general, she’d just gotten some appropriate controversial US politics scenarios. She had shown me two or three of them as examples. One of them had been very specifically about this situation. A Republican Supreme Court nominee accused of committing sexual assault as a teenager.

This made me freak out. Had somebody gotten hold of the Scissor and started using it on the US? Had that Pentagon colonel been paying more attention than he let on? But why would the Pentagon be trying to divide America? Had some enemy stolen it? I get the New York Times, obviously Putin was my first thought here. But how would Putin get Shiri’s Scissor? Was I remembering wrong?

The narrator rebuilds the Scissor in his spare time and confirms that not only was Kavanaugh on the list, so were Kaepernick, the Ground Zero Mosque, and the gay wedding cake baker. The story doesn’t mention the North Carolina transgender bathroom law, but I imagine it would qualify. Topics which are obviously true or obviously false until you speak to someone else who holds the opposite opinion.

[…] If you just read a Scissor statement off a list, it’s harmless. It just seems like a trivially true or trivially false thing. It doesn’t activate until you start discussing it with somebody. At first you just think they’re an imbecile. Then they call you an imbecile, and you want to defend yourself. Crescit eundo. You notice all the little ways they’re lying to you and themselves and their audience every time they open their mouth to defend their imbecilic opinion. Then you notice how all the lies are connected, that in order to keep getting the little things like the Scissor statement wrong, they have to drag in everything else. Eventually even that doesn’t work, they’ve just got to make everybody hate you so that nobody will even listen to your argument no matter how obviously true it is.

This may sound familiar, in the wake of the midterms.

[…] You guys, who haven’t heard a really bad Scissor statement yet and don’t know what it’s like — it’s easy for you to say “don’t let it manipulate you” or “we need a hard and fast policy of not letting ourselves fight over Scissor statements”. But how do you know you’re not in the wrong? How do you know there’s not an issue out there where, if you knew it, you would agree it would be better to just nuke the world and let us start over again from the sewer mutants, rather than let the sort of people who would support it continue to pollute the world with their presence?

[…] Delete Facebook. Delete Twitter. Throw away your cell phone. Unsubscribe from the newspaper. Tell your friends and relatives not to discuss politics or society. If they slip up, break off all contact. Then, buy canned food. Stockpile water. Learn to shoot a gun. If you can afford a bunker, get a bunker. Because one day, whoever keeps feeding us Scissor statements is going to release one of the bad ones.

Great ending. Super spooky. I love it.

It’s bullshit of course. The technology wouldn’t work. Reddit is not a great dataset for this sort of thing, because what divides people in controversy is more complicated than a title, so you’d need the AI to understand what it was reading. Also, it’s difficult to run these things in reverse and get results that aren’t nonsense.

But that got me thinking: if that doesn’t work, what could?

Dr. Evil’s Folly, a Love Story

Dr. Evil wakes up one day, hungover from a late-night bender of attaching laser beams to the dorsal fins of ill-tempered sea bass. Typical Monday. And he says, “my nukes are wet, my ICBMs are rusted, my germ warfare division wet the bed in the ’08 crash, and there’s no such thing as a Nude Bomb.”

“What could I do to destabilize all of Western Society? I need one of them Shiri’s Scissor things. Except real.” And he sets to putting it in motion.

Evil Inc. Software Development Plan:

The first big problem with the fictional account of Shiri’s Scissor is that AIs really aren’t that good at giving us results that are both cohesive and creative. It’s one or the other. Either they emulate the results they’ve been trained to emulate, or they have creative outbursts that are sometimes compelling but still very foreign to humans, such as disturbing eyeball dogs.

Image Credit: Google Deep Dream, recovered from here.

So Evil Inc. breaks the problem into layers, with humans serving as nodes in the ANN. Evil Layer 1 is the content layer, where 1000 highly trained creators constantly spit out the most controversial content possible to serve to Layer 2. And because Evil, they pay these creators on commission based on how controversial their content is.

Evil Inc. then populates Layer 2 of the ANN with humans as well, but because Evil, they get these nodes off Craigslist.

A million of them.

Each content evaluator node is set up with a computer, an outrage feed from Layer 1, and two buttons: “outraged” and “not outraged.” Then the ANN nodes are crosslinked, so any content flagged as “outraged” gets forwarded on to another node in the layer. The same content might spill through multiple nodes, depending on how outrageous it is, increasing the commission for the people working in Layer 1.

Then, because don’t forget Evil, they decide they’re not even going to pay these poor sops from Craigslist. They hook a tiny IV to each of their arms that injects an extremely small dose of a drug cocktail brewed up by Evil Inc. It’s a weak blend of heroin and cocaine, polished off with a chemical warfare thing from the 1970s that converts the concoction to pure unpolluted dopamine. But it’s a tiny dose.

The Craigslisters develop an addiction, and end up spending all their spare time giving Dr. Evil free labor in return for the dopamine hit.

To tune this human-powered Shiri’s Scissor, Evil Inc. sets up an algorithm where unused connections between the nodes fade out and heavily used ones grow stronger, so that the maximum possible outrage travels through the system. This optimization causes the outrage evaluation nodes to bunch up, and the tighter the groups draw, the better they can zero in on the outrage.
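Dr. Evil’s tuning rule, strengthen the used connections and let the rest wither, can be sketched directly. The node names and constants below are mine, purely illustrative:

```python
# Hebbian-style tuning: every connection carries a weight; edges that
# carry outrage get a boost, and every edge slowly decays each round.
weights = {("alice", "bob"): 1.0, ("alice", "carol"): 1.0}

def route_outrage(path, weights, boost=0.1, decay=0.98):
    for edge in weights:     # unused connections fade out...
        weights[edge] *= decay
    for edge in path:        # ...heavily used ones grow stronger
        weights[edge] += boost

# Alice keeps forwarding outrage to Bob and never to Carol.
for _ in range(50):
    route_outrage([("alice", "bob")], weights)

# The alice->bob link now dominates; alice->carol has withered.
```

Run long enough, the surviving heavy edges form exactly the tight clusters the text describes.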

For the final evil architectural choice, they up-connect Layer 2 to Layer 1 in the hottest bunches, feeding the most outrageous content back to the content layer for reprocessing. It is the perfect outrage engine, and it takes Evil Inc. a decade to develop.

When Dr. Evil is finally ready to go live, he looks out the window of his volcano lair, and to his shock and dismay Mark Zuckerberg and Jack Dorsey beat him to it, with a human powered outrage engine a thousand times bigger.

Barely a Scratch

This is what Facebook and Twitter are, except with every media outlet in the world providing the content layer, on advertising commission, and nearly two billion dopamine-addicted users providing the evaluation layer, for free.

Read that again.

They didn’t do this on purpose. It’s nobody’s fault. They made platforms for people to share cute videos of their cats, and that remains a large part of the content, but buried within it is Shiri’s Scissor.

The content layer is made of creators in the media, who are paid by the click, view, or subscription. The evaluation layer consists of Facebook, Twitter, and other social media users, who get a tiny dopamine fix every time someone likes their posts. Each user operates exactly like a neuron in a brain. They catch stories via their dendrites (their social media feed), and if a story outrages them, they share it, firing it down their axon to other dendrites in the network.

This is the procedure ANNs simulate. Facebook and Twitter are simulating it too, almost identically.
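The neuron analogy translates into code almost verbatim: a user “fires” (shares) only when a story crosses their personal outrage threshold. The follower graph and thresholds below are invented for illustration:

```python
# Each user is a "neuron": a feed (dendrites), a personal outrage
# threshold, and followers (the axon's targets).
followers = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
threshold = {0: 0.3, 1: 0.6, 2: 0.4, 3: 0.9}

def spread(story_outrage: float) -> set:
    """Drop a story into user 0's feed; a user reshares ("fires")
    only if the story crosses their outrage threshold."""
    shared, frontier = set(), [0]
    while frontier:
        user = frontier.pop()
        if user in shared or story_outrage < threshold[user]:
            continue
        shared.add(user)                  # the neuron fires...
        frontier.extend(followers[user])  # ...down its axon
    return shared

# A mildly annoying story stalls early; a maximally outrageous
# one saturates the whole network.
```

Notice that the platform doesn’t need to measure outrage at all; the sharing behavior *is* the measurement.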

Image Credit: Wikipedia, annotated

Then the social media platform adjusts itself to populate your feed with stuff you’re more likely to share, and its learning algorithms strengthen these connections into what Dr. Evil knows as his tuned subclusters of nodes, but what we know in the media as echo chambers. The echo chambers are a result of social media neuroplasticity.

This is what Dr. Evil’s optimization engine might look like, but which we know through social media data analysis as an “echo chamber.”

And then, for the icing on the cake, Twitter users can occasionally feed outrage back up the chain to the media, which can then literally run outrage stories about Twitter feeds to serve back to the evaluation layer. Like this:

It’s all there, right down to the dopamine cocktail drip.

It doesn’t feed us the perfect, AI-derived Scissor Statement, but it feeds us the nearest possible analogue, constantly, in real time. The ugliest feature of Zuckerberg’s Scissor is that by the time it has identified the perfect Scissor Statement, the statement is already deployed, because the target destabilization group for the Scissor Statement is the Facebook users themselves.

And this isn’t just speculation. A 2011 study in the Journal of Marketing Research mined a month of New York Times articles and did an analysis on this very topic: “what goes viral.” After controlling for variables such as article placement, author reputation, and time on the front page, they determined that anger and anxiety were two of the top three predictors of secondary article sharing. This is the Scissor at work. You can track it with mathematics.
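The study’s method, regressing sharing odds on emotion scores while holding placement and reputation fixed, can be miniaturized. The coefficients below are invented to mirror the direction of the finding (high-arousal emotions boost sharing), not taken from the paper:

```python
import math

# Toy logistic model of "went viral," with invented coefficients:
# high-arousal emotions (anger, anxiety) raise the odds of sharing,
# while a deactivating one (sadness) lowers them.
def viral_probability(anger: float, anxiety: float, sadness: float) -> float:
    logit = -2.0 + 1.5 * anger + 1.2 * anxiety - 0.8 * sadness
    return 1 / (1 + math.exp(-logit))

calm = viral_probability(0.1, 0.1, 0.1)   # bland article
angry = viral_probability(0.9, 0.7, 0.1)  # enraging article
sad = viral_probability(0.1, 0.1, 0.9)    # merely sad article
# The enraging article out-shares both of the others.
```

The real study fits coefficients like these from the data; the point here is only that “anger predicts sharing” is a measurable, quantitative claim.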

Were they to replicate the analysis for 2018, which is far more toxic than 2011, and for Vox or Fox instead of the Times, and for Facebook and Twitter instead of email sharing, I suspect the profile would be tremendously worse.

This is a recipe for disaster. What happened in Myanmar can happen anywhere. It’s not “bad actors,” it’s the system itself. Everybody needs to quit using these things.