I.

By Ted Chiang, on Buzzfeed: The Real Danger To Civilization Isn’t AI: It’s Runaway Capitalism. Chiang’s science fiction is great and I highly recommend it. This article, not so much.

The gist seems to be: hypothetical superintelligent AIs sound a lot like modern capitalism. Both optimize relentlessly for their chosen goal (paperclips, money), while ignoring the whole complexity of human value.

It’s a good point, and I would have gone on to explain the more general idea of an optimization process. Evolution optimizes relentlessly for reproductive fitness, capitalism optimizes relentlessly for money, politics optimizes relentlessly for electability. Humans are sort of an optimization process too, but such a weird edge case that “non-human optimizers” forms a natural category for people more used to the human variety. Both future superintelligences and modern corporations are types of non-human optimizers, so they’ll naturally be similar in some ways – though not in so many that you can push the comparison arbitrarily far without it carrying you off a cliff. And one of those ways will be that even though they both know humans have complex values, they won’t care. Facebook “knows” that people enjoy meaningful offline relationships; after all, it’s made entirely of human subunits who know that. It’s just not incentivized to do anything with that knowledge. Future superintelligences will likely be in a similar position – see section 4.1 here.

But Chiang argues the analogy proves that AI fears are absurd. This is a really weird thing to do with an analogy. Science has always been a fertile source of metaphors. The Pentagon budget is a black hole. The rise of ISIS will start a chain reaction. Social responsibility is in our corporate DNA. But until now, nobody has tried to use scientific metaphor as evidence in scientific debates. For a long time astronomers were unsure whether black holes really existed. But nobody thought the argument that “the REAL black hole is the Pentagon budget!” deserved to be invited to the discussion.

Actually this is worse than that, because the analogy is based on real similarities of mechanism. “People say in the future we might have fusion power plants. But look at all these ways fusion power plants resemble stars! Obviously stars are the real fusion power plants. And so by this, we can know that the future will never contain fusion power.” Huh?

II.

Still, Chiang pursues this angle relentlessly. Though he doesn’t use the word, he bases his argument around the psychological concept of projection, where people trying to avoid thinking about their own attributes unconsciously attribute them to others:

Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted…It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own. Which brings us back to the importance of insight. Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

In my own psychiatric practice, I am always very reluctant to assume a patient is projecting unless I know them very well. I’ve written more about the dangers of defense mechanism narratives here, but the short version is that amateur therapists inevitably end up using them to trivialize or psychologize a patient’s real concerns. I can’t tell you how many morons hear a patient say “I think my husband hates our kids”, give some kind of galaxy-brain level interpretation like “Maybe what’s really going on is you unconsciously hate your kids, but it’s more comfortable for you to imagine this of your husband”, and then get absolutely shocked when the husband turns out to be abusing the kids.

Accusing an entire region of California of projection is a novel psychoanalytic maneuver, and I’m not sure Chiang and Buzzfeed give it the caution it deserves. The problem isn’t that they don’t have a plausible-sounding argument. The problem is that this sort of hunting-for-resemblances is a known bug in the human brain. You can do it to anything, and it will always generate a plausible-sounding argument.

Don’t believe me? What about black holes? Scientists say they exist, but I think these scientists are just creating “a devil in their own image, a boogeyman whose excesses are precisely their own.” Think about it. Superstar physicists like Einstein help university STEM departments suck up all the resources that should go to the humanities and arts. So of course when Einstein tries to imagine outer space, he thinks of super-stars that suck up all the resources from surrounding areas!

And chain reactions! You know what was a chain reaction? Enrico Fermi discovered some stuff about atoms. Then Leo Szilard wrote a letter to President Roosevelt saying it might have military applications. Then Roosevelt set up a project to develop military applications. One thing led to another, and a couple of Japanese cities got vaporized and the rest of the world teetered on the brink of total annihilation. Of course nuclear physicists became obsessed with the idea of chain reactions: they were living in one. They expected that subatomic particles would behave the same way they did – start out working on innocent little atomic collisions, have everything snowball out of control, and end up culpable for a nuclear explosion.

Watson and Crick worked together pretty closely on the discovery of DNA. So they started imagining organic molecules doing the same thing they did – two of them, intertwining. Just as they published papers which became the inspiration for an entire body of knowledge, so DNA was full of letters that caused the existence of an entire body. Epigenetics is relevant but generally ignored for the sake of keeping things simple, so it represents Rosalind Franklin.

I could go on all day like this. In fact, I have: this was the central narrative of my novel Unsong, where the world runs on “the kabbalistic method” and correspondences between unlike domains are the royal road to knowledge. You know who else wrote a story about a world that ran on kabbalah? Ted Chiang. This is not a coincidence because nothing is ever a coincidence.

III.

But Chiang’s comparison isn’t even good kabbalah. The correspondences don’t really correspond; the match-ups don’t really match.

He bases his metaphor on the idea that worries about AI risk come from Silicon Valley. They don’t. The tech community got interested later. The original version of the theory comes from Nick Bostrom, a professor at Oxford, and Eliezer Yudkowsky, who at the time I think was living in Chicago. It was pushed to public notice by leading AI scientists all around the world. And before it was endorsed by Silicon Valley tycoons, it was endorsed by philosophers like David Chalmers and scientists like Stephen Hawking.

(Hawking, by the way, discovered that information could escape black holes despite a bunch of science saying they should be completely inert. This seems suspiciously similar to how he himself is completely paralyzed, but manages to convey information to the outside world via an artificial speaking device. More projection?)

Forcing the argument to rely on “well, also lots of people in Silicon Valley think this too” makes it hopelessly weak.

Consider: lots of Hollywood celebrities speak out about global warming. And we’re gradually finding out that some pretty awful things go on in Hollywood. Does that mean “The Real Problem Isn’t Global Warming, It’s Hollywood Harassment”? Does that license some author to write (while scientists facepalm worldwide) that because he doesn’t feel like carbon dioxide should be able to warm the climate, any claims to the contrary must be Hollywood celebrities projecting their own moral inadequacies? (possible angle: celebrities’ utterances emit carbon dioxide, and create a stifling climate for women in the entertainment industry)

If this sounds like a straw man to you, I challenge you to come up with any way it differs from what Chiang is doing with AI risk. You take a scientific controversy over whether there’s a major global risk. You ignore the science and focus instead on a subregion of California that seems unusually concerned with it. You point out some bad behavior of that subregion of California. You kabbalistically connect it to the risk in question. Then you conclude that people worried about the risk are just peddling science fiction.

(wait, of course Chiang interprets this as people peddling science fiction. He’s a science fiction writer! More projection!)

If the Hollywood example sounds more blatant or less plausible than the AI example, I maintain it’s only because we’re already convinced global warming is real and dangerous. That conviction gives the idea a protective shell of legitimacy, and grants it extra resistance against sophistry. That’s all. That’s the whole difference.

This isn’t how risk assessment works. This isn’t how good truth-seeking works. Whether or not you believe in AI risk, you should be disappointed that this is how we deal with issues that could be catastrophic to get wrong.