Phil Torres is a “riskologist” who studies existential risk, and seems to have it in for the New Atheists (Salon, of course, is always willing to provide him with a platform for that). Torres’s latest Salon piece is an attack on Steve Pinker and his latest book (Enlightenment Now, or EN), a piece called “Steven Pinker’s fake Enlightenment: His book is full of misleading claims and false assertions.” Torres’s piece is pugnacious, ending with a suggestion that Pinker may actually be hiding stuff that he knows is wrong:

Let me end with a call for action: Don’t assume that Pinker’s scholarship is reliable. Comb through particular sentences and citations for other hidden — or perhaps intentionally concealed — errors in “Enlightenment Now.” Doing so could be, well, enlightening.

When I read Torres’s piece, I wasn’t impressed, as Pinker’s “errors and false assertions” seemed to consist mainly of quotations used in EN that, claimed Torres, don’t accurately represent the actual views of the people quoted (Torres contacted some of them). There were also differences between Pinker’s and Torres’s views on the dangers of artificial intelligence (AI), which are differences of opinion and not “misleading claims”. Torres proffered no substantive criticism of the data Pinker presents in EN to show progress in the moral and physical well-being of our species. Those data, after all, are what support the main point of the book.

But I wrote to Steve, asking him what he thought about the Salon article. He replied yesterday, and I thought his reply was substantive enough that it deserved to be shared here. I asked him if I could post it, and he kindly agreed. Steve’s email to me is indented below.

Hi, Jerry,

Thanks for asking about the Torres article. Phil Torres is trying to make a career out of warning people about the existential threat that AI poses to humanity. Since EN evaluates and dismisses that threat, it poses an existential threat to Phil Torres’s career. Perhaps not surprisingly, Torres is obsessed with trying to discredit the book, despite an email exchange in which I already responded to the straws he was grasping.

His main objection is, of course, about the supposedly existential threat of AI. Unfortunately, his article provides no defense against the arguments I made in the “Existential Threats” chapter, just appeals to the authority of the people he agrees with. This is fortified by the rhetorical trick of calling the position he disagrees with “denialism,” hoping to steal some of the firepower of “climate denialism.” This is desperate: climate change is real, and accepted by 97% of climate scientists. The AI existential threat is completely hypothetical, and dismissed by most AI researchers; I provide a list and the results of a survey in the book.

Torres disputes my inclusion of Stuart Russell in the list, since Russell does worry about the risks of “poorly designed” AI systems, like the machine with the single goal of maximizing paper clips that then goes on to convert all reachable matter in the universe into paper clips. But in that same article, Russell states, “there are reasons for optimism,” and lists five ways in which the risks will be managed—which strike me as reasons why the apocalyptic fears were ill-conceived in the first place. I have a lot of respect for Russell as an AI researcher, but he uses a two-step common among AI-fear-sowers: set up a hypothetical danger by imagining outlandish technologies without obvious safeguards, then point out that we must have safeguards. Well, yes; that’s the point. If we built a system that was designed only to make paper clips without taking into account that people don’t want to be turned into paper clips, it might wreak havoc, but that’s exactly why no one would ever implement a machine with the single goal of making paper clips (just as no complex technology is ever implemented to accomplish only one goal, all other consequences be damned—even my Cuisinart has a safety guard). An AI with a single goal is certainly A, but it is not in the least bit I.

The rest of Torres’s complaint consists of showing that some of the quotations I weave into the text come from people who don’t agree with me. OK, but so what? Either Torres misunderstands the nature of quotation or he’s desperate for ways of discrediting the book. The quotes in question were apt sayings, not empirical summaries or even attributions of positions, and I could just as easily have paraphrased them or found my own wording and left the author uncredited. Take the lovely quote from Eric Zencey (with whom I have corresponded for years), that “There is seduction in apocalyptic thinking. If one lives in the Last Days, one’s actions, one’s very life, take on historical meaning and no small measure of poignance.” In our correspondence, Zencey said, “I did caution about the narcissistic seductiveness of apocalyptic thinking,” and added “that doesn’t make it wrong.” Indeed, it doesn’t, but it’s still narcissistically seductive, which is why I quoted it, perfectly accurately in context. As I wrote to Zencey, I think his argument that we’re approaching an apocalypse is in fact wrong, since it relies on finite-resource, static-technology thinking, and ignores the human capacity for innovation. But there was no need to pick a fight with him in that passage, since I examined the issue in detail in the chapter on The Environment. The bottom line is that I did not attribute to Zencey the position that apocalyptic fears are groundless, just that they are seductive (as he himself acknowledges), and he deserves credit for the observation.

Torres was similarly distracted by the quote from a New York Times article that “these grim facts should lead any reasonable person to conclude that humanity is screwed.” These pithy words, which I wove into an irreverent transition sentence, were meant to introduce the topic of the discussion, namely fatalism and its dangers. I certainly wasn’t claiming that the Times writer was agreeing with any particular position, let alone the entire argument! (Sometimes I think I should follow some advice from my first editor: “Never use irony. There will always be readers who don’t get it.”)

Just as pedantic is Torres’s cavil about the hypothetical (indeed, deliberately far-fetched) scenario of growing food under nuclear-fusion-powered lights after a global catastrophe. Torres multiplies the muddles: I was not claiming that anyone endorsed this sci-fi scenario (though a footnote credited the pair that thought up the idea), and my addition of nuclear fusion to the scenario is consistent, not inconsistent, with their observation that current electricity sources would be non-starters.

In a revealing passage, Torres seems to think that EN is about “optimism” versus “pessimism,” and defends his fellow runaway-AI speculators as “optimists” because they are the ones who believe that “if only we survive, our descendants could colonize the known universe, eliminate all disease, reverse aging, upload our minds to computers, radically enhance our cognition, and so on.” I don’t know whether we’ll ever colonize the known universe, but Torres is already writing from a different planet than the one I live on. It’s true that EN does not weigh apocalyptic sci-fi fantasies against utopian sci-fi fantasies. The threats I worry about are not AI turning us into paper clips but rather climate change, nuclear war, economic stagnation, and authoritarian populism. The progress I endorse is not colonizing the universe or uploading our minds to computers but protecting the Earth, eradicating specific infectious diseases, reducing autocracy, war, and violent crime, expanding education and human rights, and other worldly hopes.

As for the supposed scholarly errors: Torres pointed out that the “Center for the Study of Existential Risk” should be “Centre for the Study of Existential Risks.” I thanked him and corrected it in the subsequent printing.

Thanks again, Jerry, for soliciting my response, and sorry for going on so long. If I had more time I would have made it shorter.

Best,

Steve