Peter Coffee

I’m occasionally able to show up in Sebastopol, north of San Francisco, for a “Foo Camp” gathering—named from the initials of “friends of O’Reilly,” though originally inspired by a desire to host a “foo bar”—at the invitation of publisher Tim O’Reilly, whose company’s books, with their distinctive animal drawings on the covers, can be found in the libraries of alpha geeks everywhere. Tim’s weekend “unconference” format invites both presentations and conversations: I gave a presentation this year, on themes similar to those of my contribution here this past April, but the greater privilege was to be in a room full of people with diverse views on “the singularity.”

We got together at the suggestion of infosec guru Ed Felten, who in my mind is the “Freedom to Tinker” guy even though he’s only one of the frequent contributors to that ongoing commentary. Ed is best known for his authoritative analyses of questions like the trustworthiness of voting machines and the truthfulness of various other software claims. You’ll understand why I wanted to absorb some of the overflow of brainpower in a room where Ed was just the guy in charge of the whiteboard, after he invited anyone interested to meet and decide “Is the Singularity Bullshit?”

It was pretty clear that Ed was talking about “the singularity” as turned into a meme by Ray Kurzweil, who defined for many the terms of debate in his 2005 book “The Singularity Is Near” – although Kurzweil was far from the first, and is even farther from being the only one, to use this word for a whole cluster of related ideas and questions. We spent most of our hour that Saturday noon trading thoughts on the key components of any useful definition of the word. There were many thoughts, and many definitions: this is not just a matter of word games, because the answers to these questions could have real impact on both personal choices and macroeconomic policies.

Foo Camp etiquette discourages quoting individual participants, or sharing photos of the sessions, so I will not be quoting any of the people who were in the room that day. Suffice it to say that we wound up with thirteen different elements of what the singularity might entail, which we boiled down toward the end of the hour into five key consequences.

Kurzweil’s singularity is framed by the subtitle of his 2005 book, “When Humans Transcend Biology.” In its first chapter, he says that it is “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Crucially, he then adds that this is about “the law of accelerating returns” and the phenomenon of exponential growth.

Let’s look at those four things, in this order: the implications of “exponential”; the acceleration of change; the depth of its impact; and the key word, “irreversible.”

An everyday example of ‘exponential’ is the loose description of a “Moore’s Law rate” of improvement as something doubling every 18 months. Small differences in exponentiality turn quickly into enormous shifts. For example, I recently compared the best-selling desktop PC of 2015 (the HP 6305) to the 1981 IBM PC that most of us would consider the zero point: I found that across those 34 years, CPU speed (clock rate only) rose 20%/year, compounded yearly, while memory grew by 40%/year and storage capacity by 50%/year.

These differences in growth rate are, I would argue, the reason why PC sales are actually in decline: what people most want to do with their machines is today dominated by volume and velocity of data, not by quantity or intensity of computation, and the machines being sold are out of balance with current demands. A modern PC has a processor more than 750 times as fast as in 1981, but the memory whose contents that processor is asked to manipulate is more than 65,000 times as capacious.
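The arithmetic behind those multipliers is ordinary compounding. A minimal sketch in Python (the rates are the rounded figures quoted above, so the products land near, rather than exactly on, the column’s “more than 750 times” and “more than 65,000 times” figures):

```python
# Compound annual growth: the multiplier after n years at rate r is (1 + r) ** n.
YEARS = 2015 - 1981  # 34 years between the 1981 IBM PC and the 2015 HP desktop

def multiplier(rate: float, years: int = YEARS) -> float:
    """Total growth factor from compounding `rate` once per year."""
    return (1 + rate) ** years

for name, rate in [("CPU clock", 0.20), ("memory", 0.40), ("storage", 0.50)]:
    print(f"{name:9s} at {rate:.0%}/yr over {YEARS} yrs -> x{multiplier(rate):,.0f}")
```

Note how a mere 20-point difference in annual rate (20% versus 40%) separates a factor in the hundreds from a factor in the tens of thousands over the same 34 years.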

Fortunately, bandwidth to a typical middle-class home has been growing at 45%/year (from episodic dial-up, to always-on broadband): many powerful data-manipulation services are now as close as the nearest WiFi access point. Why buy a multi-core PC capable of real-time video compression, when free services do it in the background as a side effect of uploading from your phone?

I just used, almost in passing, the word “free” – perhaps the most surprising word of the connectivity revolution. An unreasonably thorough study of the movie “Blade Runner,” by Dave Addey, includes the observation that in 1968, the movie “2001: A Space Odyssey” forecast video call prices in its title year at $1.33 per minute; in 1982, “Blade Runner” showed video calls in 2019 costing $2.50 per minute. That works out to a perfectly reasonable inflation rate of 3.6%/year – but today, three years before “Blade Runner” takes place, the actual prevailing cost of video calls (at the margin) is $0.00 per minute. “Some things about the future really are hard to predict,” Addey observes: exponentiality surprises us, especially if we’re trying to predict one thing (analog phone service evolution) while another (digital transformation) is coming out of nowhere at even a slightly faster rate.
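That 3.6%/year figure can be checked with one line of compounding arithmetic. The sketch below assumes the comparison runs between the two films’ title years, 2001 and 2019 (an 18-year span), which is the reading consistent with the quoted rate:

```python
# Implied annual "video-call inflation" between the two films' title years.
# Assumption: the 3.6%/yr figure compares 2001's $1.33/min with 2019's $2.50/min.
p_2001, p_2019 = 1.33, 2.50
years = 2019 - 2001  # 18 years between the title years

rate = (p_2019 / p_2001) ** (1 / years) - 1
print(f"implied inflation: {rate:.1%}/yr")  # prints "implied inflation: 3.6%/yr"
```

The same formula, run against the actual 2016 price of $0.00, has no finite answer at all – which is the column’s point about exponential surprise.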

What about acceleration? Among the singularity’s defining ideas, we agreed during that energetic hour, is that increments of change create additional forces that make the next increment come more quickly. For example: when the world gets flatter, in Tom Friedman’s terms, experts find each other more easily and collaborate more effectively across a greater diversity of disciplines.

Eventually, argue Kurzweil and many others, that expertise manifests itself in a machine intelligence that starts to design its own Version N+1 far more quickly than humans can comprehend. This may already have happened, in principle: “Deep Blue was taught by chess grandmasters to play chess, but the DeepMind Go system taught itself how to play Go,” observed Salesforce’s chief data scientist Vitaly Gordon earlier this year.

Depth of impact is indicated by a working paper from the United States’ National Bureau of Economic Research, in which Loukas Karabarbounis and Brent Neiman of the Booth School of Business at the University of Chicago study the changing share of global income going to human labor. When the cost of capital is nearly zero, as it has been for the past several years, they find that this powerfully accelerates the replacement of people by technology across a broad range of countries and industries.

Inexpensive robots, with nearly free financing of their purchase, are rather more impactful than free video calls – and they fall squarely within the five potential impacts that, as I said, we concluded by discussing at Foo Camp. Those consequences were loss of control; loss of understanding; emergence of a world unsafe for humans; massive inequality among people; and a world in which only the cybernetically or biologically augmented human is economically relevant.

Any of these effects, it could be argued, is observable to some degree already or is clearly on the way. For example, self-driving cars will probably make us safer in the aggregate, but may soon be expected to sacrifice their own occupants if that minimizes overall harm – and the typical rider will not be able to comprehend the algorithms that make that decision.

What, then, of irreversibility? Are we crossing a boundary, as suggested by the notion of “singularity” (and its astronomical root in the irreversible fall into a black hole), from which we cannot simply decide at a future time to return? Karabarbounis and Neiman are academically circumspect on this point, but they conclude their paper with the comment: “Standard macroeconomic models do not allow for long-term trends in labor shares, a strong prediction which we show to be violated in the data since the early 1980s. We hope our results generate new frameworks and analyses useful for thinking about these future trends.” In layman’s terms, we’ve never been here before.

All we can do is try to look far, far ahead. Even children’s lessons and stories warn of what happens when you think it’s premature to plan for exponential change.