Dormehl starts with the 1964 World’s Fair — held only miles from where I lived as a high school student in Queens — evoking the anticipation of a nation working on sending a man to the moon. He identifies the early examples of artificial intelligence that captured my own excitement at the time, like IBM’s demonstrations of automated handwriting recognition and language translation. He writes as if he had been there.


Dormehl describes the early bifurcation of the field into the Symbolic and Connectionist schools, and he captures key points that many historians miss, such as the uncanny confidence of Frank Rosenblatt, the Cornell professor who pioneered the first popular neural network (he called them “perceptrons”). I visited Rosenblatt in 1962, when I was 14, and he was indeed making fantastic claims for this technology, saying it would eventually perform a very wide range of tasks at human levels, including speech recognition, translation and even language comprehension. As Dormehl recounts, these claims were ridiculed at the time, and indeed the machine Rosenblatt showed me in 1962 couldn’t perform any of these things. In 1969, funding for the neural net field was obliterated for about two decades when Marvin Minsky and his M.I.T. colleague Seymour Papert published the book “Perceptrons,” which proved that perceptrons could not distinguish a connected figure (one in which all parts are connected to one another) from a disconnected figure, something a human can do easily.

What Rosenblatt told me in 1962 was that the key to the perceptron achieving human levels of intelligence in many areas of learning was to stack the perceptrons in layers, with the output of one layer forming the input to the next. As it turns out, the Minsky-Papert perceptron theorem applies only to single-layer perceptrons. As Dormehl recounts, Rosenblatt died in 1971 without having had the chance to respond to Minsky and Papert’s book. It would be decades before multi-layer neural nets proved Rosenblatt’s prescience. Minsky was my mentor for 54 years until his death a year ago, and in recent years he lamented the “success” of his book and had become respectful of the recent gains in neural net technology. As Rosenblatt had predicted, neural nets were indeed providing near human levels (and in some cases superhuman levels) of performance on a wide range of intelligent tasks, from translating languages to driving cars to playing Go.
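For readers curious what "stacking perceptrons in layers" means concretely, here is a minimal sketch (my own illustration, not from the book): a single Rosenblatt unit is just a thresholded weighted sum, and certain functions that no single layer can compute become easy once the outputs of one layer feed a second. The textbook case is XOR; the weights below are hand-chosen for illustration rather than learned.

```python
def perceptron(weights, bias, inputs):
    """One Rosenblatt unit: fire (1) if the weighted sum clears the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_two_layer(x1, x2):
    """Two layers, as Rosenblatt proposed: hidden units feed a second unit.
    No single perceptron can compute XOR, but this stacked pair can."""
    h1 = perceptron([1, 1], -0.5, [x1, x2])    # fires if x1 OR x2
    h2 = perceptron([1, 1], -1.5, [x1, x2])    # fires if x1 AND x2
    return perceptron([1, -1], -0.5, [h1, h2])  # OR but not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layer(a, b))
```

The Minsky-Papert result about connected versus disconnected figures is a deeper theorem than this toy, but the structural point is the same: the limitation they proved binds only the single layer, not the stacked architecture Rosenblatt described.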

Dormehl examines the pending social and economic impact of artificial intelligence, for example on employment. He recounts the positive history of automation. In 1900, about 40 percent of American workers were employed on farms and over 20 percent in factories. By 2015, these figures had fallen to 2 percent on farms and 8.7 percent in factories. Yet for every job that was eliminated, we invented several new ones, with the work force growing from 24 million people (31 percent of the population in 1900) to 142 million (44 percent of the population in 2015). The average job today pays 11 times as much per hour in constant dollars as it did a century ago. Many economists say that while this may all be true, the future will be different because of the unprecedented acceleration of progress. Though he voices some caution, Dormehl shares my optimism that we will be able to deploy artificial intelligence in the role of brain extenders to keep ahead of this economic curve. As he writes, “Barring some catastrophic risk, A.I. will represent an overall net positive for humanity when it comes to employment.”