In 1970, Life magazine published an article about a Stanford University research project that had resulted in the construction of what it called the first-ever “electronic person.” This creature, called Shakey, was a six-foot-tall robot on wheels, and it looked like a filing cabinet carrying around an elaborate video camera. It was an early experiment in artificial intelligence, funded by the Defense Advanced Research Projects Agency, or DARPA—the technological research arm of the Pentagon—and conceived by the Canadian applied physicist Charles Rosen. It was the first robot designed to be entirely autonomous, to reason and make decisions based on information about its environment. Shakey was intended as a prototype for more advanced automatons that would eventually replace human beings in dangerous and hostile territories, and its makers saw it as the advance guard of a near future in which humans would be emulated, and eventually replaced, by intelligent machines. Shakey’s degree of autonomy was much more limited than that suggested by Life’s Promethean claims; its movements were slow and halting, and its battery tended to die after a few minutes of juddering operation. But many of the project’s innovations eventually entered the bloodstream of modern technology: the mapping software in your smartphone, for instance, was first used in Shakey, and Siri’s voice-command technology is a successor of a speech-control mechanism that was pioneered for the project.

Shakey is introduced in the early pages of John Markoff’s new book, “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” as an example of an ongoing conflict between artificial intelligence and the linked but divergent project of intelligence augmentation. (The robot was intended to replace people in specific situations, but its technologies wound up augmenting the intelligence, or at least the efficiency, of flesh-and-blood humans.) Markoff begins with the story of Bill Duvall, a young programmer hired to write code for Shakey. Duvall became frustrated with the limitations of the robotics project and decamped to another research group, just down the hall at Stanford Research Institute, which was engaged in an entirely different sort of enterprise, called the N.L.S., or “oN-Line System.” This project, led by a computer scientist named Doug Engelbart, was aimed at creating “an interactive system to capture knowledge and organize information in such a way that it would now be possible for a small group of people—scientists, engineers, educators—to create and collaborate more effectively.” The project, in other words, was an early version of the Internet. Not long after walking down the hall and leaving Shakey to its own limited and whirring devices, Duvall used Engelbart’s N.L.S. software to connect a computer in Menlo Park to one in Los Angeles via a data line rented from a phone company. “Bill Duvall,” as Markoff puts it, “would become the first to make the leap from research to replace humans with computers to using computers to augment the human intellect, and one of the first to stand on both sides of an invisible line that even today divides two rival, insular engineering communities.”

For Markoff, the difference between these two fields, A.I. and I.A., is the difference between a future in which human capabilities are enhanced by technology and one in which humans are made effectively obsolete, versioned out by the consequences of our own ingenuity. These two ways of thinking about our relationship with technology, he writes, have remained in a state of unresolved conflict: “One approach supplants humans with an increasingly powerful blend of computer hardware and software. The other extends our reach intellectually, economically, and socially using the same ingredients.”

Markoff’s argument, made in various ways using various examples—industrial mass production, robotics, machine learning, and so on—is that we have now reached a point in the development of these technologies where we can no longer avoid bridging this chasm between the A.I. and I.A. philosophies. Not to do so, he argues, would be to risk a future in which humans become effectively obsolete as meaningful actors in our own world. Perhaps the most compelling version of this dichotomy, or at least the most vivid and straightforward, is that presented by the likely advent of autonomous cars. Sebastian Thrun, the former director of the Stanford Artificial Intelligence Laboratory who now heads the Google self-driving car project, is evangelical about the extent to which a future of autonomous vehicles will prevent countless deaths and injuries from collisions caused by human error. Thrun is, of course, aware that his project, if successful, will wipe out a huge section of the employment economy: there will, in all likelihood, be no more commercial drivers—no more truckers, no more taxi drivers, no more couriers. This is an example of what Markoff refers to as a “human out-of-the-loop” model, in which software replaces human agency entirely, as opposed to augmenting the functioning of humans “in-the-loop,” as with things like assisted parking or G.P.S. technology. An entire realm of human activity will be obliterated in an act of wholesale abdication to machines.

I was in Pittsburgh last week for a book I’m working on, and much of my time there was spent being driven around in taxi cabs and Ubers. Some of the conversations were less engaging than others: I spent an hour in unyielding freeway traffic, for instance, with a woman who talked obsessively about doughnuts and doughnut fillings for pretty much the entire time, and I underwent a grueling trip to the airport with an Uber driver who delivered an oppressively boring monologue about his difficulty in getting his motivational-speaking career off the ground. I will admit that, on these occasions, my thoughts drifted toward a future in which I would sit in the back of an autonomous A-to-B device and be piloted, safely and silently, to my chosen destination. But I also had a lot of interesting conversations in cabs about Pittsburgh itself—about its past as a smoking forge of American industrial capitalism, its long years of decline and abandonment in the wake of the steel industry’s collapse, and its ongoing regeneration as a center of technology and medicine. And I found myself thinking, on these occasions, of that same future in a different way, of a world in which people would be replaced by machines, not just as drivers, but as meaningful actors in vast swaths of human economic and social endeavor. I was thinking, in other words, of how the world’s future might come to look something like Pittsburgh’s recent past.

In a discussion of the growing number of “lights out” factories—such as the Philips electric razor plant outside Amsterdam, in which human labor has been almost entirely replaced by robots—Markoff invokes Norbert Wiener, the M.I.T. mathematician who pioneered the field of cybernetics in the years immediately after the Second World War. Wiener is seen as a crucial figure in the creation of the computer age, but he was also, as Markoff points out, a speaker of unpalatable truths about the potential consequences of technology’s ascent. He warned that automation could reduce the value of a “routine” factory employee to the point that “he is not worth hiring at any price,” and that we might therefore be “in for an industrial revolution of unmitigated cruelty.” In the past, it has overwhelmingly been blue-collar workers whose jobs have been at risk from the mechanization of labor; artificial intelligence has now progressed to the point that the intellectual labor of the white-collar professions will soon be equally under threat from intelligent, or at least competent, software. Accountancy, for instance, and large areas of the legal profession are on the front line of this encroachment of machine intelligence. It’s hard to avoid an anxiety, in other words, that we’re all going to wind up out of the loop, and that the loop itself will consist of the machines and their owners. (It’s worth bearing in mind, in any discussion of the economic effects of automation, that the word “robot,” which was first used in a 1920 play by the Czech writer Karel Čapek, comes from the Czech word robota, meaning “forced labor.”)