MIT professor and historian David Mindell says Google’s utopian autonomy is a more brittle, less functional solution than a rich, human-centered automation. Do you agree?

If you follow technology news – or even if you don’t – you have probably heard that numerous companies have been trying to develop driverless cars for a decade or more. These fully automated vehicles could potentially be safer than regular cars, and might add various efficiencies to our roads, like smoother-flowing traffic.

Or so it is often claimed. But the promise of artificial intelligence, advanced sensors, and self-driving cars could be achieved without full autonomy, argue scholars with deep expertise in automation and technology – including David Mindell, an MIT professor and author of a new book on the subject.

If robotics in extreme environments is any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation.


“That’s just proven to be a loser of an approach in a lot of other domains,” Mindell says. “I’m not arguing this from first principles. There are 40 years’ worth of examples.”

Now Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing in MIT’s Program in Science, Technology, and Society, and also a professor in MIT’s Department of Aeronautics and Astronautics, has detailed the history in his new book, “Our Robots, Ourselves,” being published Oct. 13 by Viking Books.

Mindell’s new book, “Our Robots, Ourselves,” offers a behind-the-scenes look at robotics, debunking myths and exploring the relationships between humans and machines.

To be clear, Mindell thinks that “it’s reasonable to hope” that technology will help cars “reduce the workload” of drivers in incremental ways in the future. But total automation, he thinks, is not the logical endpoint of vehicle development.

“The book is about a different idea of progress,” Mindell says. “There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research – but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.”

Shooting for the “Perfect 5”

To see why Mindell thinks history shows us that automation is not the endpoint of vehicular development, consider the case of undersea exploration. For decades, engineers and scientists thought that fully automated submersibles would be a step forward from the seemingly risky work of deep-sea journeys.

Instead, something unexpected happened with submersibles: Technological progress, including improved communications technologies, made it less useful to have fully automated vehicles sweeping across the sea floor. Submersibles, Mindell notes, “are more effective when they have even a little communication” with the people monitoring and controlling them.

Or consider the Apollo program, which put U.S. astronauts on the moon six different times. Originally, Mindell notes, the expectation was that moon missions would be fully automated, with astronauts nothing more than passengers. But in the end – and partly due to the feedback of the astronauts themselves – astronauts handled many critical functions, including the moon landings.

“The sophistication of the computer and the software was used not to push people out, but to give them true control over the landing,” Mindell says.

And then there are airplanes. Commercial airliners do have many automated systems, such as cruise control-type features and even systems that can automate landings in certain circumstances. But it still takes highly trained pilots to manage those systems, make critical decisions in the cockpit – and, yes, frequently to steer the planes.

“Commercial aviation is incredibly safe,” says Mindell, himself a qualified civil aviation pilot with more than 1,000 hours of flying time to his credit. “Part of the reason is there are a lot of highly technical systems, but those systems are all imperfect, and the people are the glue that holds the system together. Airline pilots are constantly making small corrections, picking up mistakes, correcting the air traffic controllers.”

Drawing on a concept developed by MIT professor of mechanical engineering Tom Sheridan, Mindell notes that the level of automation in a project can be judged on a scale from 1 to 10. Aiming for 10, he contends, does not necessarily lead to more success in any given endeavor, compared with a happy medium of human and machine. In the space program, Mindell reflects, “The digital computer in Apollo allowed them to make a less automated spacecraft that was closer to the perfect 5.”