0:33 Intro. [Recording date: November 16, 2015.] Russ: David Mindell... is the author of Our Robots, Ourselves: Robotics and the Myths of Autonomy.... Now, you open with the tragic story of Air France Flight 447, and you use the crash and the recovery of the wreckage as symbolic of our interaction with autonomous technology and robots. Why that story, and what does it teach us? Guest: Well, the Air France story is a story about a failed handoff, where the automation onboard an airplane found a relatively minor fault and handed control of the plane back to the human pilots too suddenly and ungracefully, and surprised them. And they had lost some of their skills from flying too much with the automated systems, and they lost control of the airplane--which was actually a perfectly good airplane about a minute into the crisis. And so they went from tens of thousands of feet, flying through the sky, to spiraling into the ocean, tragically losing all aboard. And that's a story about what can happen when automation is in a life-critical system, in an extreme environment, and the relationships between the humans and the machines are not properly engineered to exchange control in a graceful way. Now, interestingly, the wreckage of Air France was then found by another kind of autonomous vehicle, an autonomous underwater vehicle. And that vehicle was able to do things, still under the control of its human supervisors, that were difficult to do under other circumstances. Russ: On the crash: the pilots had thousands of hours of experience. Guest: Yeah. The pilots were experienced pilots. It was late at night. They were probably a little fatigued. They were probably maybe a little distracted. And all of a sudden they got handed this, you know, screaming airliner with lots of different alarms; it was hard to sort through what was really happening. One pilot pulled back on the stick; one pilot pushed forward on the stick. The captain himself was not even in the cockpit at the moment of the crisis. The accident report-- Russ: He got there fairly quickly-- Guest: cited total loss of cognitive control of the situation. Russ: And that was started--there was a set of protocols that were unleashed because of icing on a part of the airplane, correct? Guest: That's right. The engineers who had programmed the system told the computer that it could not fly if there was ice on the pitot tubes. Actually, there are ways an airplane can fly without the data from the pitot tubes; in fact, unmanned aircraft fly that way all the time, or at least they have the ability to fly that way. But the human programmers had said, 'If all the data coming in isn't perfect, then you've got to check out altogether.' And that's basically what the computer did. Russ: Which seems reasonable. Guest: Well, it seems reasonable, although not if you thought carefully about what scenario you were likely to hand the human pilots in a kind of distress situation without too much warning. Russ: So, I'm not an expert on aviation, but what struck me reading the story, which I had not read carefully before, is: Why doesn't this happen more often? Or does it, and people just recover sufficiently in that kind of situation? In other words, why aren't there alarms set off by computers--autonomous computer algorithms that hand off control of the airplane to humans with a lot of uncertainty about what's really going on? Guest: Well, at some level it happens all the time. Humans are very good at adapting to these small errors.
The Air France 447 case, there's no question, was a kind of corner case, an extreme case. By and large, computerized airliners are very, very safe; they certainly have had a role in the tremendous decrease in accidents in commercial airline flight over the decades. At the same time, the computers are not perfect, and the people are constantly fidgeting around with them, correcting small mistakes, you know, reacting to unanticipated situations. The FAA's (Federal Aviation Administration's) study of cockpit automation estimated that the proportion of commercial airline flights that go exactly according to plan is 10%. And in the other 90% of cases there's always some change--a change in routing, a change in circumstances that the human pilots adapt to. Russ: So, I apologize to anyone listening to this episode who has downloaded it and is on a flight. But I guess even though it's only 10%, most of the time--an overwhelming percentage of the time--that handoff to human control goes fine. Guest: That's correct. And one of the things you can say is that an increasing proportion of airline accidents--it's true of automobile accidents, too--comes from human error. And that's true, because the mechanical systems are becoming more and more reliable. But we tend to know a lot about accidents. You have to be very careful about studying these problems just by studying accidents. Accidents get a lot of attention. They get a lot of resources. They are studied very carefully, second by second. We know a lot less about normal operations, the sort of everyday things that happen in the 40,000+ commercial airline flights that take place every single day in this country. And in those normal situations, people are constantly preventing accidents. Again--correcting small errors, correcting small failures, responding to changes in situations. And without the people in those loops, you'd probably have many, many more accidents. Russ: Yeah. I recently took a flight where on the outbound leg, the landing was so spectacular that the flight attendant said, 'That's how you land an airplane.' And we all applauded in appreciation. On the return trip, the pilot, it felt like, bounced the plane. About as unpleasant a landing as I've had on a commercial flight. I've obviously not had many unpleasant experiences. But it was clearly--something had not gone correctly. We had no idea what it was. There was silence from the cockpit. I thought there'd be a 'Sorry about that, folks.' But it just--they taxied to the gate, and we dutifully, sheeplike, got off the plane. But something unusual happened there that we were unaware of. Guest: Mmmhmm. Yeah. You just don't know what that is. And you don't know--maybe there was an automatic landing system in one or both of those. Russ: Right. And as you point out, the phrase is 'autopilot'--and of course an automated landing can be done now. And takeoff, of course. And most of the time it would all work fine.
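A minimal sketch of the handoff logic Mindell describes, assuming a toy flight-control supervisor: the brittle rule he paraphrases ('if the data isn't perfect, check out altogether') drops straight to manual with no warning, while a more graceful design degrades first and hands over only once the crew acknowledges. All states, names, and conditions here are invented for illustration; this is not the actual logic of any airliner.

```python
from enum import Enum

class Mode(Enum):
    FULL_AUTO = 1   # normal autopilot
    DEGRADED = 2    # hold last safe attitude/thrust on remaining data, alert crew
    MANUAL = 3      # pilots flying

def supervise(airspeed_sensors_agree, crew_acknowledged, mode):
    """Toy supervisor: on bad airspeed data, degrade gracefully instead of
    dropping straight to MANUAL with no warning (the AF447-style handoff)."""
    if mode is Mode.FULL_AUTO and not airspeed_sensors_agree:
        return Mode.DEGRADED      # keep flying on remaining data, raise an alert
    if mode is Mode.DEGRADED and crew_acknowledged:
        return Mode.MANUAL        # hand over only once the crew is ready
    return mode

mode = Mode.FULL_AUTO
mode = supervise(airspeed_sensors_agree=False, crew_acknowledged=False, mode=mode)
print(mode)  # Mode.DEGRADED: alert the crew, don't just quit
mode = supervise(airspeed_sensors_agree=False, crew_acknowledged=True, mode=mode)
print(mode)  # Mode.MANUAL: handoff happens after acknowledgment
```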

8:00 And what we're going to talk about today, those of you listening, is the role that autonomy plays in the advance of technology and robotics, and we're going to get to driverless cars and a whole bunch of things. But I'd like to hear about your personal experience, which you cover in some detail in the book. Tell the listeners about your own involvement in these extreme environments with semi-autonomous robotics. Guest: Sure. The book starts where my career started, which is in the very deep ocean. And I began as an engineer in the 1980s working with early robotic vehicles in the very deep ocean. And at the time we thought they were going to replace the manned vehicles and move eventually toward fully autonomous vehicles that would just go out and spend months at sea doing their work. And one thing that surprised us right away was that the robotic vehicles weren't cheaper and they weren't safer than the manned vehicles, for a couple of interesting reasons. But what they did do was fundamentally change the nature of the work. Woods Hole, which is where I was working at the time, operates, still operates, a vehicle called Alvin, which is a manned submersible that takes 3 people in a 6-foot sphere several miles down to the sea floor. And when you operate Alvin, you have two observers or scientists, and then one pilot. They go down at 8:00 o'clock in the morning; they spend a couple of hours getting to the bottom; spend a few hours exploring the sea floor; and then spend a few hours coming back up. And then they return to the mother ship and tell everybody what they saw. What we had with a vehicle called Jason was different: it descended as a robot, but it had a fiber-optic cable connected all the way up to the ship--so everything you saw on the sea floor was relayed up to the ship in real time, into a large, kind of NASA-like [National Aeronautics and Space Administration] control room. And in that control room you could have 20 or 30 people experiencing the sea floor all together. And these would be scientists from different disciplines and engineers and even people from the media. And the experience of exploration changed quite radically. It was more of a group experience. More of a social experience. And then often you could connect by satellite link to hundreds of people in an auditorium somewhere back in the United States or anywhere in the world. And the whole experience changed rather radically. And those were not necessarily things that we anticipated. Not the traditional kind of automation that replaces the people. What it did do is push the people to a different place, and change the nature of the work that they did. The robots didn't do any science on the ground. They didn't really do any exploration. But what they turned out to be really good at is digitizing the sea floor in incredible precision and resolution. And that allowed the scientists to explore the data from the comfort of a computer workstation, often at a time removed by several months from the time that the vehicle was collecting that data. Russ: Talk about how the scientists felt about that. Because that was a very interesting cultural phenomenon. Guest: Yeah. There was a lot of conflict over it over the course of the 1990s. A lot of scientists felt that remote science wasn't really science, that they really wanted to visit the sea floor. If you didn't actually physically inhabit the place you were studying, it wasn't really the right kind of science.
Which is interesting, because I've dived in Alvin and submarines; and you can see very well out the window, but you don't really feel like you are in the place you are looking at. You are encased in a titanium or steel hull that's protecting you from the elements. So, it's a remote presence of its own kind. But it took a long time for people to accept that using robots to do remote exploration might actually also be exploration. Russ: And that really changed, you suggest, when some of the robots were able to give us a peek at some aspects of underwater events and tell us things we really didn't know much about that were really powerful. Guest: Yeah. There were certainly things that the robots could do that we just couldn't do. I mean, just before I joined this group, they sent a small robot, Jason Junior, down the grand staircase of the Titanic, which was much too dangerous for a human-occupied submersible to go down. They could get very close to hydrothermal vents. They could also stay down for days and days at a time--which vastly expanded the amount of time you get on the sea floor. And so there's a whole set of phenomena that are sort of different. And what we also found was that the move, the progress, was not necessarily toward full autonomy, where robots were just going off on their own. You always tried to stay in touch with the robots as much as you could. Over the course of the 1990s and the 2000s, we did in fact get autonomous underwater vehicles that didn't have the cables connected to them. But you still always wanted to talk to them, even if it was only a couple of bits at a time, to keep an eye on them and let them know what to do. And then they would always come home and bring their data back, and you'd download the data, and again explore the sea floor by exploring the data. And those are the kinds of vehicles that found the Air France 447 wreck. Russ: 13:07 I find it fascinating that--I have to say that I like Tom Hanks as an actor, but Castaway is one of my least favorite movies ever. And one of the things I liked least about it is when he names the basketball 'Wilson.' Or whatever he names--it's not a basketball. Something he finds. It's kind of an ad for the sporting goods company. You know, if I wanted to pretend I had a companion, I'd probably name it something other than 'Wilson.' But I find it interesting that these devices that are not sentient have human names--Alvin and Jason. They have acronyms, also, of course, which you'll know by heart and I don't remember from reading the book. But there's a certain--I don't know--affection there. Is there? What's that like? Guest: Well, yeah, that's-- Russ: Do you think of them in an emotional way? Guest: Uh-- Russ: You talk about when ABE (the Autonomous Benthic Explorer) died--it got an obituary in The New York Times. So talk about that. Guest: Yeah, I think, speaking as an engineer who worked on these systems, I never found them sentient at all. And, you know, in fact they were really quite dumb and inert pieces of technology. And it was always a struggle just to get them to do the simple things you wanted them to do and nothing else. And at the same time, there is a tension in the kind of public conversation about these robots. They were named by their inventors, to be sure. It wasn't something that was added by the press. But there is a kind of break between the way that the people who are most closely connected to them think about them and the way that the stories get told about them.
And that's true with the rovers on Mars, as well. The people who use them even describe them sometimes as robot geologists, even though they don't actually do any geology at all.

15:02 Russ: So, the book deals with different sets of extreme environments: air, space, water, and war. Let's talk about space for a second, because you mentioned the Mars expedition. We have a tremendous--I think most people have a tremendous romance about space travel. But we've had some terrible accidents. People have died. And so there's a natural impulse toward robotics rather than sending people to Mars. The movie The Martian has come out recently; I haven't seen it, but we like the idea of traveling to other places. But, of course, they are extremely hostile. So, my question is: Do you see that continuing--the use of robotics for space exploration? And: How much autonomy is there on Mars? So, talk about the rover--it's not guided in the way that the submersibles are. So, talk about what it's doing that is somewhat autonomous, to the extent that it's autonomous at all. Guest: Well, with the rovers on Mars you have a 20-minute time delay between when data is transmitted, either to or from Mars, and when it's received. And that translates to a 40-minute--more or less an hour--round-trip delay, when you give a command, before you see the results. And practically speaking, for the Mars exploration rovers, that turned out to be sort of a once-a-day cycle--they would upload commands and then get the data back. Even given that, there's still a fairly limited amount of autonomy on the surface. The vehicles are not making much in the way of decisions on their own. They do some basic internal housekeeping--you know, if they lose touch with back home they'll go into certain predictable modes. And at certain times the engineers who drive them from the ground will give them some autonomous features to maybe get around an obstacle in the short term. But for the most part, they are still guided from the ground with a fair amount of control. Even when they are autonomous, it's limited in time. You maybe say, 'Go do this, and think on your own for an hour,' a few hours, or a day or so, but you really want those things always reporting back home and always under the control of their human operators. That's one of the themes of the book: the ways that autonomy can be very useful, but it's always constrained, and it's always wrapped in a human wrapper of sending instructions and receiving feedback or data. Russ: And on the moon--I think you said in all but one, or every one, of the landings they turned off the robotics and did it by hand. Talk about what happened there. Guest: So, on Apollo 11, famously, about 200 feet above the surface, Neil Armstrong reached up and switched off the automatic targeting feature. The computer in the lunar module was perfectly capable of landing in a kind of fully automated, hands-off mode. Armstrong turned off that feature and landed it sort of by hand: he had his hands on the joystick. But it was still a digital, fly-by-wire system: all of his commands were going through software, and the computer was aiding him to a great degree in what he was doing. And then after Apollo 11, all 5 of the following commanders also turned off the automatic targeting at about that point. But they were still heavily dependent on the computer and heavily using these kinds of digital fly-by-wire modes.
And what was really interesting about that story--it's one of the lessons of the book as well--is that the Apollo computer was a very innovative, very cutting-edge digital computer, one of the earliest uses of digital computers in an embedded sense, the way we use them all over the place today. And that highest level of technology did not mean that the landing was fully automated. Actually, the Russian spacecraft of the time were very highly automated, because they had less sophisticated analog computers. The more sophisticated digital Apollo computers were actually used to create this very rich way of working, where the astronauts could turn off the targeting but keep the other digital modes. And that led me to a conclusion, throughout the book, in all these other environments, that the highest form of technology is not full automation or full autonomy. It's automation and autonomy that are very, very beautifully, gracefully linked to the human operator--where the human can call for more automation as the situation demands it, and call for less automation when the situation may not demand it as much. And the sort of perfect balance between the human control and the automatic control--that's really the thing we ought to be shooting for. Not necessarily kind of closing our eyes and falling asleep while our vehicles drive us around. Russ: I don't remember if you told us in the book, but did Armstrong ask permission? Guest: Um, he did not ask permission. He did not have to ask permission. Nobody was all that surprised that he turned that automatic mode off, actually. It was something they had all anticipated. And he had the command authority to do that. Russ: Because of course he's controlling the module. You could argue that Houston's controlling him. But they can't literally control him. I guess they could. They could in theory have some sort of override built into the system that wouldn't allow him to do it--under certain conditions it couldn't be turned off. But he made the decision to turn it off. Why did he do that? You suggest it wasn't ego. He was uneasy. Guest: Yeah. When I first started writing about this, I thought it really was ego. But the more I looked at it, the more I talked to people who have actually done it--landing the lunar module on the moon, landing the space shuttle, even landing current-day airliners, which have auto-land--these operators, who are very highly expert, really believe that if they are more in touch with what the machine is doing, they have a better chance of responding to something if there is a failure or some anomaly at the last moment. And that, again, is being deeply involved in these control loops--still dependent on software, still with all the computer aids, still with all the benefits that algorithms can provide for us--but keeping the person involved is something that greatly enhances the reliability and the safety of the system. There are always cases where the engineers who designed the system didn't foresee what might happen. You know, that's what's wonderful about the world--it always surprises us. And the best person to deal with that surprise is not necessarily a programmer working two years before, but the person whose rear end is on the line, who is physically in the environment, who can see what's going on. Russ: And in Armstrong's case, he was worried about the crater, the geography--not the right word--the topography of where he was about to be put. Right? At least that's what he said. Guest: Yeah.
He could have actually still used the automatic targeting system to get over the crater. David Scott, who was the commander on Apollo 15, really put it well. He said, 'I came all that way, and I felt like I just needed to be involved for those last moments. It was my rear end on the line.' And again, the computer was beautifully programmed to really help the astronauts, even when they had their hands on the stick, in a variety of ways. The lunar module was physically impossible to fly in a purely manual way. It had 16 thrusters, and no human could command all 16 of those things in exactly the right way. So you had to fly it through the computer. Russ: You'd need two octopi to do it. For some reason that reminds me of Guardians of the Galaxy. I'm thinking of, like, you know, an octopus with some kind of genetically modified skills. Let's put that to the side.
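A minimal sketch of the fly-by-wire point: the pilot commands a net motion, and software allocates it across many thrusters. The thruster geometry and numbers below are invented for illustration and have nothing to do with the lunar module's actual control laws; the allocation here is a simple least-squares solve.

```python
import numpy as np

# Toy fly-by-wire control allocation: a pilot commands a net force/torque,
# and software distributes it across 16 thrusters. The effectiveness matrix
# is random and invented -- not the lunar module's real thruster layout.
rng = np.random.default_rng(42)

# Each column maps one thruster's firing level to net (force_x, force_y, torque).
B = rng.uniform(-1.0, 1.0, size=(3, 16))

def allocate(command):
    """Find thruster levels u such that B @ u approximates the commanded effect."""
    u, *_ = np.linalg.lstsq(B, command, rcond=None)  # minimum-norm least squares
    return np.clip(u, 0.0, 1.0)  # real thrusters only push, up to full power

stick_input = np.array([0.4, -0.2, 0.1])   # desired force_x, force_y, torque
print(np.round(allocate(stick_input), 2))  # 16 individual thruster commands
```

No pilot could compute those 16 levels by hand in real time, which is the sense in which the stick input had to pass through the computer.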

22:55 Russ: There is the issue of overconfidence there, and ego. Especially as technology advances, there is this--I don't know--a human hubris that the pilot can do it better. And sometimes I guess that's not true. I assume it's not true, sometimes. And that could be a problem. Guest: Yeah. I think--it's interesting--in the case of the Apollo landings, 6 of 6 landing attempts succeeded. So, it's hard to argue with that case. The space shuttle-- Russ: Small sample-- Guest: also had an automatic landing feature--small sample, right, but that's what you got: it is 100%. Russ: It's the maximum [?] level-- Guest: The space shuttle, too, had an automatic landing system that taxpayers paid a lot of money to get developed and that was never, ever used, although all the space shuttle landings were successful as well. Interestingly enough, if you watch the first Star Wars movie, which came out only 5 years after the end of the Apollo program, the climactic moment in that film is Luke Skywalker flying through the-- Russ: Spoiler coming: if you haven't seen the first Star Wars film, you want to turn this off, because the next episode is coming soon and you might want to watch from the beginning if you missed any. Warning! Warning! But go ahead. Guest: Luke Skywalker flies through the trench of the Death Star, and at the last moment he turns off his automatic targeting computer and trusts the Force instead. And you see that in a lot of movies--Space Cowboys, same thing: I think it's Tommy Lee Jones who turns off the computer and lands the space shuttle manually. That became a kind of narrative trope in science fiction after the Apollo landings. Russ: But it appeals to us deeply. I remember that vividly from the movie; in fact, when you mention it I get goose bumps, and I don't believe in the Force. So, I find it interesting how that taps into our desire--I don't know what you want to call it--our romance about our abilities, or about things that can't be explained. It's kind of like, 'I'm going to close my eyes and shoot: I made the game-winning shot when I closed my eyes because I just relied on my intuition.' There's a part of us to which that's deeply appealing. Guest: Yeah; I'm not really sure it's even-- Russ: Sometimes it's stupid-- Guest: so much about intuition as much as--you know, any automated system is programmed by people. And those programs embody those people's assumptions about the world, their worldviews, their models of who they think their users are. And to claim that the person who thought the problem through, again, years in advance, from the comfort of a cubicle or testing lab somewhere, had imagined every possible scenario and perfectly pictured every possible thing that can happen, is just a false claim. There are, not always but very often, things that can happen in the moment that were not anticipated; and people are very good at handling those kinds of things. Not least of those are the other people involved in the system. Russ: Yeah. I can't decide whether the fact that you are an engineer makes that claim more persuasive or less. If you were a coder I'd be more impressed. But I'm trying to think of your own experiences and background--how biased you are or not against it. Because obviously these systems-- Guest: I've written a lot of code for these systems, I'll put it that way. Russ: Yeah, I remember now.
These systems are really the synthesis of code and engineering skill, and of course, as you say, they can't anticipate--the best engineer and the best coder, no matter how smart they are, can't anticipate every situation, and particularly the interactions that you might have with other people that the machinery or robot can't handle. That's for sure. Guest: Exactly. I think--if you want to talk about a fully automated aircraft that can take off from an airport, fly even through weather, even with an engine failure, and land at another airport--we solved that problem 20 years ago. That's a solved problem. But to try to do that where you take off from an airport that other people are using, where other people are flying through the airspace, where you are flying over the heads of people who might be at risk if you crash, and where you are landing at another airport where other people are--that's a problem we're only barely scratching the surface of. Autonomy embedded in an environment--I call it, in the book, situated autonomy. How do we apply these autonomous systems where they have to live in a world where people are? This is the problem the FAA is dealing with, with drones and other unmanned aircraft. This is the problem of driverless cars. And it's a very rich, interesting problem. But not a simple one. And not one that's amenable to a purely analytical solution. Russ: You remind me of the only time I've flown in a two-seater aircraft. We lifted off after about 5 seconds of taxiing; it was really exhilarating; we were floating through the air; it was a beautiful day. Heading out on maybe a 150- or 200-mile flight. The person who was flying the plane was actually a coder. And as we got up in the air, I said, sort of nonchalantly, 'So, how do we keep from crashing into other planes?' I was looking at the instrument panel, trying to figure out what he was using to avoid a collision. He said, 'Well, you look around.' I thought, 'Okay, I guess I'll pay more attention.'

28:43 Russ: But now, you argue that true autonomy is a myth. And we're living in a time when there's an immense amount of excitement--that we're on the cusp of good and bad kinds of autonomy. Why do you say it's a myth? Why aren't we going to get there? You suggest we are only going to get there--it's an asymptote, it's not a destination. Guest: Well, I didn't say true autonomy is a myth; I said full autonomy is a myth--where there's no human involvement. Because we have yet to build a system that has no human involvement. There's just human involvement displaced in space or displaced in time. Again, the coders embed their worldview and their assumptions into the machine, as does any other kind of designer--every last little bracket or tire on a vehicle has the worldview of the humans who built it embedded into it. For any autonomous system, you can always find the wrapper of human activity around it: sending it out with instructions, coming back with data or the other things the thing has gathered. Otherwise the system isn't useful. So, to begin with, full autonomy is an asymptote in that way. But again, full autonomy, as in the aircraft case--that's the easier case than autonomy in the human world--autonomy situated in and responsive to all the complexity of living with other people around. And I think that's really the ultimate goal we should be working toward. It's very challenging. The Air France crash gives you one of many examples of things that can go wrong in that situation. But we really ought to be thinking about achieving that perfect balance between the human and the autonomy. Because it's going to be there anyway, right? And there are a bunch of stories in the book, including the story of the Predator drone, where it was designed according to this sort of dream of full autonomy, and, at great expense and great difficulty for everyone involved, it ended up having to be embedded in the human world, like every other system. Russ: Yeah. I think we have a lot of--well, I have some misunderstandings about what drones actually do, and I think part of it's the word 'drone,' which makes it sound like it's off on its own looking for things to strike and kill. One way to think about this is: if you imagine building a killing machine and launching it to say, 'If you see Osama bin Laden, take him out,' and then setting it off--that's just not realistic. Not that it couldn't kill a lot of people. But we would be very uncomfortable doing that because of the uncertainty of it. And so you are suggesting--one of the themes of your book is the constraints we put on autonomy, especially when there are risks involved, and danger and safety and human life. Guest: Exactly. If you were to try to program a drone to go find Osama bin Laden, it would come down to a problem of watching people and interpreting their behavior. Russ: Yeah. Guest: And that's what a lot of the Predator and Reaper operators actually do: they spend a lot of time watching a house and seeing what the people there are doing. And it's a very tough problem, one they're not all that well trained for: How do you interpret fuzzy images on video for intent? But people are still better at it than machines are. Because there's a context around it-- Russ: It's a human context-- Guest: and they have [?]--or you may need the New York Times that morning to understand the political situation. And AI (artificial intelligence) has always had trouble with decision-making within a human context.
The Predator and Reaper drones--I tell this story in the book--again, they are really not drones, as you say. The Air Force actually banned the term 'unmanned' to talk about those vehicles, because it takes hundreds of people to actually operate them. And it's actually a problem, because they are so labor-intensive to operate. They are remotely operated; they do do different things than manned aircraft do. They are really interesting, and they raise a lot of interesting challenges, but the last thing they are is inhuman killing machines. Russ: I guess it's like a bullet. A bullet doesn't have a person attached to it the way a knife thrust or a sabre or a spear does. It's the first killing at a distance, and it's obviously directed by a person who aims and fires; and things go wrong. We understand that. Guest: Yeah. And the bullet is actually not a bad example--it's an extreme case in some ways, but it's a good illustration, because the bullet is aimed and pointed by a person; it has a certain amount of autonomy once it leaves the barrel, and the person doesn't have input any more. But it's very short in time, very limited. And when bullets go wrong, it's when they don't go where the person wants them to go. Right? So the autonomy is the failure of the system. When bullets do the thing we want them to do, they do exactly what the person wants. Same thing with the sort of smart-bomb idea. Some of the early thinking on this book goes way back to the first Gulf War in 1991; there were images on TV of smart bombs selectively destroying targets-- Russ: with tremendous precision-- Guest: with tremendous precision. So all the computers and all the lasers and all that technology--those are not the smart bombs. Those are the dumb bombs. Those are the ones that go only where we want them to. The smart bombs are the really scary ones, where you drop one out of the plane and you don't know where it's going to go, either because of the wind or some other failure of the system. That's what you don't want.

34:19 Russ: So, going back to this general question of full autonomy--you just mentioned the New York Times. Yesterday's New York Times had a feature on driverless cars. Here's a short quote: "[F]ull autonomy is on the horizon. Google's self-driving cars have logged more than a million miles on public roads; Elon Musk of Tesla says he'll probably have a driverless passenger car by 2018...." What's your reaction to that? Guest: I don't think that's a realistic vision. I think there are any number of ways you can see there are going to have to be human interventions in driverless cars. There certainly will be all kinds of automated features. Those are good things. They'll potentially improve the safety of driving. But to have a car that you drive down the highway at 80 miles an hour and sleep in the trunk while your kids are strapped in the back seat--I think we're a long way from that. For good reasons-- Russ: Are we a long way? Or no way--it's never going to happen? Guest: Well, I hesitate to say 'never,' but there are 30 or 40 examples in the book of systems that very smart engineers imagined as being fully autonomous and fully unmanned; and as they moved from the research lab into the field, they gradually got human interventions. Just think about it this way: Are you going to get into a driverless car that doesn't have a big, red Stop button for you to stop it in an emergency? What's it going to feel like when you can see things happening out in the world that the car is not recognizing the way you want it to? Russ: But the car is going to be so smart. It's going to be able to tell a squirrel from a toddler who strays off the sidewalk. And it's going to be pre-programmed to run the squirrel over, because I'm more important than the squirrel--but for the toddler, it will consult some ethical treatise in real time on Google and know whether to run the toddler over versus kill me: my age, maybe my contributions to society. In fact, it will sample the toddler's DNA (deoxyribonucleic acid) from a distance, figure out whether he's going to be a criminal or not, and know whether to--these are the kinds of stories we tell ourselves. Guest: Yeah. Ask an owner of a Volkswagen diesel what it's like to feel that the software in your car maybe didn't share the values that you have. And how good are car companies and software companies at being transparent in their decision-making? So, think about it: when you get into a car, you make a tradeoff between a number of different factors. Take risk and performance. Maybe you're late, and you're willing to take a little risk, and you drive a little more recklessly in order to try to get somewhere fast. Russ: Never. Guest: Maybe you pick up your kids at school, and you turn the risk knob down and say, 'I'm going to drive a little more conservatively and be on the safe side.' You make those kinds of decisions every time you get into a car. So does practically every autonomy algorithm. They work by optimizing cost functions: What is the balance between fuel efficiency and performance on this particular trip? And very often those values are in conflict with each other--like performance and fuel efficiency: getting there fast is not the most fuel-efficient. So, I think what you really want to see is systems that are designed so the user has input into those kinds of decisions, where you have the control.
Those decisions are going to get made somewhere. Are they going to be made by a programmer back in a cubicle somewhere, or are they going to be made transparently, in a way that a user can have input into them, so that the car drives according to your values and according to your priorities at any given moment? Russ: Well, I was thinking about your points about autonomy and how things advance, but not as far as we might think. I can drive a stick shift. None of my kids can. And it crosses my mind that maybe their kids won't learn how to drive a car at all. In fact--I have four children; my last child, at least so far, just turned 15. And I wonder: wouldn't it be nice if I could live in that driverless-car world and I wouldn't have to teach him how to drive? So, my dad taught me how to drive a stick shift. It was an unpleasant experience for both of us. Teaching my three other children how to drive has been a challenge. And it would be great to-- Guest: As long as that driverless car works perfectly under all conditions, everywhere, all the time. Right? And there's no question that when you have good bandwidth and you are near a cell tower and the sensors are all working in the highest order and the car was inspected last week and there's no ice on the sensors or bird poop on them, you ought to be able to have access to great features. But you really are going to have to be able to move in and out of those features--maybe you are driving away from high-bandwidth cell links. Maybe you are driving on dirt roads that haven't been mapped. Maybe you are driving in a lot of different circumstances. You need to be able to move in and out of these autonomous modes. And that presents you with the Air France 447 problem. Which is a problem we should be working on, that we can improve on. But it's very hard to imagine a world where you get in and you have no possibility of having any input into the system. Why would you want to throw away that human insight? Russ: Well, I guess--let me rephrase your point. Obviously, if that toddler comes off the sidewalk and the car says, 'I can't handle this. Your turn,' that's the Air France problem in the extreme. That's not going to go very well no matter what; I don't care how prepared I am. There's really no attractive way to deal with that kind of--there's no easy way to think about that handoff. If it's more, 'Gee, it's kind of a foggy day today,' or 'The cell service is mediocre, the tower is mediocre; why don't you drive?'--that's a different level. But I guess when you talk about it, given how poorly we drive now, I'd be willing to take a pretty big tradeoff--I'd be willing to accept some very flawed autonomy rather than letting my 15-year-old drive that car. So, there is a tradeoff there. You are suggesting that that tradeoff will never be attractive enough to justify full autonomy. And I think what Google and Tesla and others, and to some extent Uber, are betting on is that we'll get so close that we'll save so many lives that it will be a huge improvement. Guest: Yeah. You know--there's no evidence yet that we're going to save lives. There may well be. But again, we know a lot about accidents. We know a lot about aviation accidents, and we know a lot about car accidents. And it is indeed true that a high proportion of the lives lost and the accidents in automobiles are caused by human error. But what we know a lot less about is how people drive under normal circumstances.
And people are extremely good at sort of smoothing out the rough edges in these systems: the stop sign maybe is knocked over, or a traffic light isn't working; and people have a way of kind of muddling through those situations. And they do that all the time. Again, back to that figure: of commercial airline flights, 10% of them proceed exactly according to plan. That's probably about true of your car trips as well. So, again, the claim is not that--I think there are a lot of things you can do with autonomous technology that are going to benefit cars, and you'll certainly want to be able to relax on the highway a little bit and let the car drive; and you'll certainly want to take advantage of all the sensors and the AI and the different robotic algorithms and techniques that are being developed in all these different realms. It's just a matter of whether you are going to be sleeping in the trunk or whether you are actually going to have the ability to stay involved in the system--and whether you can think about technology that will keep you engaged in the world and expand your experience outside of the car, rather than push you back into a sort of rarefied cocoon inside the car, with 100% faith in technology that has no possibility of failing.
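To make the cost-function idea concrete, here is a minimal sketch, assuming a toy planner that scores a few candidate trips with a weighted sum; the plans, weights, and numbers are all invented for illustration, not taken from the book or any real vehicle. Mindell's point--that the rider's priorities could be explicit, adjustable inputs--shows up as the two weight profiles.

```python
# Toy version of the tradeoff Mindell describes: an autonomy algorithm
# optimizing a cost function, with the rider's values as explicit weights.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    minutes: float      # estimated trip time
    fuel_liters: float  # estimated fuel burn
    risk: float         # rough hazard score, 0..1

def cost(plan, w_time, w_fuel, w_risk):
    """Weighted sum: lower is better. The weights encode the user's values."""
    return w_time * plan.minutes + w_fuel * plan.fuel_liters + w_risk * 100 * plan.risk

plans = [
    Plan("highway, fast", minutes=22, fuel_liters=3.1, risk=0.30),
    Plan("highway, relaxed", minutes=26, fuel_liters=2.4, risk=0.15),
    Plan("back roads", minutes=34, fuel_liters=2.0, risk=0.10),
]

# "I'm late" profile vs. "kids in the car" profile: same algorithm,
# different values, different choice.
for w_time, w_fuel, w_risk, label in [(2.0, 0.5, 0.5, "late"),
                                      (0.5, 0.5, 3.0, "kids aboard")]:
    best = min(plans, key=lambda p: cost(p, w_time, w_fuel, w_risk))
    print(f"{label}: choose '{best.name}'")
```

Running it picks the fast highway plan for the "late" weights and the back roads for the "kids aboard" weights; the design question in the interview is whether those knobs are exposed to the rider or buried in a programmer's defaults.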

43:10 Russ: We're already somewhat down that road with semi-autonomy: you have collision warnings, you have lane-change warnings. And of course that encourages people to text. Or to talk on their phone, eat--many things people do that are semi-cocoon-like, but not totally, because they are still steering the car. But you suggest that Google has made a mistake--that might not be the right word, but they made a decision, at least in their public statements, that they are moving toward complete autonomy. Whether they get there or not, maybe we should be skeptical. But you are suggesting they should have tried a different model--maybe that's where they'll end up. How might that work? Give us a little vision of that. Guest: Well, I think almost all the car companies are taking a different approach, right? I quote the senior leadership at BMW (Bavarian Motor Works) in my book saying, 'People buy our cars because they like to drive them. It would be crazy to get rid of that part of it.' And people like to be in control in different ways. And the automobile companies, who are much more familiar with what it means to engineer and support and operate a life-critical kind of system out on the road, are all taking a much more cautious approach to it. And I think you'll see it play out in the marketplace--these companies are all in competition. They are going to be vying with each other for the best position. There are certain components of all this. But again, the idea that you'll end up in a car that doesn't have a big red Stop button in it--it's hard to imagine that that's actually going to come to pass, that even regulators would allow that. And once you allow a big red Stop button, then you've got at least the beginnings of a handoff, and you have to begin to engineer that kind of handoff. Again, full autonomy is only going to work in that way, if it will work at all, with all of the perfect conditions, everywhere, all the time. And we know that's not how the world is wired. It's just not fully wired that way yet. That in itself means you'll be driving in and out of various levels and states of autonomy. That's how it should be. Russ: So, one argument would be: 'Well, there will be a red Stop button; there will be some override possibilities; there will be some training necessary, maybe, to get your driver's license still. It will be a little bit different, and of course you'll be greatly aided by all those systems onboard. And maybe that red button gets pushed so rarely that it's just an uninteresting feature.' I think the question is, for those of us who are overly enthusiastic--which would include me--because of things like Google's self-driving cars "have logged more than a million miles" on public roads. In your book you are very critical. Guest: None of it in the winter, by the way. Russ: Yeah. So, in your book, you suggest that a lot of the "evidence" that it's near is exaggerated or misleading. Why? Guest: Well, to begin with, again, that approach is an approach where you have to solve the problem 100% perfectly to do it at all. And that's just generally not been the approach that successful engineering systems have taken. And I don't think the evidence is exaggerated; I haven't seen evidence that driverless cars have saved lives. They've also been driven heretofore--that kind of car--almost exclusively on well-ordered streets in northern California. Living in Boston: just last winter, the 3-D (3-dimensional) topography of the terrain changed by 9 feet overnight.
Because you get three feet of snow, plowed up into 9-foot snow piles. And the directions of the streets change; the very way that people drive was changing rapidly. The Google car still relies on essentially perfect maps in order to make its way through the world, and there's an awful lot of the world that's not perfectly mapped yet. And maybe never will be, because maps are always changing in that way. Again, I think all the autonomous features are great things. I think they are going to come in. I think there's a good chance that some of them will improve the safety of driving; and they may introduce new risks as well. It's just hard for me to imagine that the person whose rear end is on the line, who is physically immersed in the environment and sees--who has situational awareness of what's going on--will never, ever possibly have anything to add to the situation. And we've never seen a system that's worked that way in the field. Russ: But then the question would be, as the general experience--let's say it's 2025, or 10 years from now, when we've just recorded the 1000th episode of EconTalk, which would be really exciting. And we're talking about, say--we're elderly. I don't know how old you are, but I'm 61, so I'll be 71--that's not good. So, let's talk about my parents. They are 85 now. They live in Huntsville, Alabama, and they are taking a drive to Memphis this morning. Which drives me crazy, because they drive themselves. And in 10 years, God willing, they'll be 95 and 93 years old. And they probably won't be able to drive their own cars. So they call on Uber to take them to Memphis. And will that Uber have a person driving them? Will there be a driver, or will they just get picked up by the equivalent of a drone car that will deliver them--I'm not suggesting they will go through the air. Amazon won't deliver them. Which is the other thing we hear is imminent: that we'll have these things flying through the air autonomously. In 10 years, will old people--and non-old people--mostly be able to go from point A to point B without having to--while being able to surf the web and eat and hang out and chat? Or will there be a driver, whether it's them or somebody they've hired? Guest: You've got to ask Uber about that, I guess. Russ: Well, they're hoping. I think that's one of the reasons they are worth so much money. But if you were an engineer for them, do you think that's something you'd strive for? It would seem to be. Guest: I guess I would say: Whatever system they are involved in, they ought to have some ability to intervene if it's not doing what they want it to be doing. Russ: No, I think there will be such a system. I agree with you. But for the most part, most of the time, will we be traveling without any direct human intervention? Just like that plane you mentioned--we've solved that problem, the takeoff and landing in an unoccupied-- Guest: Well, again: the thesis of the book is that we can learn about that future by looking at what people have had to do in extreme environments. And when you have a $150 million airliner with a very highly certified crew and a very highly certified system of maintenance and parts control and documentation and all that, we fly a great deal of those flights under the control of autonomy, and we still feel the need for people to be involved and monitoring, and fairly frequently taking over, when human lives are at stake.
I think the mythology that the book really tries to tease apart is that we are moving from human to remote to autonomous, when actually I think what's happening is that the three modes are all converging. And so you will see cars that have autonomous features; you will see autonomous driving for certain times and certain places, for certain applications. But overall, the driving system will be a mix of human, remote, and autonomous systems.

51:26 Russ: So, in the area of artificial intelligence generally--we've been talking mainly about robotics, but in the area of artificial intelligence generally--do you think machines are getting smarter? Is that a meaningful question? We've had guests on this program who think there's a real possibility, and there are very smart people who worry about this. I'm one of the less smart people who is not as worried, but there are very smart people--Elon Musk, Stephen Hawking, I think Bostrom, on this program--who have suggested that we have to really worry about machines getting so smart they become sentient or autonomous and pursue their own interests. Are you worried about that? Guest: I worry more about them pursuing the interests of the people who design them, however smart they may be. We have yet to build a machine that's not heavily influenced by its designers and the things they built it to do. I think you are much more likely to get killed by a poorly designed robot than by an evil-thinking robot. Russ: Ever? I mean, I agree with you. I'm on your side. What do you think is worrying those folks I mentioned? Why do they think there's--I'm always thinking, well, can't you just unplug it? Why would you code it so that it would be able to do that to you? It would seem to me--it's hard to--there's this worry--and excitement, for some people--that it will just cross this threshold where it will start, you know, automating itself and grabbing people's kidneys and harvesting human beings. It's hard for me to say it without laughing. But they actually lose sleep over it. Smart people do. What are they worried about that we're not worried about? Guest: Well, I think that's legitimate--as you say, they are smart people; they are legitimately worried about it. As an engineer who has built these systems, I always find them frustratingly dumb. Not to say that they will always be that way, but they are still fairly fragile, kind of brittle solutions; and most autonomous systems that we make, when they succeed brilliantly, succeed brilliantly at a particular, well-thought-through, kind of narrow set of things. And it's very difficult for them to move outside of the context for which we've created them. And that's not to say we won't get there one day. But we still have a great deal of trouble building robots to do things beyond what they were built and designed for. Russ: Of course, they do get better. One of the interesting insights related to this question is that people dismiss examples of artificial intelligence once they get achieved, and then say, 'Yeah, but they can't do this.' And you make the point that we get deceived by linearity: that we just assume this kind of progress continues--that we go from, you know, voice recognition, and then say, 'Well, that's just mechanical; they can't do facial recognition.' But they are getting better at that, too. But you think the linearity itself is misleading. Why? Guest: I didn't really say that. I mean, I think there's no question that, you know, we've made progress in a lot of realms; some of it's quite astonishing, and we can do much better at a lot of things than we could 10 or even 5 years ago. And one of the things I talk about in the book is, again, robots working within social environments: How do they understand social relationships?
How can they observe the people going in and out of a building and try to extract from that what those people's intentions are, what their plans are, and whether those behaviors are normal or abnormal? Well, that depends on what you mean by normal or abnormal. I don't see a whole lot of progress in the computer science world at really understanding social relationships. There are a lot of smart people out there who study the social and political worlds, and there's a great deal of knowledge there. I think there's still a lot of bridging to be done between the AI-robotics world and the people who really richly understand human behavior and human relationships. And those things may well be beginning; and when they do begin, I think there's a lot of room there for progress. That's sort of what I argue in the book, again: if we can understand the social relationships between people, and between people and machines, that's the road we want to march down. Again, I think some of the rhetoric around full autonomy shows that the technical community's understanding of the social world is still rather primitive. Russ: Well, the non-tech understanding of the social world is pretty primitive, too. There's nothing to be ashamed of there.