0:33 Intro. [Recording date: December 8, 2014.] Russ: We're going to talk about human intelligence, artificial intelligence, building on a recent talk and article on the subject that he has done; whether we should be worried about artificial intelligence running amok. Gary, welcome to EconTalk. Guest: Thanks very much. I should mention, by the way, that I have a more recent book that's very relevant, which is called The Future of the Brain: Essays by the World's Leading Neuroscientists. Maybe we'll touch on that. Russ: Excellent. We'll put a link up to it. Now, there have been a lot of really smart folks raising the alarm about artificial intelligence, or as it's usually called, AI. They are worried about it taking over the world, forcing humans into second-class status at best, or maybe destroying the human race. Elon Musk and Stephen Hawking have both shown concern. And here at EconTalk I recently spoke with Nick Bostrom about the potential for superintelligence, which is what he calls it, to be an anti-human force that we would lose control of. So, let's start with where we stand now. What are the successes of artificial intelligence? What are its capabilities today, in 2014? Guest: I think we're a long way from superintelligence. People have been working on AI for 50 or 60 years, depending on how you count. And we have some real success stories. Like Google Translate--pretty impressive. You can put in a news story in any language you like and get the translation back in English. And you will at least figure out what the story was about. You probably won't get all the details right. Google Translate doesn't actually understand what it translates. It's parasitic on human translators. It tries to find sentences that are similar in some big database, and it sort of cuts and pastes things together. It's really cool that we have it. It's free. It's an amazing thing. It's a product of artificial intelligence. But it's not truly intelligent.
It can't answer a question about what it reads; it can't take a complicated sentence and translate it into good English. It has problems, even though it does what it does well. It's also typical of the current state of AI, which is that it's kind of an idiot savant: a savant that's mastered this trick of translation without understanding anything deeper. So, Google Translate couldn't play chess. It couldn't ride a bicycle. It just does this one thing well. And that's characteristic of AI. You can think, for example, of chess computers. That's all they do. Watson is really good at playing Jeopardy, but IBM (International Business Machines) hasn't yet really mastered the art of applying it to other problems--working in medicine, for example. Nobody would use Watson as their doctor just yet. So we have a lot of specialist computers that do particular problems. Superintelligence, I think, would at a minimum require things like the ability to confront a new problem and say, 'How do I solve that?'--to read up on Wikipedia and see. Superintelligence ought to be able to figure out how to put a car together, for example. We don't have an AI system that's anywhere near being able to do that. So, there's progress; but we also have to understand that the progress is limited. On some of the deeper questions, we still don't know how to build genuinely intelligent machines. Russ: Now, to be fair to AI and those who work on it--I don't know who made the observation, but it's a thoughtful one--any time we make progress--well, let me back up. People say, 'Well, computers can do this now, but they'll never be able to do xyz.' Then, when they learn to do xyz, they say, 'Well, of course. That's just an easy problem. But they'll never be able to'--say--'understand the question.' So, we've made a lot of progress, right, in a certain dimension. Google Translate is one example.
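Marcus's description of Google Translate--finding similar sentences in a big database of human translations and cutting and pasting the pieces together--can be illustrated with a toy phrase-table lookup. (The table entries and the greedy longest-match strategy below are invented for illustration; production statistical systems of that era learned millions of weighted phrase pairs from parallel corpora.)

```python
# Toy phrase-based translation: look up the longest matching phrase
# from a table of human-translated fragments and stitch the results
# together. No understanding is involved at any point.

PHRASE_TABLE = {  # hypothetical entries, for illustration only
    ("le", "chat"): "the cat",
    ("est",): "is",
    ("sur", "la", "table"): "on the table",
    ("le",): "the",
    ("chat",): "cat",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily try the longest phrase starting at position i.
        for j in range(len(words), i, -1):
            phrase = tuple(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            out.append(words[i])  # unknown word: pass through untranslated
            i += 1
    return " ".join(out)

print(translate("Le chat est sur la table"))  # -> the cat is on the table
```

Because the output is stitched from fragments, anything outside the table passes through untouched--a crude version of why complicated sentences come out mangled.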
Siri is another example. Waze is a really remarkable direction-generating GPS (Global Positioning System) app for helping you drive. They seem sort of smart. But as you point out, they are very narrowly smart. They are not really smart. They are idiot savants. But one view says the glass is half full: we've made a lot of progress, and we should be optimistic about where we'll head in the future. Is it just a matter of time? Guest: Um, I think it probably is a matter of time. It's a question of whether we are talking decades or centuries. Kurzweil has talked about having AI--true artificial intelligence--about 15 years from now. And that's not going to happen. It might happen in a century. It might happen somewhere in between. I don't think that it's in principle an impossible problem. I don't think that anybody in the AI community would argue that we are never going to get there. I think there have been some philosophers who have made that argument, but I don't think they have made it in a compelling way. I do think eventually we will have machines that have the flexibility of human intelligence. Going back to something else that you said, I don't think it's actually the case that the goalposts are shifting as much as you might think. So, it is true that there is this old saying that whatever used to be called AI is just called engineering once we can do it. Russ: Right. Guest: There's some truth in that. But there's also some truth in the fact that the early days of AI promised things that we still haven't achieved. Like, there was a famous summer project to understand vision. Well, computers still don't do vision. And that was 50-some years ago. Computers can only do vision in limited ways--like, my camera does face recognition, and that's helpful for its autofocus. Russ: Amazing. Guest: And you know, that's pretty cool.
But there's no digital camera you can point at the world and say, 'Watch what's going on and explain it to me.' There is actually a program that Google just released that does a little bit of that. But if you read the fine print, they don't give you any accuracy data. And there are some really weird results there--like, if a 2-year-old made errors like that, you would bring them to a doctor and say, 'Is there some kind of brain damage here? Why is my 2-year-old doing this?'

6:32 Russ: So, we talked here in a recent episode--and you have talked about it--about the cat-recognition program that Google has. Not so good. Guest: So, the cat recognizer was the biggest neural network ever constructed to date. It was on the front page of the New York Times about 2 years ago. Turns out that nobody is actually using it any more. The Times got very excited about something that was sort of a demo, but not really that rich. What it really would do is recognize cat faces of a particular sort. It wouldn't even recognize a line drawing of a cat face. It would just cluster together a bunch of similar stimuli. Well, I have a 2-year-old; that's not what he does with cats. He doesn't just recognize this particular view of a cat. He can recognize many different views of cats. He can recognize drawings of cats; he can recognize cartoons of cats. We don't know how to build an AI system that can do that. Russ: So, what would Ray Kurzweil say in response? You know, he's an optimist--in many dimensions; we'll talk about some of the other ones as well. But he says it's "fifteen years away." Besides the fact that it makes it more fun to listen to him when he says that, what does he have in mind? Does he have something in mind? Guest: He's always talking about this exponential law. He's talking about Moore's Law. So, he's saying, 'Look at how much cheaper transistors have gotten, how many more we can pack in, how much faster computers have gotten.' And there is an acceleration there. He calls it the Law of Accelerating Returns, or something like that. And that's true for some things. But it's not for others. So, for a strong artificial intelligence, which is what we are really talking about--where you have a system that really is as flexible and clever as a person--you look over the last 50 years and you don't really see exponential growth. So, like, we have this chat bot called Siri.
Back in the 1960s--before I was born, so it's a funny use of the word 'we'--the field had ELIZA, which pretended to be a psychiatrist. And some people were fooled. Russ: And some people presumably got comfort from it. Guest: And some people presumably got comfort from it. But it didn't really understand what it was talking about. It was really kind of a parlor trick. And if you talked to it for long enough, you would realize that. Now we have Eugene Goostman, which does a little bit better--the one that supposedly passed the Turing test this year. But it did that by pretending to be a 13-year-old Russian boy who didn't know our culture and our language; it was basically a big evasion, as ELIZA was. It's not really any smarter. Siri is a little bit smarter than ELIZA, because it can tell you about the movies and maybe the weather and so forth. But I wouldn't say that Siri is an exponential improvement on what came before. I would say it's 50 years of incremental engineering--not anything like exponential improvement. I think Kurzweil conflates the exponential improvement in hardware--which is undeniable--with software, where certain things have gotten exponentially better. But on the hard problem of intelligence--of really understanding the world, being able to flexibly interpret it and act on it--we haven't made exponential progress. I mean, linear progress; and not even a lot of that.
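The 'parlor trick' behind ELIZA was essentially keyword-triggered templates that reflect the user's own words back. The three rules below are invented for illustration--Weizenbaum's 1966 script was larger--but the mechanism is the same, and the ungrammatical second reply shows exactly the lack of understanding Marcus describes:

```python
import re

# ELIZA-style responder: scan for a keyword pattern, paste the
# captured words into a canned template. No model of meaning at all.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # content-free prompt when nothing matches

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am unhappy"))         # -> Why do you say you are unhappy?
print(respond("My mother scares me"))  # -> Tell me more about your mother scares me.
```

Talk to it long enough and the seams show: it blindly echoes whole clauses, which is why extended conversation unmasks it.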

9:34 Russ: So, let me raise an unattractive thought here. And I'll lump in myself, or at least my profession, to try to soften the ugliness of it. Isn't it possible that people who are involved in AI, who of course are the experts, are a little more optimistic about both the potential for progress and its impact on our lives? And maybe they ought to be, because they are self-interested. I think about economists-- Guest: Whoa. I should say that I am involved. I actually started a very quiet startup company. I would like to see AI enhanced, from a personal perspective. I write in the AI journals; I just had a paper accepted yesterday in Communications of the ACM, which is one of the big journals; I have another one coming out in AI Magazine. So, I mean, I am part of the field now. I have kind of converted over from cognitive science to artificial intelligence in some ways. Russ: Well, that's okay. You're allowed to be self-reflective about your field. Guest: And I look around in the field, and a lot of people are really excited. And there are a lot of people who aren't. So, I'm running a workshop in Austin--co-running, I should say--about sequels to the Turing test. This is coming up in January. My co-organizers and I just did an interview, and we talked about why we did this. We are trying to build a sequel to the Turing test. And we all have this sense that the field has gotten really good at building trees, but the forest isn't there yet. And I don't think you'll actually find that many people in the field who will disagree. Russ: No, I know; but in terms of the--and by the way, explain what the Turing Test is, for those who don't know. And we'll come back to it. Guest: The Turing Test is this famous test that Alan Turing devised to say whether a computer was intelligent. And he did it in the days of B. F. Skinner and behaviorism, and so forth. And we wouldn't do it the way he did it.
But he said, let's operationally define intelligence as: let's see if you, a machine, can fool me into thinking you are actually a person. And I don't think it's actually that meaningful a test. So, if we don't have that long a conversation, I can make a computer that kind of pretends--to not be very smart, or not very sophisticated, or to be very paranoid, and so forth--and so evades the questions. That's what this program Eugene Goostman did. All that's really showing is how easy it is to fool a person; it's not actually a true measure of intelligence. It was a nice try, but it was 60 years ago, before people really had computers, and somehow it has become this divine test. But it doesn't keep up with the times--which is the point of this session that Manuela Veloso, Francesca Rossi, and I are running at AAAI ('Triple-A-I'), the big artificial intelligence society.

12:26 Russ: Let me come back to this question of bias. What I was going to say is, I think if you asked most economists how well we understand the business cycle--say, booms and busts, recessions, recoveries, depressions--they'd say, well, we have a pretty good understanding, and it's just a matter of time before we really master it. And I have a different perspective. I don't think it's just a matter of time. So I accept your point that there are certain people in AI who think we haven't gotten very far. But it seems to me that there are a lot of people in AI who think it's only a matter of time, and that the consequences are going to be enormous. They're not going to just be a marginal improvement or a marginal challenge. They "threaten the human race." Guest: Before we get to those consequences, which I actually do think are important, I'll just say that there's this very interesting study by a place called MIRI (Machine Intelligence Research Institute) in Berkeley. What they did is trace people's predictions of how far away AI is. And the first thing to know is that the central prediction--I believe it was the modal prediction, close to the median prediction--was 20 years away. But what's really interesting is that they then went back and divided the data by year, and it turns out that people have always been saying it's 20 years away. They were saying it was 20 years away in 1955, and they're saying it now. So people always think it's just around the corner. The joke in the field is that if you say it's 20 years away, you can get a grant to do it. If you said it was 5 years away, you'd have to deliver it; and if 100 years, nobody's going to talk to you. Russ: Yeah. Twenty is perfect. Let's go back to your point about the progress not being as exponential as, say, the hardware--as people might have hoped. You said it's been linear at best, maybe not even that.
It seems to me that we've made very little progress on the qualitative aspect and a lot of progress on the quantitative aspect--which is what you'd expect. Right? You'd expect there to be a chess-playing program that can move more quickly, look at more moves, etc. A driverless car is a little bit more sophisticated, it seems to me: it requires maybe a different kind of processing in real time. Guest: Actually, driverless cars are really interesting, because you could do it in different ways. Same with chess. You could imagine playing chess like people do. The Grand Masters only look at a few positions. It's really interesting that they're able to do that. Nobody knows how to program a machine to do that. Instead, chess was solved in a different way: through brute force, through looking at lots of positions really fast, with some clever tricks about deciding which positions to explore--but looking at billions of positions rather than dozens. It turns out that in driving you can also imagine a couple of ways to do it. One would be, you teach a machine to have, say, values about what a car is worth and what a person is worth, and you give it a 3-dimensional understanding of the geometry of the world, and all of these kinds of things. In a way, what Google is actually doing comes closer to brute force: an enormous amount of data, a lot of hand-coded cases--although I'm not exactly sure how they're doing it. And they rely on incredibly detailed road maps--much more detailed than the regular maps that you rely on. They rely on things down to a much finer degree--I don't know if it's by the inch or something like that; I don't have the exact data, which they don't share very freely. But from what I understand, the car can drive around in the Bay Area because they have very detailed maps there. They wouldn't be able to drive in New York, because they don't have the same maps.
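The 'brute force' route Marcus describes--examining huge numbers of positions rather than the handful a Grand Master considers--is, at its core, exhaustive game-tree search. A sketch on a trivially small game (one-pile Nim) shows the recursion; chess engines run the same idea but cut off at a fixed depth, score leaves with a heuristic evaluation function, and prune with alpha-beta:

```python
# Exhaustive game-tree search for one-pile Nim: each player takes
# 1-3 stones; whoever takes the last stone wins. Returns +1 if the
# side to move can force a win, -1 if not. This is the skeleton of
# what chess programs do over billions of positions.
def minimax(stones: int) -> int:
    if stones == 0:
        return -1  # previous player took the last stone; side to move has lost
    best = -1
    for take in (1, 2, 3):
        if take <= stones:
            best = max(best, -minimax(stones - take))  # opponent's value, negated
    return best

# Positions that are multiples of 4 are forced losses for the side to move.
print([n for n in range(1, 13) if minimax(n) == -1])  # -> [4, 8, 12]
```

Note that nothing here resembles intuition: the program derives the 'multiple of 4' pattern by checking every line of play, not by recognizing it.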
And so they are relying on this specialist data rather than a general understanding of what it is to drive and interact with other objects and so forth. Russ: Yeah; I think it was David Autor who was talking about it here on EconTalk. He said it's more like a train on tracks than it is like the way a person drives. Guest: Yeah; it's a very good analogy. Russ: And--so let's talk about that non-brute force strategy for a little bit. I think a lot of people believe that it's just a matter of time before we understand the chemistry and biology and physics of the brain, and we'll be able to replicate that in a box, and make a really good one--or a really big one--so that it would look at a dozen moves in a chess game and just go, 'Oh, yeah.' It would have what we call intuition. What are your thoughts on that? Guest: Well, my new book, The Future of the Brain, which is an edited book with a lot of contributors, not just me, is partly on that question. And there are several things I would say, kind of bringing together what everybody there has written. The first is: nobody thinks that we are that close to that. So, people are trying to figure out how to look, for example, at one cubic millimeter of cortex and figure out what's going on there. And people will be thrilled if we can do that in the next decade. Not that many people think we'll really get that far. So there's a lot of question about how long it will take in order to have, say, a complete wiring diagram. And where we are now is we have some idea about how to make a wiring diagram where we don't actually know what the units are. So, imagine you have a diagram for a radio but I've obscured what's a resistor, what's a transistor, and so forth. You just know something goes here. Well, that's not going to tell you very much. People are aware of the problem. So, part of the Brain Initiative is sponsoring programs to figure out what kinds of neurons do we have. 
How many different kinds of neurons are there in the brain? We don't even know that yet. A lot of people think it's like 800 or 1,000. We don't know what they're all there for, or why there are so many different ones. We know there's an enormous amount of diversity in the brain, but we don't have at all a handle on what the diversity is about. So, that's one issue: when will we actually have enough data? And a sub-question there is: Can we ever get it from the living human brain? So, we can cut up mammals' brains and most people won't get too upset about it. But nobody's going to cut up their living relatives in order to figure out how the brain works. Russ: They might want to. It's gauche. Guest: They might want to. Most people are going to draw the line there. So, there are actually interesting things you can do. Like, you can take some brain tissue from people with epilepsy, where they have to remove part of the brain. And you don't want to cut too little out, because then you leave things in; it's sort of like removing a tumor: it's a delicate balance. So you get some extra brain tissue from living human brains that you can look at. It's not that we have zero data. But it's pretty difficult to get the data that we need. And then, if you have it in a dish, it's not the same thing as having it in the live brain. So, it's not clear when we are going to get the data that we would need to do a complete simulation of the human brain. I'm willing to go on record as betting that that won't happen in the next decade, and maybe not the next 2 decades. Then you have the question of how you put it all together in a simulation. And there are people working on that question; it's a very interesting question, but it's a pretty hard one.
And even if people figure out what they need to do--which requires figuring out what level of analysis to use, which is something your economics audience would understand: like, do you want to model things at the level of the individual or the corporation; what's my sampling unit? Well, that comes up in the brain. Do I want to model things at the level of the individual neuron, the individual synapse, or the molecules within? It makes a difference how fine-grained the simulation is. In the worst case, we might need to go down to the level of the molecule. The chance that the brain simulation will run in real time is basically zero. Russ: Why is that? Guest: The computational complexity gets so vast. You can think about, like, the weather right now. People know how to build simulations of the weather where you take a lot of detailed information and you predict where we're going to be in the next hour. That works pretty well. Right? Predicting weather in the next hour is great. Predicting it in the next day is okay. Predicting it two weeks from now--forget about it. Russ: But we're pretty good at--November through January is going to be colder than--right? Guest: Yeah, you get some broad trends; I can give you some broad trends without doing a detailed simulation of the brain. Like, I can tell you that if I offer somebody a choice between $1000 and $5, they are going to take the thousand dollars. I don't need to do the brain simulation to get that right. But if I really want to predict your detailed behavioral patterns, then to do that at any length of time beyond a few seconds is probably going to be really difficult. It's going to be very computationally expensive. And if there are minor errors, as there may well be, then you may wind up in totally the wrong place. We all know about the famous butterfly flapping its wings in Cincinnati and changing the weather somewhere else. There really are effects like that in the brain.
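The butterfly-style sensitivity Marcus invokes can be seen in a two-line simulation. The logistic map below is a standard toy chaotic system (not a brain model): two starting points that differ by one part in a billion agree at first, then diverge completely, which is why minor errors in a detailed simulation can land you in totally the wrong place.

```python
# Sensitive dependence on initial conditions, in miniature:
# iterate the logistic map (chaotic at r = 4) from two starting
# points that differ by one part in a billion and watch the
# trajectories stop resembling each other after a few dozen steps.
def trajectory(x0: float, steps: int, r: float = 4.0) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = 0.2, 0.2 + 1e-9
for steps in (5, 20, 50):
    print(steps, abs(trajectory(a, steps) - trajectory(b, steps)))
```

At 5 steps the difference is still microscopic; by 50 steps the two runs are unrelated, even though both stay within the same broad bounds--the simulation analogue of getting seasonal trends right while two-week forecasts fail.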
It's just not clear that any time soon there is really going to be a way of building it in AI. And then the third objection I have to that whole approach is, we're not trying to build replications of human beings. I love Peter Norvig's line on that. Peter Norvig is Director of Research at Google. He says, 'Look, I already built two of those'--meaning his kids. Russ: Yeah. He's good at that. We know how to do that pretty well. Guest: The real question is how do we build a machine that's actually smarter and doesn't inherit our limitations. Another book I wrote is called Kluge which is about the limitations of the human mind. So, for example, our memories are pretty lousy. Nobody wants to build a machine that has lousy memory. Why would you do that? If all you could do is emulate every detail of the brain without understanding it, that's what you'd wind up with--a computer that's just as bad at remembering where it put its car keys as my brain is. That's not what we want. We really have to understand the brain to simulate it. And that's a pretty hard problem.

22:31 Russ: Given all of that, why are people so obsessed right now--this week, almost, it feels like--with the threat of super AI, or real AI, or whatever you want to call it--the Musk, Hawking, Bostrom worries? We haven't made much progress. We're not anywhere close to understanding how the brain actually works. We are not close to creating a machine that can think, that can learn, that can improve itself--which is what everybody's worried about or excited about, depending on their perspective; and we'll talk about that in a minute. But why do you think there's this sudden uptick, this spike, in focus on the potential and threat of it right now? Guest: Well, I don't have a full explanation for why people are worried now. I actually think we should be worried. I don't understand exactly why there was such a shift in the public view. So, I wanted to write about this for The New Yorker a couple of years ago, and my editor thought: Don't write this. You have this reputation as a sober scientist who understands where things are. This is going to sound like science fiction. It will not be good for your reputation. And I said, 'Well, I think it's really important and I'd like to write about it anyway.' We had some back and forth, and I was able to write some about it--not as much as I wanted. And now, yeah, everybody is talking about it. I don't know if it's because Bostrom's book came out, or because there's been a bunch of hype--AI stories that make AI seem closer than it is--so it's more salient to people. I'm not actually sure what the explanation is. All that said, here's why I think we should still be worried about it. If you talk to people in the field, I think they'll actually agree with me that nothing too exciting is going to happen in the next decade. There will be progress and so forth, and we're all looking forward to the progress. But nobody thinks that 10 years from now we're going to have a machine like HAL in 2001.
However, nobody really knows, downstream, how to control the machines. The more autonomy that machines have, the more dangerous they are. If I have an Angry Birds app on my phone and I'm not hooked up to the Internet, the worst that's going to happen if there's some coding error is maybe the phone crashes. Not a big deal. But if I hook up a program to the stock market, it might lose me a couple hundred million dollars very quickly--if I had enough invested in the market, which I don't. But some company did in fact lose a hundred million dollars in a few minutes a couple of years ago, because a program with a bug that is hooked up and empowered can do a lot of harm. I mean, in that case it's only economic harm; and maybe the company went out of business--I forget. But nobody died. But then you raise things another level: if machines can control the trains--which they can--and so forth, then machines, whether deliberately or unintentionally--or maybe we don't even want to talk about intentions--can cause real damage. And I think it's a reasonable expectation that machines will be assigned more and more control over things. And they will be able to do more and more sophisticated things over time. And right now, we don't even have a theory about how to regulate that. Right now, anybody can build any kind of computer program they want. There's very little regulation. There's some, but very little. It's, in little ways, like the Wild West. And nobody has a theory about what would be better. So, what worries me is that there is at least potential risk. I'm not sure it's as bad as, like, Hawking said. Hawking seemed to think it's like night follows day: they are going to get smarter than us; they're not going to have any room for us; bye-bye humanity. And I don't think it's as simple as that. That the world will eventually have machines that are smarter than us--I take that for granted.
But that they might not care about us, that they might not wish to do us harm--you know, computers have gotten smarter and smarter, but they haven't shown any interest in our property, for example, or our health, or whatever. So far, computers have been indifferent to us. Russ: Well, I think they have no intention other than what we put in them. And I think the parallel worry, with the idea that some day we are going to cross this boundary from these idiot savants into a thinking machine, is: 'Well, then, if they are thinking, they must have intention. They must have consciousness.' I think that's the worry. I just don't know if that's a legitimate worry. I'm skeptical. I'm not against it; I don't think it's wrong. It's just not obvious. Guest: It's not obvious that consciousness comes with being smarter. That's the first thing I would say. And the second thing I would say is, it's not obvious that even if they make a transition to being a lot smarter--whatever that means--they will care about our concerns then, either. But, at the same time, it's not obvious that they won't. I haven't seen somebody prove that they won't, or show me a regulation that will guarantee our safety. Russ: Yeah, that's a whole separate issue, when you think about--okay, let's take it seriously: what are we possibly going to do? I can't imagine what we might do to protect 'ourselves'--humans--from these machines. Other than unplugging them; which, you know, Bostrom, I think, overstates, but he suggests it might not be possible to unplug them. They'll just take charge of our brains and fool us and manipulate us, and--the next thing you know, we're gone. I don't find that plausible. It's interesting. Maybe we should worry about it. But given that we can't imagine what their skillset is going to be, it's hard to know what we might do to prevent it from happening. Guest: I mean, at some level I agree with you.
I think there's a difference between you and me not being able to imagine it, sitting here on this phone call, and society investing a little bit of money in academic programs to think about these things. And maybe with enough intense interest we might come up with something. I'll give you an example. There is a field in AI--let's say in computer science--called program verification, in which you try to make sure a program actually does what it's supposed to do. Which most people, most of the time, don't do. Most of the time, they release something; there are bugs; they fix the bugs. And in some domains that's okay. In a car, it's not really okay. And the stronger, more powerful a machine gets, the less okay it is to just say, 'Oh, we'll try that; we'll see if there are bugs and we'll fix them.' You would actually like a science of how you assure yourself that the machine is going to do what you want it to do. And there is such a field. It's not, I think, up to the job so far. But you could think about how you grow a field like that so that it might help us. So, there are academic avenues you can consider. And there are legal avenues, too. Do we need to have people think more about what the penalties are? How serious a crime is it? Most people think that software violations, unless they involve something like embezzlement, are not that serious. But maybe there should be some class of software violations that should be treated with much more severe penalties. Russ: Well, an air traffic control system that went awry, or ran amok, would be horrifying. Obviously, the driverless car that swerves off the road into a crowd. These are obviously bad things. Right now we have something of a legal system to deal with it; but you are right: it would probably have to be fashioned somewhat differently. But when you talk about that kind of regulation, it reminds me a little bit of the FDA (Food and Drug Administration). Right?
The FDA is designed to try to make sure that the human-created intelligence in pharmaceuticals is "safe." I think it's been a very bad way to do that. I'm not sure we want to go down that road for computer programs. Presumably we'd need a computer program that would measure whether they are safe or not. And of course, that's impossible, in my opinion--because there's no such thing as 'safe'; it inevitably involves judgment. Guest: Yeah. I mean, I think there are steps one could take; but I don't think they add up to something that makes me feel totally confident. So, that's why I still worry, even though I don't think the problem is an immediate one. I guess the other thing that Bostrom and others have talked about is that the problem could come more quickly than we think. I mean, I wouldn't want the whole species to bet on my particular pessimism about the field. I could be wrong. Russ: That's a good point. Guest: There could be a lot of arguments for why I think, you know, in the next decade not that much is going to happen. But maybe someone will come up with some clever idea that nobody really considered before, and it will come quickly. Russ: And all of our appliances will conspire while we are asleep to take over the house. Right? That's the worry, right? And we won't even know about it. They'll have extracted our organs and sold them on markets before we can even wake up. Guest: Well, you make it sound ridiculous. But-- Russ: I'm trying. Guest: But 20 years from now, the Internet of Things will be pervasive. People will be habituated to it, just as they are habituated to the complete lack of privacy that they have on Facebook. And they'll be used to the fact that all of their devices are on the web. And people will create--what do they call it--ransomware is the word: something that says, 'I'm going to erase your hard drive unless you send me some PayPal money.'
Now multiply that by the Internet of Things. Russ: Yeah. I'd say that's more worrisome than--I'd say, as some listeners have pointed out in response to the Bostrom episode, that's a little more frightening than HAL run amok. Guest: I think in the short to medium term, it is.
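The program-verification idea raised earlier in this exchange can be glimpsed in miniature. The sketch below is ours, not from any real verification tool: the `clamp_speed` function and its toy "car controller" specification are invented for illustration. It states a requirement as an explicit postcondition and checks it exhaustively over a small input set; real verification tools (model checkers, proof assistants) establish such properties for all inputs, not just a tested handful.

```python
# A tiny illustration of "verifying a program does what it's supposed to do."
# The specification is written as an explicit, checkable predicate, and here
# it is checked exhaustively over a small discretized domain. (Real tools
# prove the property symbolically for *all* inputs.)

def clamp_speed(requested: float, limit: float) -> float:
    """Controller helper: never command a speed above `limit` or below 0."""
    return max(0.0, min(requested, limit))

def spec_holds(requested: float, limit: float) -> bool:
    """Postcondition: the result lies in [0, limit], and equals the request
    whenever the request was already legal."""
    out = clamp_speed(requested, limit)
    within_bounds = 0.0 <= out <= limit
    faithful = (out == requested) if 0.0 <= requested <= limit else True
    return within_bounds and faithful

# Exhaustive check over a small grid of inputs -- a stand-in for a proof.
violations = [
    (r, l)
    for l in [0.0, 30.0, 65.0]
    for r in [-10.0, 0.0, 20.0, 65.0, 120.0]
    if not spec_holds(r, l)
]
print(violations)  # an empty list means the spec held on every tested input
```

The point of the exercise is the division of labor: the spec says *what* must be true, separately from the code that tries to make it true, which is exactly the discipline the field of program verification formalizes.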

32:22 Russ: Let's go back to some of the technical side of things. And you speculated about this in a recent talk you gave; and we'll post that on the episode's web page. Why haven't we made more progress? As you say, we've made a lot of progress in certain areas. Why have some of the optimists been disappointed? Where do you think AI has gone wrong? Guest: Well, I think in the early days people simply didn't realize how hard the problem was. I mean, people really thought that you could solve vision in a summer. There was a grant for it; there was a proposal; they said, this is what we are going to do. And people just didn't understand the complexity, I think. First and foremost, the way in which top-down knowledge about how the world works interfaces with bottom-up knowledge about, like, what the pixels look like--if you have pixels in a row, is there a line there in the image? And we're pretty good now, 50 years later, at the bottom-up stuff: do these patterns of dots look like a number '6' or a number '7'? We've trained on a lot of examples; we can get a machine to do that automatically. But the top-down stuff that we really need to understand the world--nobody's got a solution yet. I think it's partly because you need to do a lot of hard work to get that right. It's possible to build relatively simple algorithms that do the bottom-up stuff. And right now the commercial field of AI is dominated by approaches like that, where you use Big Data and you get things partly right. Nobody cares if your recommendation engine is only 70% correct. If it told you you'd like a book by Gary Marcus, and you don't, well, it's not the end of the world. But there are domains where you need to get things right. Driving is one of them; maybe you can do that by brute force and maybe you can't. Google hasn't quite proven yet that you can. If you wanted a robot in your home, then the standard needs to be very high.
It's not enough to be sort of 70% correct using a statistical technique. The 70% correct statistical technique gives you the translation that gives you the gist. Nobody would use Google Translate on a legal contract, though, because the gist wouldn't be good enough. And similarly, you wouldn't want a robot that is right most of the time. Right? Because if it's wrong a little bit, it puts your cat in your dishwasher, and it's bad. And so-- Russ: Steers you down a one-way street the wrong way. Guest: Right--there is a higher standard for what is required, but nobody knows how to do that yet. So, people are kind of focusing on where the street lights are. The street lights are how to make money off Big Data. And that's kind of where the field is focused right now. And understandably so. There's money to be made. But that's not getting us to the deeper level there. Russ: And in your talk, I think you made a very perceptive point that what Big Data is really about is: this thing is related to this other thing. And that's not what we really want. Guest: I mean, it's mostly doing statistical analysis, correlational analysis. And correlation can only get you so far. And usually correlations are out there in the world because there are causal principles that make them true. But if you only pick up on the correlation rather than the causal principle, then you are wrong in the cases where maybe there is another principle that applies, or something like that. And so, statistical correlations are good guides, but they are not great guides. And yet that's kind of where most of the work is right now. Russ: Well, that's where we're at in economics. That's where we're at in epidemiology. That's where we're at to some extent with analyzing climate. These are all complex systems where we don't fully understand how things connect, and we hope that the things we've measured are enough. And I think they often aren't.
So, I'm more of a pessimist about the potential of Big Data. Guest: I had that piece in The New York Times called "Eight (No, Nine!) Problems With Big Data," which expressed exactly that view. The graphic that you're talking about actually came from something that the Times's freelance artist did for that Op-Ed. And we went through all the kinds of problems that you get with Big Data--maybe you can put that one in the show notes. Ultimately they are variations on the theme of correlation and causation. And there are some more sophisticated cases. But if all you are relying on is the Big Data and you don't have a deeper conceptual understanding of the problem, things can go wrong at any minute. A famous example now is Google Flu Trends, which worked very well for a while. Russ: Google what? Guest: Flu Trends. Like, do you have the flu? Russ: Okay. Guest: And what it did was it looked at the search trends, the searches people were doing. And for a while, they were pretty well correlated. More searches for these words meant more people had the flu. And then it stopped working. And nobody really quite knew why. Because it was just correlational data, it was a guide, but a fallible guide. There were all these papers written when it first came out about how it was much better than the CDC (Centers for Disease Control); it was much faster than the data that the CDC was collecting, and so forth. And it is faster. It's immediate. But that doesn't make it right. Russ: Yeah. It's interesting.
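This failure mode has a simple mechanical core that can be demonstrated in a few lines. The toy model below uses invented numbers and is nothing like Google's actual method: it fits a purely correlational predictor of flu cases from search counts while searches are caused by illness, then shows the same model breaking when the reason people search changes (say, a media scare).

```python
# Toy illustration of why a purely correlational predictor can break.
# Regime 1: people search *because* they are sick -> searches track cases.
# Regime 2: a media scare drives searches with no change in actual illness.
# (All numbers are invented for illustration.)

def fit_ratio(searches, cases):
    """Fit the one-parameter model cases ~ k * searches by least squares."""
    return sum(s * c for s, c in zip(searches, cases)) / sum(s * s for s in searches)

# Regime 1 (training): searches are caused by illness, 4 searches per case.
cases_train    = [100, 200, 300, 400]
searches_train = [c * 4 for c in cases_train]
k = fit_ratio(searches_train, cases_train)
print(k)            # 0.25 -- recovers the true ratio

# The model predicts well while the causal regime holds...
print(k * 1200)     # 300.0 cases predicted from 1200 searches: correct

# Regime 2 (deployment): media coverage triples searches; illness unchanged.
searches_scare = 300 * 4 * 3
print(k * searches_scare)   # 900.0 predicted -- the truth is still 300
```

Nothing in the fitted model "knows" why people search, so it has no way to notice that the correlation it learned has stopped being true.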

37:45 Russ: What's the upside? Let's not be so worried for the moment about, say, my coffee maker, which I can program, taking out my internal organs while I'm sleeping. Let's talk about something a little cheerier. I've been surprised--maybe I don't read enough, obviously--but when they talk about the potential for AI, they use words like 'energy', 'medicine', and 'science'. And I'm curious--these are all things we all care about; they are really important. When I go to the doctor, people are using AI to interpret x-rays; that's a good thing. Sometimes, and maybe a lot of the time--I was talking to Daphne Koller about this--maybe they are better than humans. Great. That's an improvement. What we really want, though, is a cure for cancer, ideally. We want "free energy"; we want a battery that lasts more than a day--these are the things that are going to change the texture and quality of life. Are they in reach if we made enough progress? Guest: I think so. I mean, we were talking a minute ago, I guess, about epidemiology and things like that. I think that a lot of biological problems--we'll start with biology--are very, very complex in a way that an individual human brain probably can't fathom. So, think about the number of molecules. There are hundreds of thousands of different molecules in the body. And the interactions of them matter. You can think of it like a play with a hundred thousand different actors. Right? Your brain just can't handle that. People write plays--who was the guy?--Robert Altman would make movies with like 30 characters, and your brain would hurt trying to follow them. Well, biology is hundreds of thousands of characters. And really, it's like hundreds of thousands of tribes. Because each of those molecules has many, many copies, slightly different from one another. It might be that no human brain can ever really grok that, can ever really interpret all those interactions. And a really smart machine might be able to.
Right now, machines aren't that smart. They can keep track of all those characters but they don't really understand the relations between them. But imagine a machine that really understood, say, how a container works. How a blood vessel works. How molecules are transported through. Really had the conceptual apparatus that a good scientist has; but the computational apparatus that the machine has. Well, that could be pretty exciting. That could really fundamentally change medicine. So, and that's part of why I keep doing this, despite the worries: I do think on balance that probably it's going to be good for us rather than bad. I think it's like a lot of other technologies: there's some risks and there's some rewards. I think the rewards are in these big scientific problems and big engineering problems that individual brains can't quite handle. Russ: That's a little bit mesmerizing and fascinating. I should tell our followers--I wrote a followup to the Bostrom episode that is up at econtalk.org; you are welcome to go check it out. There were some interesting moments in that conversation. But one of the things I raise in that followup is related to your point about lots of molecules being analogous to lots of characters in a play. Which is--one analogy I think about is history. So, we don't have a theory of history. We don't pretend to understand the "real" cause of WWI or the American Civil War. We understand it's a messy, unscientific enterprise. People have different stories to tell; they have evidence for their stories. But we don't pretend that we're going to ever discover the real source of the Second World War, the First World War, the Civil War. Or why one side won rather than the other. We have speculation. But the real problem is what you just said--there's a hundred thousand players. Sometimes it's just 10--Kaiser Wilhelm and Lloyd George and Clemenceau and the Czar, and Woodrow Wilson--and that's already too hard for us. 
We can't--we don't have enough data; we don't have enough evidence; there's too much going on. And again, I think of economics as being like that. There are many people who disagree. But I think these are in many ways, possibly, fundamentally insoluble. Is that possible? Guest: Well, I think there's a difference between, like, predicting everything that's going to happen in this particular organism from this moment going forward, and understanding the role of this molecule such that I can build something that interacts with it--and realizing that if I do, things might change. So, I don't know that the entire problem is graspable, but I don't think that rules out that if you better understand the nature of some of those interactions you won't be able to intervene. Russ: No, I agree. And obviously we've made--medicine's a beautiful example of how little we know and yet we've made extraordinary progress, maybe not as extraordinary as we'd like, in helping people deal with things that we call pathologies--things that are disease, etc. And I think we have a lot of potential there, for pharmaceuticals customized to your own particular metabolism and body, etc. I think that's coming; I think we'll make progress there. Guest: Well, I think AI will be really important in making that progress, actually. If you think about how much data is in your genome, it's too much for you to actually sort out by yourself. But you might, for example, be able to run in silico simulations in order to get a sense of whether this drug is likely to work with your particular genome. And probably that's just too hard a computation for one doctor to do. So, thank God, machines can help with it. Russ: Absolutely. Yeah. And they'll figure out the dose, whether it will work or not; they'll tailor the dose, which is remarkably blunt at current levels of medical understanding. Guest: They'll find a cocktail for you. Russ: Sure. Because interactions are too hard for us.
In theory, I guess simulation could take us a long way there. Guest: I would add, on the point about simulation, that intelligent simulation, let's call it, is a lot better than blind simulation. Like, if you really have to go down to the level of the individual molecule, you get back into that problem I was talking about before, of computational complexity. You really want the simulations to have some understanding of the underlying principles that are there in order to do it efficiently.
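The blind-versus-intelligent contrast can be made concrete with a toy diffusion problem (our example, not one from the conversation). The sketch below computes the same quantity two ways: how far, root-mean-square, a randomly diffusing particle wanders in n steps of a 1-D random walk. The blind version simulates every particle step by step; the principled version applies the known diffusion law, mean squared displacement equals n, in constant time.

```python
# Blind vs. "intelligent" simulation of 1-D diffusion (unit-step random walk).
import random

random.seed(0)

def blind_rms(n_steps: int, n_particles: int) -> float:
    """Brute force: simulate every particle, every step, then average."""
    total_sq = 0.0
    for _ in range(n_particles):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))
        total_sq += x * x
    return (total_sq / n_particles) ** 0.5

def principled_rms(n_steps: int) -> float:
    """Use the known law <x^2> = n for a unit random walk: O(1) work."""
    return n_steps ** 0.5

n = 400
print(principled_rms(n))       # 20.0, directly from the diffusion law
print(blind_rms(n, 2000))      # close to 20, after 800,000 simulated steps
```

The brute-force answer carries sampling noise and costs work proportional to particles times steps; knowing the governing principle replaces all of that with one line, which is the efficiency Gary is pointing at.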

44:25 Russ: Let's talk about how humans--let's move away from this machine that understands everything, including what I need next: not only knows what drug to give me, but knows that I shouldn't go skiing tomorrow because I'm not going to really like it so much. That's, to me, an unrealistic but maybe possible future for our interactions with machines. What about the possibility of humans just being augmented by technology? We think about wearables, and--I assume people are already doing it, of course--implantables. What's the potential for machines to be tied to my brain in ways they aren't now? Now I'm just listening to them or looking at them. But maybe more directly. Is that going to happen? Guest: Well, something else you can add to your show notes is a piece I wrote on brain implants for The Wall Street Journal with Christof Koch. We talked about these kinds of things and we went through some of the limitations. So, for example, right now a problem with brain implants is the risk of infection. We put something in, but you've got to clean the dressing every week or you might have an infection that will kill you. It's a pretty serious restriction. I would love to have Google onboard, directly interfaced with my brain, giving me all the information I need as I need it. But I don't really want to pay the risk of infection and death. And so there are some technical problems like that that need to be solved. And they probably will be. There are some energy and power problems that need to be solved. There are some interface problems. So, we know enough about how the motor cortex works to make it so that, roughly, you can move a robot arm with your thoughts. You can't move it that well; it's sort of inefficient. It's like one of those things you see in a little carnival where you've got a little gear driving this thing--it's not a very direct connection. But we know something about it.
We don't know anything about how to interface ideas to machines. So, the software and the pulling things out of your memory is not that hard: Google solves that, and Spotlight and Apple solve that, and so forth. We have technology for things like that. But the problem of interpreting your brain state so that we know what search query to run, that's pretty hard. It's so hard that we've made no progress on it so far. We will eventually. There's no reason to think that there's no coding there to be understood. It's a matter of cracking codes. The code might be different for different individuals; you might have to do a lot of calibration. But there are probably some general laws that could help us get started. And some day we'll figure out those laws. But we haven't yet.
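The calibration and code-cracking Gary describes amount, in the simplest possible caricature, to fitting a subject-specific decoder from examples. The sketch below is entirely invented: a linear "neural code" with noise stands in for a recording, which is nothing like real neural data. But it shows the shape of the problem: estimate the unknown code from calibration trials, then invert it to read out a new intent.

```python
# Toy "code cracking": each subject has an unknown gain mapping intended
# movement to a recorded signal. Calibration = fitting that gain from
# examples; decoding = inverting it for new signals.
import random

random.seed(1)

def record_signal(intent: float, subject_gain: float) -> float:
    """Stand-in for a neural recording: gain * intent + measurement noise."""
    return subject_gain * intent + random.gauss(0.0, 0.05)

def calibrate(intents, signals) -> float:
    """Least-squares estimate of the gain in the model signal ~ gain*intent."""
    return sum(i * s for i, s in zip(intents, signals)) / sum(i * i for i in intents)

true_gain = 2.7                 # unknown to the decoder; differs per subject
intents = [(-1) ** k * (k + 1) / 4 for k in range(20)]   # calibration moves
signals = [record_signal(i, true_gain) for i in intents]

gain_hat = calibrate(intents, signals)
decoded = record_signal(0.5, true_gain) / gain_hat   # decode a new intent
print(gain_hat, decoded)   # gain near 2.7; decoded intent near 0.5
```

The per-subject fitting step is the "lot of calibration"; any general laws of the code would shrink how much of it each new subject needs.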

47:09 Russ: Let's talk about the economic effects and talk about employment. Of course, it's a big issue right now. This is a little more plausible to me: it's not so much that AI is going to know how to interview really interesting, smart people so I won't be able to do EconTalk any more. There are plenty of technological advancements that we've seen in the last 25 years that have made people unemployable or certain skills unusable in the workforce. What do you think is coming there in the shorter run, before we get to this superintelligence? What are some of the things that are going to make it challenging for certain skills to be employable? Guest: The first major skillset that's going to diminish in value, pretty rapidly, is driving. In the next 2 decades, most taxi drivers will lose their jobs; delivery truck drivers, bus drivers--most of that will go away. And it'll certainly go away in 3 decades, and probably in 2. Some of the problems are still on the software side, but I think they're mostly solvable. There are some liability issues, and people getting used to the idea. But eventually machines will drive better than people. And they'll do it cheaper and they'll be able to do it 24 hours a day, and so the trucking companies will want to do it, taxi companies will want to do it-- Russ: You'll be safer, in theory. It's a glorious thing, in theory: we'll use less energy, it'll be more efficient. Guest: Eventually all that will come to pass. And there the 'eventually' really is like a 20-, 30-year horizon. It's not 100 years. There's no reason that it will take that long. And so that's a pretty radical shift to society. There are lots of people that make their living driving. And it's not clear what those people will do. The common story I hear is, well, we'll all get micropayments; Google will pay for our information, there's a [?] story; or we'll all make tons of money on YouTube and Etsy and so forth. And I don't buy that.
I think that there's a little bit of money--well, actually, there's a lot of money to be made by a small number of people. You look at YouTube videos; the top 100 people make a real career out of it. But most people don't. And that's going to be true in each of these domains. So you might get a few hundred thousand people, if you are really lucky, across a whole lot of different creative enterprises, making some money; and then you are going to have several hundred thousand people that really don't have an alternative career. And the problem's going to get worse, because the same thing is going to happen in the service industry. So, you already, some places, can order your pizza by touching a touchpad; you don't need a waiter there any more. There's someone who has a burger assembly plant that's completely automated; and I'm sure McDonald's is investing in that kind of thing. There are going to be fewer people working in fast food. There's going to be a whole lot of industries, one by one, that disappear. What I think the endgame is here--and I don't know how in America we are going to get there--is in fact a guaranteed minimum income from the state. The state is going to have to tax more heavily the people that own all of these technologies--I think that that's clear. And there's going to have to be a separation in people's lives between how they find meaning and how they work. So, you and I grew up in an era in which meaning, especially for men but also for many women, comes from work. I mean, not solely from that--it comes from parenting and so forth. But that's going to change. It's going to have to change, because for most people that's not going to be an option any more. People are going to have to make meaning in a different way. Russ: Yeah, it's interesting. I think a lot of the deepest questions around these technological changes are political and cultural. So you said those driverless cars are coming in 20 or 30 years--driverless vehicles.
Guest: Could be 10. Russ: No, I think it could be 10, too. I think we'll have the technology. The question is whether we'll have the political will to make it happen, or whether we'll fight it. So, right now, just to take a trivial example, there's Uber, which is, to me, the forerunner of the driverless car--because I think that's the way you'll be picked up; you'll be picked up by a drone, whether it's in the air or on the ground, that's going to drive you where you ask it to go. And it'll figure out through a network system how not to run into other things. But Uber's having a lot of trouble--everyone who uses it, almost everyone, thinks it's the greatest thing since sliced bread. And yet there are many cities where you are not allowed to use it, because it hurts the cab drivers who have paid a lot for their medallions; or people are alarmed by it. They find it somehow unattractive that it can charge certain prices at certain times, that it doesn't do x, y, or z. So, one question is the political will. The cultural will is another area where your point is a fantastic point, about meaning. Because to me, I think that's what matters. I think the pie is going to be really big, and dividing it up is going to be not as hard as you might think. But the challenge is: how much fun is it going to be to watch YouTube all day? I mean, people do seem to be drawn to it. I, myself, have trouble sometimes pulling myself away from entertaining videos. But that's a strange life, compared to, as you say, the way we grew up. Guest: I personally never watch YouTube. But I will admit I spend a lot of time on my iPad, merely doing other things. I think that to some extent the pain will be eased for some people because a lot of [?] available-- Russ: Say that again--a lot of what? Guest: The pain will be eased. So, the Oculus Rift and its competitors--a lot of people are going to enjoy immersing themselves in virtual worlds.
So, it might be that this is a sort of 'let them eat cake'--a kind of software-driven cake that nobody imagined before. And it might be that some people don't find that meaningful. Some people might do physical things, go back to the land. I think different people will respond differently. I do have to say that the Web and i-devices and all those kinds of things really do suck up a lot of people's time; and I think that's part of what will happen. That will become more and more true. Russ: Yeah, I see it as a possible--obviously there will be cultural change as to what's acceptable and what's considered honorable and what's considered praiseworthy. My parents, and to some extent me--we frown on people who sit on the Internet all day. But part of that is happening with us, too. So, we're not--but our children, they think it's normal. They don't think anything is remarkable about it at all, to inhabit a virtual world for long periods of time. And I presume it will become even more normal. So, some of these worries I think won't be worries. But as you point out, we have a lot of hardwired things in us that are not easily changed by culture, perhaps. I think about just how physically fit so many people are, how physically active, in a world where being physically active is really not as valuable as it used to be, and maybe isn't even so necessary. People tout its healthiness; it makes you live longer. But a lot of it I think is just a desire for real stuff. Nassim Taleb points out how weird it is that when you check into a hotel you see a person's bags being carried by an employee of the hotel, and then half an hour later that same person is in the gym lifting heavy things--instead of lifting his own bags. We're a complicated species. Guest: We are a complicated species. I think what's interesting about the iPad, for example, is how well it taps into our innate psychology. So I think we do have an evolved psychology; it's a malleable one, malleable through culture.
But people have figured out how to build toys that didn't exist before and that really draw us in--first it was television, then the iPod, now it's the iPhone. These toys really do tap into needs that have existed for hundreds of thousands of years.