0:33 Intro. [Recording date: November 19, 2019.] Russ Roberts: Today is November 19th, 2019 and my guest is computer scientist and author Melanie Mitchell. She is Professor of Computer Science at Portland State University and External Professor and Co-chair of the Science Board at the Santa Fe Institute. Her latest book and the subject of today's episode is Artificial Intelligence: A Guide for Thinking Humans. Melanie, welcome to EconTalk. Melanie Mitchell: Thanks for having me.

0:56 Russ Roberts: So, this is a really superb overview of the history of artificial intelligence [AI]--which doesn't take up too much of the book, but it is in there, which is very nice. More importantly, it's an overview of the current capabilities of AI. It teaches the reader how artificial intelligence is actually used in many of its applications today, and along the way we learn your assessment of where you think AI is going and how that might affect our lives. So, it's really--it's a wonderful book. I want to start off with a lecture that you referred to from Douglas Hofstadter when he was at Google--when was that lecture, roughly? Melanie Mitchell: I think it was around 2013 or so. Russ Roberts: Okay. So, he was worried about the progress that AI had made in chess and in music, two areas that he had underestimated, he confessed, when he had written his very influential book, Gödel, Escher, Bach. And he was terrified. He said that AI will make humans obsolete; we'll become relics, our children will be relics. And there were two interesting parts to the story: one, that Hofstadter felt that way; and second, that the engineers at Google he was talking to were puzzled. So, talk about those two reactions and what you make of them. Melanie Mitchell: Yeah. So, the engineers--the meeting was really featuring Doug Hofstadter. They were coming to see him and hear what he had to say about AI. A lot of the engineers at Google went into the field because they had read his book when they were in high school, like many of us. That was an extremely influential book in AI. He was really a hero to many people. But he got up and started talking about his fears about AI and his terror--not that we would have some malevolent super-intelligence running the world and enslaving us, but more that intelligence itself would not be as profound as he thought it was. He was worried that intelligence--that AI--might be achieved in computers via cheap tricks. And he, as you said, was very disturbed by how far AI had come, starting maybe with IBM's [International Business Machines] Deep Blue system, which beat Garry Kasparov at chess, and then progressing through Watson playing Jeopardy, and self-driving cars and speech recognition--all of that, everything. And it terrified him, because AI was doing so well at these tasks. But the Google engineers got into AI because they were inspired by Hofstadter. They loved his books. And here he was saying AI is terrifying--and it was exactly what they were trying to achieve. So, they didn't really understand him at all. Russ Roberts: And it's a very deep question that I'm sure we'll dance around and maybe delve into, which is: How should we feel about that? Would it be a good thing or a bad thing if computers could write music that was better than Mozart, better than Beethoven; if they could write poetry that made us cry and create movies that excited us and inspired us, right? Like you said, that's what these engineers are trying to do; that's their job. And yet somehow--maybe it's because of age, or a different temperament, or sometimes it's a religious outlook--there's something disturbing to some people about that. Melanie Mitchell: Yeah, absolutely. I mean, what we value most about ourselves as humans--sort of what makes us special--is our intelligence, our creativity, our ability to create music and literature and so on. So, I have mixed feelings about this. For one thing, I'm like those Google engineers.
I got into AI because I was excited about the ideas in Gödel, Escher, Bach. I read it in college, or just after college, and thought, 'I want to understand what intelligence is. That's the most fascinating question of all.' So, I actually went to work with Doug Hofstadter: I was a Ph.D. student in his group. Back then he wasn't afraid of AI, because AI wasn't doing very well. It wasn't threatening. We would rejoice in both the creativity and the dumbness of the programs we built, because their dumbness really showed how challenging the problem was. So, I guess one of the reasons I wanted to write this book was to make sense of what was going on, because here was Doug Hofstadter, my former mentor, saying how terrified he was--that was very surprising to me. Here were the Google engineers saying, 'I think we're going to have human-level intelligence within the next 30 years or so.' And me thinking, 'What? How could that possibly be true?' So, I started looking into AI more broadly. I've been doing research in this field for decades, but I'm in my own little silo of narrow research. And I started looking more broadly and trying to figure out exactly what was going on in the field. So, that was really the impetus for writing this book.

6:24 Russ Roberts: Yeah. There's a sense at the beginning of the book, which I really related to, where you confess that maybe you were too sanguine, too optimistic about the prospects of humanity in the future, and you decided you'd look into it. And a lot of what I do on the program is interview people--I've talked to Nick Bostrom, Rodney Brooks, Gary Marcus, Pedro Domingos, people who have differing views about where this future might go--and I'm trying to figure it out myself. You're going to help me today, as your book helped me, and you'll help our listeners. At one extreme, let's put on the table the idea of the singularity, which we've mentioned before, associated with Ray Kurzweil. Depending on your perspective it's an extreme view: for some, it's a beautiful thing; for others, it's a dark thing. But, what is it? Melanie Mitchell: The idea of the singularity is that, once AI reaches human-level intelligence--because it's a computer, and computers are extremely fast and can process huge amounts of data much faster than humans can, and all that--the AI will get smarter than humans very quickly. It'll be able to digest all of human knowledge and will be able to create even better AIs, or maybe improve itself--I'm not exactly sure which. There's this cascading effect where it gets smarter and smarter and smarter; and Kurzweil predicts that by 2045, we'll have intelligences that are a billion times smarter than humans. Russ Roberts: Only a billion. He's so cautious. Melanie Mitchell: Right. I think most people who are actually serious AI researchers roll their eyes when they hear about that kind of thing, and they say, 'You know, first of all, Kurzweil's reasoning all has to do with his idea of exponential growth--that we have Moore's Law, which says that computers are getting exponentially smaller and exponentially more powerful. But for one thing, software does not show an exponential trend, any way you want to measure it; and software is where AI is at right now.' And also it's not clear that we still have exponential trends in these hardware areas. And when he says 'a billion times smarter than humans,' he's implying that there's some intelligence metric that can be multiplied by a billion. And, intelligence to me is not just a single thing. Russ Roberts: It's not a scalar, a one-dimensional number. You do have to give him credit: he at least didn't use decimal points. He didn't say, 'It'll be 1,000,372,643.6 times bigger than human intelligence,' which would be really offensive. Melanie Mitchell: Yeah. Well, I'm sure he's not done forecasting yet. So, I think most people who know anything about AI look at the current state of AI and say that this idea of the singularity is nonsense, to put it bluntly. But there are people who believe it; or there are people who believe in slightly less inflammatory versions--that we're really getting closer and closer to human-level AI, whatever we mean by that, and that we'll be there within the next 20 to 30 years. The people that I talk to in AI don't believe that. But I know that the field is kind of split. And as you said, you mentioned Bostrom, you mentioned Gary Marcus, you mentioned Rod Brooks, Pedro Domingos--and it's strange: when you talk to all these different people, they have vastly different views, not only of where the field is going but of where the field is right now.

10:48 Russ Roberts: And we're going to talk about that. Why don't we start with a background observation, though--your book doesn't literally delve into this, but it made me think about it, and I think it's a crucial point. I think there's a fundamental misunderstanding of what knowledge is. I'm going to put two types of knowledge on the table and see what you think of this distinction. So, if I ask Siri--which I have to be careful about because-- Melanie Mitchell: Don't mention that name-- Russ Roberts: My phone is in airplane mode right now, but I've noticed sometimes even in airplane mode, she's responsive. I don't want to interrupt the flow here. But, if I ask her 'How tall is the Eiffel Tower?' or 'What's the capital of Brazil?' or 'How many home runs did Stan Musial hit in 1962?' she's fantastic at that. And it's instant. And she's probably almost never wrong. I don't know what she does about facts that are slightly ambiguous; but most facts are just facts. If I asked her--and I haven't asked her this--'Does the minimum wage reduce employment?' she would not answer that question. She would pull up a bunch of websites and say, 'Here are some things I found.' Because it's not a yes-or-no question. You can have an understanding of it, but you can't have knowledge of it in the way you can know the height of the Eiffel Tower. One crude way to make the distinction is data versus wisdom. The idea that a smart machine could cure poverty--that the problem with our attempts to cure poverty is that we're not smart enough--I think is a fundamental misunderstanding of the nature of poverty. It is not something that is amenable to intelligence. It requires something much more complicated. It's what Robin Hogarth, I think, calls a 'wicked problem'--I learned that from David Epstein in his book Range and in our interview. It's not a kind problem. There's too much complexity around it. There are too many trade-offs. There's too much uncertainty. So, when people talk about the singularity--that machines will cure all disease and cure poverty, and we'll live forever and be incredibly happy because they'll know what happiness is, too--it's as if that were somehow a knowledge problem. I think that's just a fundamental misunderstanding. What do you think of that? Melanie Mitchell: Well, I agree with you. I mean, you're talking about problems that even humans can't solve. So, that's even one step ahead of what I'm thinking about, because I'm thinking about problems that humans can solve--because they have knowledge, let's say--but machines can't solve, because data is not knowledge. So, one example is: if I'm driving and I see something ahead of me in the road, how do I decide whether I should stop for it or not? Let's say it's a floating paper bag, or a herd of ducks--a flock of ducks, I guess--or a cardboard box, or a child's Lego set, or, you know, whatever it is. I have knowledge about those things. I know how they interact in the world. I know what would happen if my car crashed into them. I have the ability to predict the likely future, just of these very mundane kinds of things. I'm not even talking about poverty, happiness, etc. But one of the problems with AI is that it doesn't have that broad knowledge of the world. And I learned, in writing this book about self-driving cars, that one of the problems they have is: What should they stop for?
And the biggest source of accidents with self-driving cars--the experimental ones that are driving around--is people rear-ending them. And the reason for that is that they stop unexpectedly. They slam on the brakes because they think there's something there that they should stop for that no human would stop for. So they're unpredictable. So, of course, the human is at fault--you're not supposed to follow that close. But people do. People expect cars to drive in a certain way. And these self-driving cars don't have enough common sense, if you will. They don't have enough knowledge, in the sense we're talking about, to know what to do in these kinds of situations that are different from, say, what they've been explicitly trained on.

15:45 Russ Roberts: You have an example--you have a couple of examples in the book we'll go into. One of them, I love: it's a photograph. It's a soldier returning--I don't know what it is exactly, right? That's what's beautiful about it: it's just a photograph. The soldier is down on one knee. She's got her hair tied back, so at first you might not notice that it's a woman, but it is a woman. She's in camo gear--when I showed it to my wife it was a little dark in the room and she thought it was a dress, but it's camouflage gear--and she has a big military backpack on. She's clearly a soldier. She's stooping down to pet a dog. She has a lot of emotion on her face--to my eye--but it's a little hard to read that emotion. There aren't tears. It's not obvious. She's in profile. It's a little hard to see it, but I immediately see emotion, whether it's there or not. The dog--you can see its tail is blurry, so the dog is wagging its tail. And next to the two of them is a balloon that says 'Welcome home.' So, as you point out, we immediately see that: 'Oh, a soldier coming back from war or back from duty, seeing her dog.' We make an immediate emotional connection. How does the computer see it at today's level of AI? Melanie Mitchell: Well, if the computer has been trained to recognize objects, it probably will recognize a person. It might say it's a man--it appears AI is not that great about gender. It will recognize a dog. It might recognize a balloon; I'm not sure. But it won't be able to put the pieces together. It certainly is not anywhere near where humans are at recognizing, like, emotions, or the dog wagging its tail. It won't be able to put together the kind of story we put together when we're looking at visual data or hearing about something from a written story, because it doesn't have that world knowledge--or that wisdom, if you will--about how the world works. Russ Roberts: So, what I learned from your book--and we'll try to get into it, because it's a little hard to do in a podcast without visuals--the way that a computer could learn, and I'm going to put "learn" in quotes and we'll come back to that, too--the way a computer could learn about that is it would look at a lot of photographs of faces in similar settings. Look for things around the shape of the mouth of the person returning, maybe the eyebrows, maybe tears or other things. And then associate that with photographs that humans have labeled as sad, longing--we could imagine a bunch of adjectives. So that eventually it could "learn" the right way to caption that photo--which, at the current level, is more like 'a man and a dog,' or maybe 'a soldier and a dog,' or maybe 'a woman soldier and a dog'--and get from that to something more human as the description, which would be: soldier returns home and re-encounters something that she loves and misses. But the way it would learn that isn't by reading Jane Eyre--or a better example, I guess, would be the Odyssey by Homer, where Odysseus encounters his dog after, whatever it is, 20 years. It would be through a very mechanical process of association. Melanie Mitchell: Yes, that's right. Using today's technology, it would have to have maybe millions of photos--faces with different emotions. And they would all have to be labeled by humans as to what the emotions are. And then the computer would look at the pixels of the image.
And, right now the most common approach is using these so-called deep neural networks, which learn from these labeled images: the input is an image, the output is some kind of classification out of some fixed set of categories, like sad, happy, longing--you could decide what your categories are. And as you say, it's very mechanical. But I think that brings up another question, which is: What are we humans doing when we learn what we see? Are we not mechanical in some sense? What else is there besides our neurons firing, our memories? I think in some sense we are mechanical. And I've actually had a lot of arguments with people, including my own mom, about this--she doesn't buy it. But it's a matter of complexity, you know? We are so much more complex and evolved, if you will--evolved to have certain kinds of things, like emotions or faces, be very salient to us, because that's so important in our lives and in the sociality of humans: there are certain things that we are in some sense evolved to learn. Russ Roberts: I think you quote, is it Mitch Kapor? I don't know how to pronounce his last name. Melanie Mitchell: Yeah. Russ Roberts: Mitch Kapor says, basically, artificial intelligence will never be, quote, "intelligent," until it goes through the life experiences that a human brain experiences and categorizes those. And so it is possible that what we're really doing when we look at that photograph is exactly what you said: it's a mechanical process--some neurons fire; I remember the last time I saw someone who looks something like this. We don't understand that process very well yet. We might get better at it. And the interesting question to me is whether that's going to help make AI better or just tell us something about our brains. But, why don't you respond to that? Melanie Mitchell: Yeah, it's interesting, because going back to Ray Kurzweil and his singularity: if you actually read his books carefully, you see that he actually agrees with that statement. He says, 'Yes, you do have to be able to experience all these things.' But his solution is that within 20 years we're going to have virtual reality that's indistinguishable from real reality, and that's going to be used to train AI. So the AIs will actually go through a process of development in the way that we do, but perhaps using virtual reality to speed it up.
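[The mechanical process described here--labeled images in, a category out, the network nudged after each mistake--can be sketched in a few lines of PyTorch. This is a minimal illustration, not code from the book; the layer sizes, the three emotion categories, and the random stand-in "images" are invented for the example.]

```python
import torch
import torch.nn as nn

# A small stack of layers of simulated neurons. "Deep" just means
# more than a layer or two of these.
model = nn.Sequential(
    nn.Linear(64 * 64, 128),  # input: a 64x64 grayscale image, flattened
    nn.ReLU(),
    nn.Linear(128, 32),
    nn.ReLU(),
    nn.Linear(32, 3),         # output: scores for sad / happy / longing
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a fake batch: random "images" paired with
# human-supplied labels (0 = sad, 1 = happy, 2 = longing).
images = torch.rand(8, 64 * 64)
labels = torch.randint(0, 3, (8,))

optimizer.zero_grad()
scores = model(images)          # forward pass through the layers
loss = loss_fn(scores, labels)  # how wrong were we on this batch?
loss.backward()                 # trace each weight's share of the error
optimizer.step()                # re-weight the connections slightly
```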

22:41 Russ Roberts: Let's go back and talk about deep neural networks, because I hear that phrase a lot. It doesn't mean what it sounds like, as it turns out. It's a very clever marketing phrase, though, because 'neural' makes it sound like my brain and 'deep' makes it sound profound. So, what is it literally? Melanie Mitchell: So, a neural network is a computer program that's inspired by the brain. In particular, most neural networks these days are inspired by the way the visual system works, where the visual system gets input--you know, light falls on your retina and then is processed in the brain through a series of layers of neurons. So the visual system is layered in a hierarchical way. A 'deep neural network' is a simulated, simplified version of that, with these layers; and 'deep' refers to how many layers there are. A shallow network has a small number of layers; a deep neural network has multiple layers. And that's all that 'deep' means--how many layers of simulated neurons there are in the network. The idea of deep neural networks has been around since the 1960s or 1970s, and people have been experimenting with these things for a long time. But people never had enough data to train them, and they never had enough compute power to make that training possible. In the last decade, we have both huge amounts of data, because of the World Wide Web and so on, and very fast parallel computers. So it's come together to allow these networks to actually start to shine in certain tasks--in vision, in speech, in language processing--because of this convergence of big data and fast computing power. Russ Roberts: So, using that, let's talk about examples like recognizing handwriting, or recognizing objects and identifying them correctly in a broad sense, like dog versus cat. One of the ways that happened was people had, as you say, access to lots of photographs all of a sudden, through Flickr or Google Photos or other databases of photos. But then we had to get them labeled so that the neural network could practice, learn when it made a mistake, and go back and re-weight--basically, fundamentally, what the so-called learning that goes on is, is re-weighting the-- Melanie Mitchell: The connections between the simulated neurons. Russ Roberts: So that, given a certain shape or darkness of a pixel, it decides that was more likely to make it a dog rather than a cat in this particular region, right? And that required a lot of those millions of photographs to be labeled. And a lot of that was done through Amazon's Mechanical Turk, which is something I'd heard of but didn't know much about. So, explain what that is and how that played a role, because it's really an amazing, bizarro thing. Melanie Mitchell: Yeah. So, Amazon created this web-based platform where people who had some job they needed done that couldn't be easily automated were able to hire people online to do these jobs. Like, 'Here's a photograph. Tell me: is it a dog or a cat? I'll give you a penny for that.' Russ Roberts: And we're really good at that. Melanie Mitchell: We're really good at that. Russ Roberts: Yeah, no problem.
Melanie Mitchell: So, they called it Mechanical Turk, and this is a little bit obscure, but back a couple of hundred years ago there was an AI hoax where somebody had built a chess-playing machine with a puppet that would move the pieces, and the puppet was dressed as a Turk--Ottoman robes, I don't know what exactly, but some kind of Turk. And it was actually a person hiding inside. So it was a hoax. And some genius at Amazon came up with this analogy, which is: 'Okay, we have people doing these tasks that are too hard for AI, and anybody can hire them. You can pay small amounts of money for simple tasks.' And they call it 'artificial artificial intelligence,' because it's humans doing it, right? And this platform has grown. It's huge now. And in fact researchers use it all the time for getting people to label data, getting people to be in psychology experiments or social science experiments--all kinds of different tasks. So, you know, this idea that AI is going to put people out of a job is actually a little more complicated, because AI now--or the lack thereof--has created this huge set of very low-paying jobs for people who are on this platform. Russ Roberts: What kind of money would a person make on this? You said it's a penny a label. Melanie Mitchell: Or 10 cents. You know, I don't know exactly. Russ Roberts: Right. So, you don't know. So, anyway, they make some kind of money. And some people find this offensive because they don't make very much; but for some people it's a nice way to do something relatively mindless that brings in a little more money. For some, this is an indictment of the world we live in, and for others it's like, 'Wow, this is cool.' We're going to leave that to the side. My question is: How do we know they're labeling them correctly? We can't use the computer to check them, because that's the whole idea. Melanie Mitchell: Right. So, this is a problem. Russ Roberts: Honor system? Melanie Mitchell: No. The honor system doesn't work. I think most people don't maliciously mislabel them, but sometimes they get lazy, or they make mistakes because they're trying to do too many too fast. So, there are methods people use, like having an image labeled by multiple people and taking a majority vote on the category. There are different methods for trying to verify these labels. But now people are trying to do even more complicated tasks using Mechanical Turk, for instance with natural language. So, I might ask you, 'Here are two sentences. Tell me if the first one entails the second one or contradicts the second one.' This is a task that they want computers to do, so they need data; and it turns out that people will get those wrong quite a bit. The more complicated the task, the more tricky the whole Mechanical Turk thing is.
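[The majority-vote check Mitchell mentions--several workers label the same item and the most common answer wins--is simple enough to sketch. The votes below are invented for illustration.]

```python
from collections import Counter

def majority_label(votes):
    """Return the most common label and whether it won a strict majority."""
    (label, count), = Counter(votes).most_common(1)
    return label, count > len(votes) / 2

# Three workers labeled the same photo:
print(majority_label(["dog", "dog", "cat"]))  # -> ('dog', True)

# Harder, more ambiguous tasks produce splits with no clear winner:
print(majority_label(["entails", "contradicts", "neither"]))  # strict majority: False
```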

29:42 Russ Roberts: Yeah. I think that--of course, that's one of the challenges. But I think it'll be useful at this point if you could summarize--and I apologize for making you do this on the spot--give us a summary of where we stand. What are the great--let me try to make the list from what I read in your book, and tell me if I leave anything off. So, a computer has beaten the best chess player in the world. A computer has beaten the best Go player in the world, which was-- Melanie Mitchell: One of the best. Russ Roberts: One of the best--which was a game that people thought might not be amenable to a computer because it's so open-ended. It's really gotten better at voice recognition, so I can talk to my assistant on my phone. It's pretty good at handwriting recognition. It's really good at certain crude image identifications that we're talking about. The self-driving car thing--it's 90% of the way there, but as you point out, 90% of the way takes 10% of the time and the last 10% takes 90% of the time. So, we're not really close, despite what I was reading two or three years ago that it was imminent--we're not really close to autonomous driving at what has been called Level Five, where you could sit in the back and read a book and enjoy your music and have a glass of wine. Am I missing anything important that AI has accomplished in the last 20 years, say? Melanie Mitchell: Oh. Well, I think there are a lot of rather specific tasks. I mean, one thing is machine translation-- Russ Roberts: Good one. I forgot that-- Melanie Mitchell: between languages. We also have a lot of applications in medicine, with medical image analysis--getting medical data and trying to make sense of it or make diagnoses from it. There have been a lot of applications in scientific data analysis. Yeah; I mean, it's kind of all over the place, but each application is somewhat narrow, and you have to sort of start from scratch with building a system that will do that application, rather than having some more general AI that would be able to do many different things. Russ Roberts: Yeah, none of it--correct me if I'm wrong--none of it is transferable. So, the computer that can play Go can't play checkers. Melanie Mitchell: That's correct. Russ Roberts: It hasn't, like, figured out board games. Melanie Mitchell: Right. And it can't even play a variation on Go. I mean, there is some very small transfer: there's a lot of work on transferring between AI tasks, but I'd say for the most part the state-of-the-art systems are not very transferable. Russ Roberts: So, let's talk about Watson for a minute--Watson being the IBM computer that played Jeopardy and beat Ken Jennings, the longtime champion, and somebody else whose name I don't know. But that gives the impression that it knows a lot of things. It doesn't just know one thing. But, of course, it knows a lot of things very narrowly. Melanie Mitchell: It has a lot of databases. Rather than saying it knows a lot of things, I would say it has the ability to look up things very quickly on Wikipedia and other big database sites. And it was able to use some natural language processing to make sense of Jeopardy questions. It did really well. Russ Roberts: It can make some jokes and puns. Melanie Mitchell: Yeah, understand puns. But it didn't seem like its knowledge was transferable in the way IBM touted it to be.
Now, they said, 'We're going to send Watson to medical school'--people took that seriously, but it's really just a kind of quip--and, 'We're going to have it'-- Russ Roberts: But they meant it seriously in the sense that they did put a lot of medical knowledge into the database. That's just not what medical school is, unfortunately. Oh, that it were! Melanie Mitchell: Right. Right. So, now Watson has lots of medical data, and it's supposed to be able to answer questions about the domain of medicine; but it turns out that's very different than answering Jeopardy questions, and it didn't do as well. It's a little bit hard to figure out exactly what Watson can do now, but my understanding is that the Watson that played Jeopardy no longer exists. That program has been completely changed to use deep learning and other modern AI tools, just the way Google and Microsoft and all the other companies do. So, IBM now has what it calls 'Watson,' which is just a platform of computing tools. Russ Roberts: It's sad to think that the entity that--I'm being facetious here--the entity that defeated Ken Jennings is no more. And if you've seen the movie Ex Machina--no spoilers here--it plays on this human relationship to AI. I don't know if Ken Jennings is sad that Watson--we gave it a name, a human-ish name, right, Watson? Melanie Mitchell: Right. Russ Roberts: It's named after the founder of IBM, but you might think of Sherlock Holmes's partner--which is ironic, given that he was sort of the naive, not-so-smart one. He was the straight man. But I don't think Ken Jennings--do you think he's sad? I don't think he's sad. Melanie Mitchell: I don't know. Russ Roberts: He wants a rematch, and he's gone. Melanie Mitchell: Yeah. Yeah. I mean, I don't know if that Watson could be resurrected or not. Maybe it could. But it's not the same Watson that is being marketed for healthcare, for tax preparation, for legal advice, and so on. That's a completely different set of tools. Russ Roberts: Ideally, it'd be smarter now, because time has passed and it's had a chance to get smarter. Melanie Mitchell: Right. But as you say, data is not knowledge. Data is not intelligence.

36:01 Russ Roberts: So, in all these examples--and I think that was a fair summary of where we're at--my take, and I suspect it's yours, and I'll give you a chance to respond: my take is that almost none of that is what we would call, as human beings, 'intelligence.' Melanie Mitchell: 'Intelligence' is one of those words that means different things in different contexts. It means different things to different people. Here we are sitting in Washington, DC, and I think a lot of people in the country think, 'Oh, Congress--there's no intelligence there.' But when I go around giving talks about AI and I say, 'Well, computers aren't very intelligent yet,' people tell me, 'Well, human beings aren't very intelligent, either.' They're using the term very differently. Intelligence isn't just one thing. It's not a yes-or-no thing, either. And I think one of the problems is we don't have a good sense of what intelligence is. We don't understand our own intelligence very well. Our state of understanding the brain is still quite limited. Our understanding of human psychology is still rather limited. And I think 'intelligence' is one of those terms that's a placeholder for things we don't understand yet. It's a phenomenon that we have a general idea of but don't know specifically, and it's waiting for more scientific advances to replace it with something more useful. Russ Roberts: I think it was Rodney Brooks here on the program who quoted, I think it's Marvin Minsky, saying that these are things called 'suitcase words.' And 'intelligence' would be one of those things: we put some things in that suitcase when it's convenient; if it's not, we take it out. But I guess what I had in mind is this idea of transference or connection--what I think of as human. Or, better yet, something beyond what was programmed into it: that would be an even narrower, more straightforward thing. As far as I can tell from your overview of the field in your book, computers can't teach themselves anything except what they've been programmed to learn--quote, "to learn"--programmed to transfer, to translate an input into an output. They can't then add something to it. That would be one measure of intelligence. Or at least I think it's a measure of intelligence. Melanie Mitchell: Right. So, yeah; I mean, it's tricky to talk about this, because, of course, if I train a computer program to recognize dogs in images, it can recognize dogs in images that I've never shown it. Right? Russ Roberts: Fair enough. Melanie Mitchell: So, that's sort of a generalization. But if it hasn't ever seen such a thing, it probably can't recognize a dog in a cartoon, or a painting of a dog. So, in AI, people talk about this notion of distribution, which is kind of a statistics idea: that your data has a certain distribution. The dogs in your training data have a certain range of features that your system learns; and if you show it a new thing that is within that range of features, then it can recognize it. But if it's outside of that distribution, it won't be able to transfer its knowledge to it. And that's something that we humans are able to do. One of the things that kind of surprised me: there's a huge focus in AI on this thing called 'transfer learning,' which is exactly what we're talking about. That is: learn one thing--learn to play chess--and be able to transfer your knowledge to variations of chess, or to checkers.
Russ Roberts: Read a CT [computed tomography] scan, then you can read an x-ray, then you could know what-- Melanie Mitchell: Yeah. And this is called transfer learning, and it's huge. But transfer learning is exactly what we humans just call learning. So, what these systems are doing is not learning in the human sense, because we assume that if you've learned something, you can use that knowledge in a very new situation. That's still a challenge for AI.
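[In today's practice, "transfer learning" usually means something much narrower than the human learning Mitchell contrasts it with: reuse a network pretrained on one distribution of images and retrain only its final layer for a new task. Here is a sketch using torchvision's pretrained ResNet; the two-class X-ray framing is an invented example, and even this works only when the new images are not too far outside the pretraining distribution.]

```python
import torch.nn as nn
from torchvision import models

# Start from a network whose weights were already learned on ImageNet photos.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers: their weights will not change further.
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for the new task (say, "abnormal" vs.
# "normal" chest X-rays); only this layer gets trained on the new data.
model.fc = nn.Linear(model.fc.in_features, 2)
```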

40:14 Russ Roberts: We're in DC [Washington, D.C.] at the DC office of the Hoover Institution. But when I'm out at Stanford, I inevitably get sucked into the tech world. And they're very utopian there. They think tech can solve all problems--a lot of people do think that. And I find that very seductive, because I like to believe that to be true. There's something comforting about it. But what I've learned in the last five years is how often the claims of these tech evangelists are overstated. I mentioned autonomous self-driving cars--way overstated. An extreme example would be Theranos, which turned out to be a fraud, but the idea was: 'With one drop of blood, we're going to diagnose 70 diseases.' Or that machine learning, or artificial intelligence, is going to solve all problems. There's an enormous amount of hype. Now, some of that hype comes from the media, and you give a lot of examples in the book of headlines that were misleading or misdescribed studies that were much more modest. It's an inevitable human problem. And one of the things I learned from your book is, again, to be so sensitive to that, because it's so seductive. Melanie Mitchell: Sure. I think hype has been a problem in AI since the very beginning of the field. Intelligent computers--or not even just computers, but intelligent machines in general--that's such a long-held goal for humanity. And I think the hype has gotten almost worse, the better AI works. There are a couple of reasons. One is just that whenever a technology becomes commercialized, there's this need to sell it; and so people sell it. I mean, that's just the nature of marketing. So, we've gotten hype from the companies: IBM advertising Watson is one very salient example of that. But also, I think, when we see an AI system like Siri, for example, we tend to anthropomorphize it. It has a name, it has a voice. It almost has a personality. We tend to give it more credit than it actually deserves for thinking, or being intelligent, or understanding; and that's a very human reaction. And that's also led to some of the hype, I think: people actually believing what they say about the intelligence. Russ Roberts: Yeah. I gave my parents an Alexa to help them listen to music, which was a great decision. It beat the other solutions my brother and sister and I tried to come up with for them. But it's like they have a boarder in their house. They're very close to her. They'll say things like, 'I couldn't believe it. Alexa knew about--' Melanie Mitchell: Yeah. I mean, we saw this back in the 1960s when Joseph Weizenbaum created Eliza, which was essentially a psychotherapist chatbot. And it was the most simple program. It had a few templates. It had some keywords. It was supposed to be a particular kind of psychotherapist. So, if you said something about your mother, it would say, 'Tell me more about your mother.' And it had little templates like that. And people wanted to talk to it. People wanted to tell it their deepest secrets. They really believed that here, finally, was something--somebody--that understood them and was willing to really listen to them, because it would take what you said and play it back and say, 'Tell me more about that,' and, 'What do you think about that?' and, 'How do you feel about that?'
And, Weizenbaum was horrified, and in fact he became an activist, an anti-AI activist, because of the way that people interacted with this program.
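[Mitchell's description of Eliza--keywords, templates, reflecting the speaker's own words back--is nearly a complete specification. Here is a toy version; the particular patterns are invented for illustration, and Weizenbaum's actual 1966 script was considerably richer.]

```python
import re

# Each rule: a keyword pattern, and a template that reflects the
# speaker's words back as a question.
RULES = [
    (r"\bmother\b", "Tell me more about your mother."),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI want (.+)", "What would it mean to you if you got {0}?"),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default: just keep the person talking

print(eliza_reply("I feel nobody really listens to me"))
# -> Why do you feel nobody really listens to me?
```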

44:29 Russ Roberts: One of the challenges of the depth of these neural networks and other techniques is that there's a certain black-box aspect to the knowledge--the answers--that comes out of these systems. Sometimes we might not care; we just want the answer. We just want to know whether the tumor is benign or malignant, or whether there's a tumor at all; and how it got to that, we don't need to understand. But a lot of people are very troubled by this. One obvious example that you discuss in the book is bias that's built into AI answers, because the data the AI has been learning on is biased--because it comes from a set of human sources, whether it's those people categorizing the photos or just human language. There are a lot of issues about sexism in particular that I've seen, and there's hope we can de-bias some of this stuff. What's your feeling: should we be concerned about that, and are there going to be ways around it to help people understand? We talked about this with Cathy O'Neil in our episode with her--issues of sentencing people to jail. This is not like, 'Oh, I want to know the height of the Eiffel Tower.' It's very serious stuff. Melanie Mitchell: The idea of explainability is very tricky in AI. There's almost an inverse relationship between the success of a system and its explainability, at least with these deep neural networks. The deeper--that is, the more layers, the more neurons, the more connections in these networks--the better they tend to do, because they're able to model the data more successfully. But then it's hard to figure out what they did. You have millions of weights--or billions, even, now--and no high-level insight into why the machine is making the decisions it does. So, that's a big problem, and a lot of people are working on ways to make the machines more explainable: building almost virtual microscopes that let you go in--or, if you want to make an analogy with neuroscience, little probes that can go in--and figure out what this artificial brain is doing. But it's certainly an unsolved problem. And I think there's also the question: What is an explanation? What counts as an explanation? It's a philosophical problem, but it's also very real. So, for instance, the European Union has this GDPR [General Data Protection Regulation] law on data, and one part of the law is that algorithms that make decisions that affect people's lives have to be able to explain their decision-making. But what does that mean, exactly? Does it mean I have to tell you all the values of the weights? Is that an explanation? Well, no. Of course not. No human understands that. Explanation is subjective. It depends on what the goal is and who I'm explaining it to, and so on. So that's, I think, a very unsolved issue. Russ Roberts: That's interesting. I never thought about it, but of course human beings can't explain why they do what they do. We lie, we fool ourselves, we self-deceive. Right? The idea of saying, 'I know why I gave you this gift, paid you this compliment, shunned you'--there are a thousand reasons. And I don't know if we'll ever understand that about ourselves. But we could understand something about, like you say, the data, the weights of the mechanical system[?]. Melanie Mitchell: You know, a lot of people have said, 'Well, humans can't explain their thinking either, so why should we make AI explain its thinking?' But I think that's actually a false argument.
Because humans can explain their thinking. We definitely aren't perfect at it, but we certainly do it--you know, when a judge makes a ruling, they don't just say yes or no; they write a long explanation for their ruling. They talk about how they took into account all the evidence, and all of that. And when something really matters--like if I say, 'I'm going to sentence you to 20 years in prison because this algorithm said I should'--I think there must be a way, or there has to be a requirement that there's a way, to explain what evidence is being taken into account. Russ Roberts: Well, I think the challenge--you mentioned Washington, DC, and the--if I were uncharitable, I'd call it a sausage factory--you know, government creating legislation. We don't literally get to see all of it. We get to see quite a bit of it now. We get to see votes: that's a start, and the votes would be like the weights. But we understand those weights. We understand that this person voted for that and is accountable at the ballot box. The AI is not accountable in the same way. I think the need for transparency, and what transparency would mean, is something we're going to have to talk about and figure out later. Melanie Mitchell: Yeah. It's absolutely on everyone's mind. But again, like all the other things, it's not a solved problem yet. Russ Roberts: Some of it, though, as you're suggesting, may not be solvable--using the current techniques--in a way that would be amenable to evaluating whether it's, quote, "fair," or whether it's biased, or those kinds of issues. Melanie Mitchell: Yeah, I think that's right. You know--how to say this? The whole issue of bias is now worrying a lot of people. And our world is biased. We know that, and therefore the data that we produce and that we train the machines on is biased. It goes very deep. So, for example, facial recognition performs worse on darker skin than lighter skin, and that's partially because of the biased data it's given; but also, I found out, it's the cameras themselves: the electronics in cameras are tuned better for lighter skin than for darker skin. So it's really going to be hard to de-bias algorithms in a biased society, where the bias is so deep and almost invisible.
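[One concrete form of the "probe" Mitchell mentions above is a gradient saliency map: ask which input pixels most affect the network's output score. Below is a minimal sketch with a stand-in model and random data; real explainability tools are far more elaborate, and whether such a map counts as an "explanation" is exactly the open question under discussion.]

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(28 * 28, 10))   # stand-in classifier
image = torch.rand(1, 28 * 28, requires_grad=True)

score = model(image)[0].max()   # the class score the model favors most
score.backward()                # gradient of that score w.r.t. each pixel

# Pixels with large absolute gradients are the ones whose change would
# most move the decision: a crude map of "what the network looked at."
saliency = image.grad.abs().reshape(28, 28)
print(saliency.argmax())        # flat index of the most influential pixel
```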

51:14 Russ Roberts: I want to take one more example from the book that I just loved, and we'll talk in a minute about how thinking about AI helps you think about human beings--this was one of the examples that did that for me. You say, "It's time for a story," and the story is called "The Restaurant." It's a very short story. I'm going to read it: A man went into a restaurant and ordered a hamburger, cooked rare. When it arrived, it was burned to a crisp. The waitress stopped by the man's table. "Is the burger okay?" she asked. "Oh, it's just great," the man said, pushing back his chair and storming out of the restaurant without paying. The waitress yelled after him, "Hey, what about the bill?" She shrugged her shoulders, muttering under her breath, "Why is he so bent out of shape?" It's a great story, and you riff on it quite a bit in a very effective way to talk about language--how subtle language is: 'bent out of shape' doesn't mean he's physically contorted; the bill that he didn't pay is not a reference to the beak of a bird or legislation passed by a parliament or congress; etc. And when he said, 'Oh, it's just great,' he was being sarcastic. We know all those things instantly when we read the story. The way you summarize it, which I love, is: "Did the man eat the hamburger?" Melanie Mitchell: Yeah. I kind of stole that idea from Roger Schank, the old-time AI natural-language person, who had little stories like that and asked those kinds of questions. And also John Searle, a philosopher who used that example of eating the hamburger to talk about whether machines could really understand anything. So, right: that's kind of the idea--back to knowledge, our knowledge about the world. We know the man probably didn't eat the hamburger. Even though it's not said in the story explicitly, we can read between the lines. But how do you get machines to do that? How do you get them to have the kinds of knowledge about the world that they could use to make sense of such a story? It's very hard. Russ Roberts: And of course there are people who misinterpret stories, don't get jokes. Language is hard for us, too. Most of us are really good at a lot of it; but not all of us are, and all of us struggle at various times and misunderstand stuff. Melanie Mitchell: Well, for instance, here's a self-referential issue. I was recently talking to the person who is coordinating the Chinese translation of my book. Okay? And-- Russ Roberts: Just use Google Translate. Melanie Mitchell: Exactly. So, I talk about translating this story in the book; and now I'm not sure the Chinese translators are going to understand the story itself, even though they know English pretty well. I mean, it's very idiomatic. How do you actually translate something like that without having that sort of cultural knowledge? Translation is really complicated. Russ Roberts: So, it's easy. Just have the translators--or the machine--watch a couple million American movies, and they'll know. They'll just know. Melanie Mitchell: There you go. That's pretty funny, because that's actually a strategy for AI common sense that is being undertaken: there's a common-sense competition based on watching movie clips and answering common-sense questions about the clips. Russ Roberts: And are they going to get better, you think? Melanie Mitchell: I don't know.
Russ Roberts: You have some very amusing examples in the book where you take that story, translate it into a different language, then turn it back into English using the same technology. And of course things get mangled--things get burnt to a crisp in the translation. Melanie Mitchell: Yeah.
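[The round-trip experiment is easy to reproduce with today's neural machine translation. A sketch using Hugging Face pipelines with publicly available Helsinki-NLP English-French models; treat the particular model names as assumptions--any language pair shows the effect.]

```python
from transformers import pipeline

to_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

line = 'The waitress yelled after him, "Hey, what about the bill?"'
french = to_fr(line)[0]["translation_text"]
round_trip = to_en(french)[0]["translation_text"]

# Idioms and ambiguous words ("bill," "bent out of shape") are where
# the round trip tends to mangle the story.
print(round_trip)
```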

55:04 Russ Roberts: One of the things that your book forced me to do--and I'm curious how you felt writing it--is think a lot about what is distinctive about humans. In the course of our conversation I've given a couple of examples where I said, 'Well, humans can't do that perfectly either.' But we do a lot of things shockingly well. You make a big distinction, which I like, between easy things being hard and hard things being easy. Chess seems hard, but brute force and lots of computing power made some real progress there; but easy things, like common sense, are really hard. Melanie Mitchell: Yeah. Russ Roberts: Talk about what you've learned about yourself, about humans, in the course of writing the book. Melanie Mitchell: So, I've learned how much of our intelligence is invisible to us: how all the time we're able to make generalizations and transfer what we've learned to new situations, and make abstractions, and metaphors, and so on, in a kind of invisible way. We don't even know we're doing it. And this, I think, is one of the reasons that people misjudged how hard AI would be. We have all these people like Marvin Minsky back in the 1960s predicting that we'd have human-level machines within 15 years, and it's still going on; and I think that we still don't recognize how much of our intelligence is below the surface. There have been some approaches to common-sense reasoning in machines by building in all the common sense. I talk about one example in the book called Cyc, C-Y-C [pronounced like 'psych'], by Doug Lenat, where the idea was just to have this huge database of common-sense knowledge. Like, 'You can't be in two places at one time.' Things like that. But the problem is-- Russ Roberts: 'A penny saved is a penny earned.' Melanie Mitchell: There you go. The problem is that we can't write it all down, because so much of it is unconscious. So, now there's this big Grand Challenge from DARPA, the Defense Department agency that funds a lot of AI research, and the Grand Challenge is to create a machine that has the common sense of an 18-month-old baby by going through all the developmental stages that babies go through. And this is the perfect example of 'easy things are hard,' because we have machines that can do all these fancy things--translate between languages, play Go, etc.--but one of the biggest grand challenges, with huge amounts of money being put into it, is: create something like an 18-month-old baby. Russ Roberts: You gave a great example in the book of Charades--a game that a six-year-old plays effortlessly would be incredibly hard for a computer. Melanie Mitchell: Yeah. I'll give credit to Gary Marcus for that one. That's his example. Russ Roberts: Here's a quote you have in the book from Geoffrey Jefferson, a neurosurgeon, and I want to hear your reaction to it. You probably reacted to it in the book, but do it here. He wrote: Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain--that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance), pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants. What do you think of that quote?
Melanie Mitchell: I love that quote; and in fact it comes from Alan Turing's paper--the one where he proposes what's now called the Turing Test--where Turing quotes Jefferson. And I think he brings up an interesting question, which is: How would we know? Say we're talking to a machine, and it's talking just like a human--we can't tell the difference. How would we know if it had all these qualities--that it was charmed by sex and all of that? It's really difficult. How do I even know that you've gone through all these experiences, or that you have an inner life? Russ Roberts: You don't. Melanie Mitchell: I don't. It's the same old question. So, the Turing Test tries to get around that. But it turns out to be a little too easy: because of our human propensity to anthropomorphize, it becomes too easy to pass the Turing Test. Russ Roberts: Yeah. My example is the robot that is a vacuum cleaner but regrets it never was a self-driving car. We would call that a human experience. Of course, we're not going to be [?]--are emotions relevant at all? Some would say they have nothing to do with it. Melanie Mitchell: Right. I think emotions are fundamental to thought: that's my opinion. But I don't think we know enough to say. There's this idea, kind of a classic trope in philosophy of mind, of the brain in the vat. And this brain in the vat has input, it has output. It's exactly like a brain. But it doesn't have any experiences in the world. What's the difference between that and us? Are we just brains in vats, in some simulation that's playing out, that seems like reality? All of these old, old philosophical questions are still here. We still don't have good answers to them. Russ Roberts: Yeah. I once tried to write an essay on this, and I'll link to it--I'm not going to try to get all the pieces right, but I think I'm channeling Harry Frankfurt and Rabbi James Jacobson-Maisels in this thought. The idea is that we have wants; but we have wants about our wants as well. So, I have a desire for ice cream, but I also have a desire not to want it too much. We could program a computer to be rewarded by ice cream. Could it ever get to the point where it felt guilty about that, or uneasy about it, etc.? And isn't consciousness, at some level, that level of desiring--not just desiring, but having desires about our desires? Melanie Mitchell: Yeah. I think that's a big part of consciousness: having awareness of our own awareness, having desires about our desires, having emotions about our emotions, etc.--all that meta- kind of stuff. Our intelligence--in humans, it's been evolved for specific purposes, I think. And it's not necessarily true that any intelligence would have the same kind of purpose that we have, the same kind of architecture, desires, or goals, or whatever. But anything that's going to be in our human world--that's going to be driving around in our human world with other humans, or being our virtual assistant, or what have you--that thing is going to have to have some human-like qualities to be able to deal with us. So I think that whatever we build, if we want it to be useful to us, it's going to have to have the kind of human understanding that we have.

1:02:43 Russ Roberts: Early on in your book you say something almost in passing, and it actually jarred me. It made me think a lot. You said, 'Google is an applied AI company.' And I was thinking: What an extraordinary thing that is--that this company that started off as a, quote, "search engine," a particularly lovely and useful thing, has transformed itself, in this either ultimately terrifying way or extraordinarily exciting way, into a million other things. Right? And that is exactly what it is: it's an applied AI company. And it's unusual how much research is going on right now inside profit-driven companies, which I think is glorious--mostly. I'm a little uneasy about it because I'm worried about the feedback loops; but listeners know about that and we'll leave it alone. But the point is that a lot of fundamental research goes on there--Ray Kurzweil works at Google. Which is nuts. It's not obvious that there's anything profitable about his vision. But he's a really smart guy, so they put him on the payroll. And I have a lot of friends who work there who are just really smart, and they do sort of think-tank things within this unimaginably profitable company, because it can afford to have folks like that whose work might not turn out to have practical applications. They don't really care; they like to be around them. So, here's this strange company--and they're not alone; they're not close to alone. What's going on in China? We had Amy Webb talking about that. It's going on at Apple. It's going on at Facebook. Etc. How scared are you about the implications of this for humanity? Is this somewhat troubling, deeply troubling, or are you not troubled at all? Melanie Mitchell: I would say somewhere between somewhat troubled and deeply troubled. There are just a few companies that have so much power--because they have so much data, for one thing--and they have so much control over what we see, what we think, what we do. I find that really troubling. But on the other hand, as you say, in the past it was unusual for big companies to participate in basic research, and now we're seeing a lot of that at these companies. And they're doing great work. Russ Roberts: Yeah. Good for you, good for your students. Melanie Mitchell: Yeah. My students are all working at big companies, working on really interesting problems--doing things they want to do and solving important problems, I think. But, you know--I don't know. I don't really trust these big companies, whose motive is profit, to do the right thing. And a lot of them are doing what I think is not the right thing. I mean, there's a lot of argument about what the right thing is. But I think there's a lot of potential danger from AI--not that we're going to get super-intelligent AI, a billion times smarter than we are, that's going to enslave us, but more that we're deploying AI that is not up to the task, that is not general or smart enough to be autonomous. So, Pedro Domingos, who I think you said you interviewed, had a great quote that I put in the book, which is: 'It's not that AI is too smart and going to take over the world. It's that it's too dumb and it's already taken over the world.' And I totally agree with that. Russ Roberts: You give the example--it's really chilling--of two photographs.
You look at them; and literally, to the human eye--not just like, 'Oh, there's a hidden thing that you have to look at for a while to see,'--you can't see anything, at first glance for sure, maybe for many glances. It looks like the same photograph. But a handful of pixels have been altered, and the algorithm misidentifies the photograph radically. It calls the school bus an ostrich. Melanie Mitchell: Yeah. Russ Roberts: The opportunity for human beings to use AI maliciously, malevolently--forget the kind of bias issues we talked about, which are worrisome and troubling--but the opportunity for people to deliberately steer things in ways that would be destructive. I worry a lot about the next Presidential election and the one after that, where the ability to create video and photographs that will be indistinguishable from actual news[?] footage is going to be hard to resist for folks--both the creators and the viewers. Melanie Mitchell: Right. So, you're talking about a couple of things there. One is the sort of deep fake or fake media, like videos. And even language now: we have these language generators that are quite convincing. And how to detect that something is real or fake--that's going to be harder and harder. So, that's one problem. The other problem is the ability that humans have, especially if you know something about AI, to fool AI systems--like facial recognition systems or object recognition systems or even language interpretation systems--by subtly changing their inputs in targeted ways. Those are called adversarial attacks, because an adversary can attack an AI system. And people have shown that it's actually not that hard to do. So, the systems are not reliable in that sense, if humans are out to get them. And the other thing you're talking about is that the systems can fool us. So it kind of goes both ways. So, yeah: I think that the potential for malicious uses of AI systems is the thing that Hofstadter should be terrified about, not that AI systems are going to take away our humanity. Russ Roberts: A lot of people--I think it's absurd and I find it almost offensive--say things like, 'Well, we just need to teach AI researchers more ethics. Make them take a course in ethics so that they'll know the right thing to do.' That strikes me as the wrong way to solve this problem. Melanie Mitchell: Yeah. Wrong in so many ways. For one thing, whose ethics? For another thing: ethics is a very complex conceptual thing, and computers have no concepts. Understanding ethics and being ethical is maybe equivalent to being intelligent, in a way. I don't think you can learn ethics on the side, at the same time as you're learning how to drive on a highway. Russ Roberts: I was thinking more about your students: that, when you teach them, you should make sure that they know to do the right thing. The sort of Google 'Do no evil.' And, there's a billboard on the 101 in the Bay Area which I love. It has the Google slogan, 'Do no evil,' and they've crossed out that word 'Do.' It says--no, I'm not quoting the slogan right. What's the Google slogan? Melanie Mitchell: 'Don't be evil.' Russ Roberts: 'Don't be evil.' Don't be evil, as if that's enough. 'Just tell them and explain it to them, that it's bad to be evil.' And of course that's not enough. Melanie Mitchell: Right. Russ Roberts: But on the billboard, actually, they've crossed out the word 'Don't' and put 'Can't'--'Can't be evil.'
That strikes me--and it's obviously an ad for some piece of software or project that is going to have some kind of ethics built into it, rather than just relying on the good-natured training of the programmers. Melanie Mitchell: Yeah. Now, in academic computer science, to be accredited as a computer science program you have to offer an ethics class. It's required. Russ Roberts: Oh, phew! Now we don't have to be worried anymore. Melanie Mitchell: Right. So, I don't think that's going to solve many problems. It's more of a systemic issue about how our society is organized.
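[Editor's note: For readers who want to see what Mitchell's "subtly changing their inputs in targeted ways" can look like in practice, here is a minimal sketch of one standard adversarial attack, the fast gradient sign method of Goodfellow et al. (2014). This is an illustration, not anything demonstrated in the episode: the pretrained model, the random stand-in image, and the class index 779 (commonly listed as ImageNet's "school bus") are all assumptions.]

```python
# Minimal sketch of a fast-gradient-sign adversarial attack.
# Assumes torch and a recent torchvision (>= 0.13, for the weights API).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)  # random stand-in for a real photograph
label = torch.tensor([779])         # assumed ImageNet index for "school bus"

image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)  # how wrong the model is now
loss.backward()                              # gradient of loss w.r.t. pixels

# Push every pixel a tiny, human-invisible step in the direction that most
# increases the loss, then clamp back to the valid pixel range.
epsilon = 0.007
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("before:", model(image).argmax(dim=1).item())
    print("after: ", model(adversarial).argmax(dim=1).item())
```

[On this random stand-in the two predictions may or may not differ; run against a real, correctly classified and properly normalized photograph, a perturbation of this kind typically flips the predicted label while remaining invisible to a human viewer--the school-bus-to-ostrich effect described above.]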

1:10:57 Russ Roberts: So, three of the smartest people in the world, at least on paper--Stephen Hawking, Elon Musk, and Nick Bostrom, a guest on this program--are worried about some aspect of AI run amok. Your book and all of our conversation so far have suggested that either they're simply wrong, or it's so far away in the future that it's not the thing we need to be worried about. How would you react to their level of anxiety? Melanie Mitchell: Probably both of those things. I think even though they're among the smartest people in the world, probably a lot smarter than I am, they don't understand intelligence. And I don't claim that I understand intelligence either, so maybe I'm wrong; but the idea that they have is that you can have superintelligent AI that is just missing one thing--all it's missing is the sort of alignment, as they call it, with our values. And so what we need to do is make sure that it's aligned with our values--as if alignment with values were just this malleable switch that you can turn on and off, or something the system could simply learn--rather than thinking of an intelligent system as a very complicated thing that develops in a society, in a culture, that isn't just created de novo, and that would develop values through being embedded in that culture. I mean, that's kind of my view, and I think that they have too simplified an idea of intelligence. I wrote a New York Times op-ed about this recently, and in it I quoted Stuart Russell, who wrote a book called Human Compatible, about aligning AI's values with ours. And he said, 'Well, what if we had a superintelligent AI and we charged it with the problem of solving climate change, and it decided that the best way to reduce carbon would be to kill off all the humans?' So, the idea there is that we have a superintelligent AI--it's superintelligent--but at the same time it doesn't figure out that human life is something we might want to preserve. That just seems crazy to me, and it seems like a misconstrual of the word 'intelligent.'