0:33 Intro. [Recording date: September 18, 2014.] Russ: I want to start by saying this is an extremely interesting and provocative paper, packed with ideas and analysis but still accessible, for the most part, to a nonspecialist. We'll put a link up to it, and I encourage listeners to check it out after listening to the conversation. I also want to say that this is one of many episodes of EconTalk where we look at the threat and opportunity posed by the increasing role of computers and smart machines in our economy. And I want to let people know that more are coming. So, enjoy. David, let's start with Polanyi's Paradox. What is that and why is it relevant? Guest: So, Michael Polanyi was a Hungarian philosopher and scientist. He wrote a book, published in 1966, called The Tacit Dimension, in which he articulated the importance of tacit knowledge in human behavior. The quotation I give in the paper is "We know more than we can tell." And then he goes on to give an example: the skill of a bicyclist or an automobile driver cannot simply be articulated in words. Or, more concretely: I could give you a day-long lecture on how to ride a bicycle, and at the end of the day you wouldn't know how to ride a bicycle without having ridden one. And the point Polanyi was making is that there are many things we tacitly understand how to do, or are capable of doing, that we do not explicitly understand how to do and cannot articulate in terms of a procedure. Riding a bicycle would be one example, but there are many, many others. We don't know the procedure for coming up with a new hypothesis, or for making a persuasive argument, or for that matter for recognizing people from a distance--or even recognizing someone you haven't seen in 15 years, who has grown up and changed greatly. How do you know that's the same person?
Or how do you choose the path to navigate up a steep hill or a rocky surface, and catch your falls in real time? These are all things we do with onboard equipment. Another great example would be interpreting the nuances of spoken language, beyond just the individual words. All the things we know how to do like this are sort of built in, part of our hardware; but we don't know how to accomplish them explicitly. We don't know how to write procedures or describe the rules for doing those tasks. And so the point of this paper is that this poses an extreme challenge for computerization, for automation. Automation primarily works by taking an explicit procedure that we already follow and codifying those steps so that a machine can do it in our place. So, when we have a computer program that does calculations or sorts through files or searches for words or helps us lay out a circuit board if you're a CAD (Computer-Aided Design) user, it's following a set of codified, explicit procedures that we already understood, and now we have laid out the steps. But it's hard to do that if we don't actually know the procedure for the thing we're accomplishing. Russ: One of the things--I think I've probably told this story before, but one of the most beautiful and inspiring and moving videos I've ever seen is one that looks at Andrew Wiles's attempt, and finally successful attempt, to prove Fermat's Last Theorem. He had a proof of the theorem that was knocked down. For a while he was celebrated as one of the greatest mathematicians of all time--he must have been exhilarated. And then it turned out: the proof's not right. And he spent a rather horrible stretch of time, as you might expect, trying to resurrect that proof. And he couldn't. And to hear him talk about it, he says: then one day, I put my head down, and I looked up, and I saw how to solve it again. And he can't explain what happened there.
He has no idea about that intuition, that aha, that eureka moment. To me it's one of the most beautiful, mysterious parts of the human enterprise. Of course, computer scientists and philosophers wonder whether it's just a matter of time before we understand that process. You want to weigh in on that? Guest: Yes, absolutely. First, that's a great example you just gave. And let me emphasize that this paradox--the paradox being that we do things that we don't know how to do--applies as much to the mundane as to the sublime. It applies to proving a centuries-old mathematical challenge--that's one example. But also to something as simple as walking up a flight of stairs, or looking at a garbled piece of text and figuring out what it says--which is of course what you do with CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) all the time. Russ: Or empathy. When someone walks into a room who has had a bad day and you recognize they need a little affection or kindness or a cold drink. Guest: Right. That's right. Both of these things draw on a set of capabilities that we possess, but they are not the kind of accessible, analytical, procedural set of steps we would use to have a machine do the same thing. In terms of our ability to automate those things--clearly, we've made progress on automating many things that initially seemed extremely challenging. People initially thought that teaching a computer to play chess would be an extremely hard problem, because we know that chess masters are geniuses, etc. As it turned out, that problem was solved relatively quickly. And now any inexpensive piece of chess-playing software can pretty much beat the world's best chess players. And in fact this observation--that many things that initially appeared hard turn out to be simple, and things that appear simple turn out to be hard--has a name.
It's called Moravec's Paradox. Early artificial intelligence researchers thought it would be easy to have robotic servants that would, you know, empty your dishwasher-- Russ: See Sleeper by Woody Allen. Guest: Exactly. That's right. But that it would be hard to get a computer to play chess. Actually, the opposite has turned out to be true. And there are two reasons why we've gotten good at some of these things and not others. One is that for some problems there's a well-known procedure you can use: you can algorithmize the task, write it out as a set of equations, and find the optimal solution. Now, that's not true for chess, actually. Chess doesn't have a closed-form solution; there is no dominant strategy in chess that anyone knows about. Instead, what computers do is use a lot of processing power to iterate through multiple levels of moves and, after calculating many thousands or hundreds of thousands of possible sequences of board positions, choose what appears to be the best strategy, given that forward-looking search. And they may also do some kind of database lookup of prior games. What computers do when playing chess is probably pretty different from what Grand Masters do, in the sense that computers use the comparative advantage of computation--doing lots and lots of quick calculations and storing information accurately--whereas chess Grand Masters are probably much more likely to be using a mixture of intuition and recall of previous games-- Russ: Pattern recognition. Guest: Exactly. Russ: Elegance. Guest: Yeah. So the question is: Will we get better at all these other things the same way we got better at chess? And I think the answer is: Probably, in the long run.
Very few people doubt that in the long run--over 10 or 20 years, or more likely 30 or 40 or 50--substantial progress will be made on many of these problems. It's more a question of (a) how long it will take and how hard the challenges are, and (b) how we'll do it. Will we actually have machines that do what we do, in terms of having intuition or figuring out problems in this--as far as we know--nonprocedural way? Or will we recast the problems, or recast the technologies, so they do it very differently from how we do it, and yet still successfully?
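The brute-force lookahead the guest describes--iterating through levels of moves and scoring the resulting positions--can be sketched in a few lines. This is a toy illustration in Python: the game tree, scores, and function names are all hypothetical, not from the paper, and real chess engines add pruning, opening books, and static evaluation on top of this idea.

```python
# Minimal sketch of depth-limited lookahead search (minimax): the mover
# assumes the opponent will answer with the move worst for the mover,
# then picks the branch whose guaranteed outcome is best.

def minimax(node, maximizing, tree, scores):
    """Return the best achievable score from `node`, searching to the leaves."""
    children = tree.get(node)
    if not children:                 # leaf position: use its static score
        return scores[node]
    values = [minimax(c, not maximizing, tree, scores) for c in children]
    return max(values) if maximizing else min(values)

def best_move(root, tree, scores):
    """Choose the child of `root` whose subtree looks best for the mover."""
    return max(tree[root], key=lambda c: minimax(c, False, tree, scores))

# A tiny hand-made game tree: from position "A" the mover can go to "B" or "C";
# the opponent then replies, reaching a scored leaf.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
scores = {"D": 3, "E": 5, "F": 2, "G": 9}

# From B the opponent forces D (score 3); from C the opponent forces F (score 2).
# So the mover prefers B.
print(best_move("A", tree, scores))  # -> B
```

Real engines do exactly this shape of computation, just over billions of positions per second, which is the "comparative advantage of computation" mentioned above.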

9:47 Russ: Talk about the driverless car, which we've talked about a number of times on the program. There's a really beautiful de-romanticization of it in your paper. Guest: Okay, good. So I lay out two central strategies for dealing with what I call in the paper nonroutine problems--problems for which we don't know the explicit procedures. One of those strategies is what I call environmental control. Environmental control means simplifying the environment so that something far less flexible than a human being can manage in that environment. So, a very literally concrete example: think about modern automobiles. They are reliable, they are fast, they are safe--they are extremely competent. And yet they require smooth, even surfaces with shallow grades and turns that are not too tight--something that would never occur in nature. And so, what do we do? We change the environment to make it car-compatible. It's estimated that an area the size of Ohio in the United States is covered by impermeable surfaces, most of which are roads. So that's why I said a 'concrete' example. Environmental adaptation is applied in lots of areas. Assembly lines basically make things very predictable and consistent; that's why it's easier to use robots on an assembly line, where there's a very narrow scope of activity, than in an uncontrolled home or work environment. Russ: [?] Guest: Exactly. So, I make the point in the paper that this environmental control is also applied very frequently to computerization problems--you basically make the environment predictable in a way that allows machines to adapt to it. And the example I give, of the Google car, is: Many people think of the Google car as being flexible, like a human driver, right? Russ: Smart. Guest: Yes, smart. And it is smart, in a way.
But it's not--if you were to take the Google car and just drop it down in the middle of a city it had not been prepared for--and I'll explain in a second what 'prepared' means--it would have to stop. It cannot in real time recognize a road, figure out where the traffic lights and stop signs are, determine the routing, determine the speed limits, and so on. It's not that adaptable in real time. Instead, the Google engineers basically go through their mapping software and hand-curate the maps for the roads the Google car will navigate, identifying all the stop signs, all the traffic lights, all the routing and speed limits. The Google car still has to be adaptive in the sense that it has to recognize objects in its way--other vehicles, pedestrians. It needs to tell whether a light is red or green. Etc. But it doesn't have to figure out the entire environment in real time; and in fact, if it encounters a substantial deviation from the environment it's anticipating, it needs to stop. So, if there's a signalman in the road changing the traffic routing, at that point the Google car has to cede control to the human driver. It's not that adaptive. So, the point I make in the paper--and perhaps it's a little unfair--is that although the Google car appears as flexible as a human being, in some sense it's more like a train driving on invisible tracks. Now, that's a bit of an overstatement, because of course a train doesn't recognize pedestrians and cars and come to a stop, and a train can't swerve out of the way. But nevertheless, for the Google car the tracks are laid out in advance, and it only needs to react to deviations from the tracks it expects to be driving on--obstacles in its way. Russ: Yeah. It's a fascinating example. I guess the question is: As our knowledge of the brain advances, are there going to be limits?
Now--Robin Hanson, on this program a long, long time ago, basically said--and a lot of people agree with him--'you know, it's just chemistry; it's just a matter of time before we figure it out.' And that might be 10 years, it might be 20, as you said, might be 50. The real question is: Is it ever going to be possible to understand what that Grand Master is doing, what Andrew Wiles is doing, through a chemical analysis of what's going on in the brain? I don't know--it's an unanswerable question-- Guest: Yeah. I don't think the answer will come through a chemical analysis--in the same sense that if you said, 'I'm going to understand what a computer does through an analysis of silicon'-- Russ: Yeah, exactly. Guest: Right? That wouldn't actually be informative. Because it's actually about information. It's about symbolic processing. Right? The physical structure is somewhat divorced from the meta, information-processing structure. That doesn't mean the brain can't be understood. I just don't think that a chemistry set is going to do it.

14:53 Russ: No, I think that's exactly right. And I think what's fascinating about all these examples--and we're going to get into this a little bit more, and then we're going to turn to data rather than speculation, which is probably a good idea--but what's fascinating to me goes back to a recent EconTalk episode with Paul Pfleiderer, where he talked about the tendency of economists to employ the Milton Friedman 'as-if' hypothesis. So, we don't know what managers really do--or we don't pay attention, actually, to what they really do. We posit something, and then we say: it's as if they do it this way. And I think we're doing this with this smart-machine stuff. We are saying: Well, we don't know what a Grand Master does playing chess, but I'm going to act as if it's what the computer does when it sifts through thousands and thousands of different possibilities. But that's not what the Grand Master is doing. It may be somewhat predictive of what a Grand Master might do most of the time, but it's not the same thing. And you can't then go the other way and say: since we can develop a computer that acts as if it's a Grand Master, we can develop a computer that is a Grand Master. They're not the same thing. Guest: Yeah. There's a big debate. There have been many decades of setbacks, and some successes, in artificial intelligence (AI). And there's a lot of debate among artificial intelligence researchers about how you should go about trying to master human activities with machinery. One school of thought is that we need to learn from biology itself--that if you look at the way the human brain recognizes objects, visual recognition starts about two cells back in the eye. It's not as if the eye is just a data-receptor cell and everything then goes to a central processor. There's a whole mechanism for pattern recognition that is fundamental to the hardware itself.
And so some people think we have to learn from the example of how biology does it. Others say: No, no; all we need is a conceptual model of the world, and then the machinery, based on those conceptual models, can process the information it receives and figure out what's what. It doesn't have to work the same way--an airplane doesn't have to flap its wings to fly. And then there's a third school of thought--with many variants in between--that says: We don't need to do any of that; we just need machines to learn to behave like us by learning from the world. So, the idea of machine learning-- Russ: Talk about that. Guest: So, machine learning comes out of Polanyi's Paradox: we know more than we can tell; we do things, but we don't know how we do them, so we can't explain them. This gives rise to the idea of machine learning: instead of trying to write down the procedure--which we don't understand--for doing something, why don't we have a machine look at examples, correct and incorrect answers, and then infer the procedure, or the set of statistical connections, that makes the right answer right? An iconic, or at least highly discussed, example was the Google cat-recognition software. It came from Google X Labs, and it used 16,000 processors to parse through a database of millions of pictures of things that were cats and things that were not cats--without any specific model, just being told 'this has a picture of a cat, this doesn't,' with the cat circled. It would then look at new pictures where the cat was not circled and ask: Which of these photographs or drawings contains a cat? And if you look at the pictures it recognized--one example is included in my paper--it's pretty good: it recognized a bunch of cats. But one thing it viewed as very likely to be a cat turned out to be a pair of coffee cups next to one another.
Russ: And coffee cups don't need a litter box. Okay. Guest: That's right. So, what is it doing? It has no model of the world in mind. It doesn't say: a cat is a feline, a biological organism with four legs, etc. It just says: here are pictures of things you've told me are cats; I'm going to look at their statistical properties and then try to predict, or infer, what else in other pictures might be a cat that fits that description in some sense--a description you haven't given me, only shown me examples of. So, that's what someone would call a brute-force approach to learning. Russ: Big data. Guest: It's a big-data approach. And it has strengths and limitations. The strength, of course, is that it requires only processing power. It doesn't require a huge amount of analytical infrastructure to build a model of what a cat is. Its disadvantage is that it may be fairly brittle and not very general, in the sense that without reasoning about what the object is that you are trying to recognize, you can get answers that are statistically likely and yet clearly incorrect. Like the coffee cups. No human--no 4-year-old kid, even a kid without 16,000 processors--would look at that picture and say, oh, those coffee cups are cats. They would see the difference. Perhaps the reason they would see the difference is that their reasoning about what makes a cat is much more sophisticated than these simple statistical features. Someone who understood a cat as an animal would say: well, it has to be an organic creature. And what would being organic mean? Well, it wouldn't have a smooth ceramic surface. But of course that's many, many more steps down the line of reasoning than simple recognition. It requires some knowledge of what the object is in the world. Which is a much harder problem.
So, I give the example in the paper--in fact I'm quoting from a paper in the computer science community called "What Is a Chair?"--which talks about the difficulty of teaching a machine to recognize a chair. Chairs come in a huge variety of sizes--and not just sizes, features. For example, does it have to have a back to be a chair? Well, no, of course not; we know there are backless chairs. But then how do you distinguish a chair from a table? Well, you'd have to look at the dimensions and think about: does it look more table-like or chair-like? What does that mean? And then the example given in the paper I quote in my article is: if you looked at a traffic cone and a toilet seat, you would say, well, they both look somewhat chair-like. They have a base; they have a top. Russ: Height's about right. Guest: Yeah, exactly. However, if you reasoned about human anatomy, you might think to yourself: well, a traffic cone wouldn't be that comfortable to sit on, so probably that's not a chair. That requires reasoning about what the object is for, not simply what physical features it has in a very basic sense. So that's a harder problem. So there's a divide in the computer science community--not a civil war, but a divide: Can we just do this by brute force--i.e., pattern recognition? Or does that, as I heard one of my MIT (Massachusetts Institute of Technology) colleagues say, 'get it right on average and miss all the important cases'? So, I think there is a great deal of uncertainty about which routes will prove most productive. I think all of them will, to some degree. So a lot of the debate is about how fast things will progress. How quickly will this move? Will these challenges be quickly surmounted, as some people believe? Or are we still decades away from having, you know, domestic robots like Woody Allen's Sleeper?
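The purely statistical, no-world-model approach described above can be caricatured in a few lines of Python. This is a toy sketch, not the Google system: the features, numbers, and names are all invented for illustration, and the classifier is a simple nearest-centroid rule rather than the deep network actually used.

```python
# Toy sketch of pattern matching with no model of what a cat *is*:
# label a new image by which group's average feature vector it sits closer to.
import math

def centroid(examples):
    """Mean feature vector of a list of same-length feature vectors."""
    return [sum(xs) / len(examples) for xs in zip(*examples)]

def classify(x, cat_centroid, other_centroid):
    """Nearest-centroid rule: purely statistical, no reasoning about objects."""
    if math.dist(x, cat_centroid) < math.dist(x, other_centroid):
        return "cat"
    return "not cat"

# Hypothetical image features: [roundness, symmetry, softness-of-texture]
cats     = [[0.9, 0.8, 0.6], [0.8, 0.9, 0.7], [0.85, 0.85, 0.65]]
non_cats = [[0.2, 0.3, 0.9], [0.3, 0.2, 0.8]]

cat_c, other_c = centroid(cats), centroid(non_cats)

# Two round, symmetric coffee cups happen to share the cats' surface
# statistics, so the rule calls them a cat -- the error no 4-year-old makes,
# because the rule never asks whether the object could be an organic creature.
coffee_cups = [0.88, 0.9, 0.6]
print(classify(coffee_cups, cat_c, other_c))  # -> cat
```

The brittleness is the point: the classifier is only as good as the statistical regularities in its training examples, with no recourse to reasoning about what the object is for.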

23:17 Russ: So, let's move into the less speculative part of this issue, which is: Regardless of where we are going to end up--an interesting question, and the one we've been discussing in this opening--it's clear that computers have done a lot of substituting for human tasks, because computers do them well and do them cheaply. And this has people worried that even if all the jobs aren't going to be eliminated, many or most of them will be. We have computers diagnosing cancer, potentially replacing high-paid doctors. We have computers doing all kinds of things they couldn't do 5 years ago, 10 years ago, certainly 20 years ago. So let's look at the impact of this so far. You point out in the paper that this is a very old worry--that technology is going to replace human employment. I find it fascinating that the world right now kind of divides, among pundits and even some economists, between one view--'people have always worried about this before, but they've always been wrong; technology has been good for human beings; it's created other kinds of jobs at the same time'--and a second view that says: this time is different, because this is going to get rid of all the jobs, or almost all the jobs. So tell us what we know about what's actually happened, because there are some very interesting patterns. And you're the only person I know of who has actually looked at what's going on at a more micro, granular level. Guest: Well, many people have worked on this at this point. But let me make three points. One, there is a long history of concern about the impact of automation on employment. The most famous example people give is the Luddites--19th-century weavers who rose up against the power loom because they were afraid it would reduce employment and earnings. And they very well may have been right, for themselves.
Because it did take scarce artisanal skills and substitute machines--and children--doing those jobs. In the long run, of course, it didn't reduce employment, but it probably had significant distributional consequences. More recently--and I don't think most people are aware of this--the concern arose again in the 1960s, under the Johnson Administration. A commission was set up to investigate the 'productivity problem'--the problem being that productivity was growing too fast, and the concern being that there wouldn't be enough jobs. And in fact the U.S. Department of the Interior was in on this, talking about the potential 'leisure crisis'--the crisis being that there would be too much leisure. The Department of the Interior, I think, because they run the national parks, and so they would have to deal with all these leisure seekers. So, those concerns--obviously we had a bunch of good decades, and we didn't feel we had any leisure crisis. But the concern has re-arisen. If you look at the Chicago Booth poll of economists, the vast majority say that in the long run there is no evidence that technology has reduced employment. But if you ask about stagnating wages over the last decade and the role of information technology, a plurality of economists think there might be a direct connection there. So, my work--with many coauthors, including Frank Levy, Richard Murnane, Larry Katz, David Dorn, Gordon Hanson, and Daron Acemoglu--I could go on; a whole list of notables who have been extremely valuable and insightful as coauthors--has pointed out the role that computerization has played in changing the occupational structure.
In particular, in displacing routine, codifiable tasks--going back to the beginning of our discussion--which mainly means jobs in clerical and administrative support, to some degree sales, and also many production and operative positions. All of this is skilled work that involves a lot of codifiable, procedural activities. And those tasks, even though they require education, are increasingly automatable. One consequence is that you see what people call a polarization of employment. On the one hand, we don't have computers substituting for people doing professional, technical, and managerial tasks--things that require intuition, creativity, expertise, and a mixture of fluid intelligence with technical knowledge. If I'm a scientist, I have lots of technical knowledge, but I also need this kind of fluid intelligence for developing hypotheses. Same if I'm in sales and marketing, if I'm an attorney, if I'm a medical doctor. So on the one hand, computerization creates an increasing role for these highly educated, skilled jobs, which are not only not directly substituted but, to an important degree, complemented by information technology--because information processing is an input into these occupations. On the other hand, it also leads to a relative growth in many in-person service jobs: food service, cleaning, landscaping, personal care, home health aides, security guards. These involve tasks that have proved very difficult to automate, as per Polanyi's Paradox. But the irony is that the supply of workers who can do them is quite abundant. Right? Although it's hard to develop a domestic or restaurant-serving robot, it's not hard for a person with their full physical faculties to hold that job and do it productively, with very little training. For many people, it can be learned in the course of a couple of days.
So we have simultaneous growth of high-education, high-wage jobs and relatively low-education, low-wage jobs. And those jobs are low-wage for a reason: the supply of workers who can do them is potentially extremely elastic. So it's hard for wages to rise in those activities unless there are better opportunities elsewhere in the economy that you have to bribe people not to take. That's the economic explanation for why barbers' wages rise over time even though barbers don't get any faster at cutting hair: they need to be compensated for not doing something else in which they would have rising productivity. So people have to be willing to pay more and more over time for those types of jobs. So, we talk about this polarization phenomenon. And it has been documented not only in the United States but in at least 16 European Union (EU) economies, in recent work by Alan Manning, Maarten Goos, and Anna Salomons. It appears to be a pretty broad, widespread phenomenon: the decline of many of these middle-skill office and production jobs, and the relative growth of both high-wage, high-skill and low-wage, low-skill jobs.

30:18 Russ: So it raises a bit of a specter--think of Tyler Cowen's book, Average Is Over, and the episode we did with Tyler on it. It creates this vision of the future, if these trends continue, that seems a bit ominous: a lot of successful, highly paid, skilled people, and a much larger group of unskilled and poorly paid people. We've been talking about the employment data so far. Talk about what we've learned from the wage data. Guest: Well, the wage data are a great deal more complicated. There are several mechanisms by which changes in task demands translate into changes in wages; I outline three of them in the paper. One is: are you directly substituted, or are you more likely to be complemented? Right? If you are an accountant who can only add and subtract--a bookkeeper--clearly automation devalues your skills: a machine can do that more cheaply. On the other hand, if you are an accountant who understands the conceptual basis of the business, spots problems, and creates valuable record-keeping ideas or ways of organizing information to augment operations, then you are complemented by computerization, because you can accomplish more of the things you are good at in the same amount of time--you have all this hardware to help you do it. Right? So in general, I think it is vastly underappreciated in discussions of automation that automation generally complements us, by substituting for the things that are time-intensive and allowing us to focus on the things in which we have value added. Russ: This goes back, by the way, to Adam Smith, when he talks about the division of labor being limited by the extent of the market, and about the application of technology making people more productive. And it's very important. It has a long history. Guest: That's right.
Adam Smith mostly invokes the division of labor as a way of increasing productivity. I don't think he fully appreciated the role of capital augmenting labor. Russ: I think he did, actually. I'll send you an excerpt; we'll put it up. He talks about a kid doing some mundane task who thinks of a way to mechanize it, and thereby not have to work as hard, making the process more productive. It was primitive; he wasn't thinking of it in those terms. Guest: Okay, I stand corrected. That's fine. Russ: It's surprising. Guest: So, I think it's easy to see the many ways in which machines substitute for the things we used to do. What's harder to see, typically, is: how is that complementing us? But sit back and ask: could you and I actually be having this conversation, could we have a podcast, could I do significant research as an economist, without all this hardware increasing my output per hour? The answer is: not very well. So, the first factor is: are you directly complemented, or substituted? The second factor affecting how automation changes your earnings in a given activity is the elasticity of final demand. In other words, if we get really productive at something but there's a fixed amount of it that people want, then eventually they just buy less and less of it. You see this in agriculture, for example: the vast increases in agricultural productivity stemming from the green revolution and so on have eventually reduced employment in agriculture dramatically. And the reason is that, all evidence to the contrary, there seems to be a finite amount that we can eat. Russ: It's a great example. Food is incredibly cheap, which is a glorious thing; but it doesn't lead, therefore, to: oh, there will be more farmers. There are fewer. Guest: That's right. That's correct. And in other areas, that's not the case.
So, in medicine, for example: we are much more productive at medicine than we were 50 or 100 years ago, when we mostly harmed people. Now we do lots and lots of useful procedures, and demand seems to be extremely elastic. So we spend more and more of our money on medical care, because it becomes more and more valuable as it becomes more productive. Russ: Well, we've subsidized it also, so it's a little complicated. Guest: Well, right. There are many factors. But--okay. Russ: But we do want it. Guest: Right. That's right. And this holds for many of the professions--people seem to demand more of them as they get better. Medical care, or a lot of professional outputs. And then the third factor, which I think is extremely important, is the elasticity of labor supply. If there's an increase in labor demand for medical doctors--they become more productive, so people are more willing to go to the doctor--I can't just read about it in the newspaper and say: oh, great, all this demand for doctors; I think I'll start being a doctor tomorrow. It takes years and years to become one. So productivity increases in those occupations generally translate into wage gains, because you won't get very rapid increases in supply. Russ: But it's not just the time. It's the fact that not everybody is capable of doing it. Which I think is also very important. Guest: True. That's right. Russ: Which I think means those wages will be [?] Guest: You could argue the supply is even less elastic, because it's not just how long you have to go to school, but who is suitable for that work. Now, let's take these three points over to the other side, the low-education side of the labor market, and ask: how is automation affecting employment in, say, housekeeping? Well, gosh, there isn't really that much substitution of machinery for housekeepers. But there isn't that much complementarity, either.
You could imagine that technology would somehow increase the productivity, the amount of housecleaning someone could do per hour. But it's hard to see where that actually happens. Russ: The vacuum cleaner is already invented. So that was good. That was complementary. Guest: That's right. That was complementary. Absolutely. Russ: But it's over. Guest: Exactly. The second point: well, what about the elasticity of demand? Well, it turns out that actually, there's not a lot of evidence that those personal services are very price-elastic. But they are elastic to overall societal wealth. So, when income goes up, people spend more on those types of things. So economic growth can certainly be beneficial for those types of activities. But now let's take the best-case scenario: let's say there's some productivity increase, and then there's economic growth, so demand for these personal services rises. Well, what happens? Well, supply of labor to those activities is potentially very elastic, because there's almost no barrier to entry. People can do them and be productive really rapidly, and they don't need specialized skills or training. That means that it's hard for wages to rise in those activities quickly, because labor supply will tend to dampen that rapidly, especially when people are being displaced from middle-skill jobs. So, we have seen growth in personal services, but wage growth only in the 1990s, when labor markets were extremely tight. And otherwise, even though employment growth has been rather polarized, with growth at the top and growth at the bottom, wage growth in the 2000s and in the 1980s was not polarized. It was rising more at the top and falling more at the bottom. So, the point I make in the paper is: it's easy to understand how these technological changes affect the shape of employment growth--what activities are demanding more and less labor. 
The actual implications for who earns what are mediated through other general equilibrium forces that, you know, tend to benefit the high-skilled and don't seem to be nearly as beneficial for low-skilled workers or low-skilled occupations--low-education occupations, low-education workers--even as numerical employment in those activities rises.

38:20 Russ: So, that's kind of what we'd expect. Right? The idea would be--you go back to earlier times of technological change. When the car comes along, blacksmiths don't make so much money any more. Like, zero, all of a sudden. And they have to turn to something else. And for many of them it's late in life; it's hard to tool up. What we'd hope would happen in the current scenario, and we've seen a little of it, is that the high-end jobs, the jobs that require more education, are paying a lot more, and that would draw people into high-end skill acquisition. It's been somewhat surprising, though, as you point out in the paper, that there hasn't been more of that. And I want to add two things to that and then let you react. One is: it's all well and good to say we need more STEM (Science, Technology, Engineering, Mathematics)-trained people. But maybe there are limits to how many people can do STEM stuff. And so this problem is perhaps just going to "get worse." And then, people who do go on to college don't always study those things. Either they can't or they don't like them or they are not thinking about it enough, or don't want to think about it; maybe they shouldn't. So, I'm trying to summarize what you are saying here and then I'll let you react. At the high end, we have a pretty healthy situation. In that high-end skills situation, we see that the unemployment rate's very low, the wage growth is healthier. The bottom 75%, or whatever the number is--maybe 25%--but somewhere in that 25-75% range, it's not going so well. And it's not obvious that that's going to change through natural responses. That I think is the worry. Guest: Yeah. I think the key is the supply response to the rising return to education in the United States over the last 35 years, really. The return started rising in 1980. 
And it's basically risen almost continuously to the present day--it hasn't grown as much in the last 5 or 8 years. But it's applied[?] to an extremely high level. The supply response has been surprisingly slow and weak. It's been much, much stronger among women than among men. And you know, women vastly outnumber men in college education at this point. And they also are outnumbering them in the professions. And this has been a big puzzle. I don't think anyone knows the answer for why that is true. We did actually see, in the 2000s, an increase in high school completion rates, the first time we've seen that in decades. And an increase in college attainment. So, it's not that the message isn't getting through, but it's getting through very, very slowly. Now, I think part of the reason--this goes back to your question of why aren't more people doing STEM--it's now pretty clearly documented that your college major matters a huge amount for your subsequent earnings, and STEM workers do earn more. And sociology and clinical psychology majors earn a lot less. And that information is known: why aren't people switching? I think one reason--partly it's just taste; some people just really don't like doing those things. Part of it also is poor preparation. The United States has very weak STEM education at the secondary level--at the high school level, and even the middle school level. And so we're way behind other countries. And that makes it much, much harder for people to enter those activities. Even in teaching my Ph.D. students in economics at MIT: the people who go into the fields of theory, the sort of extremely mathematically intensive fields--many, most of them are from other countries, and they have very deep math backgrounds by the time they are through high school. And there are very few U.S. students. 
So, if we get a liberal arts student who enters MIT's economics program, it's very unlikely they are going to become an economics theorist. Now, I'm not saying that's a great loss to the world or to them; but that road is already blocked to them because they didn't have the foundational preparation when they were younger. So, they'll do other things in economics, very useful things. I'm not a theorist; I'm not sad about that. My point simply being that the foundational skills in STEM, particularly the mathematics and analytical training, need to come pretty early. So I think many U.S. students are not prepared to enter the fields that in fact would be more remunerative, as a result of shortcomings in our primary and secondary education system. Russ: We've talked a lot about that recently, and we'll continue to talk about it. That's another issue, obviously. But is it correct to say--I want to make sure people understand the underlying economics here. Is it correct to summarize what you've been saying as follows: At the upper end of the skill distribution, technology complements--that means, enhances--one's productivity. At the lower end, not so much. So that to some extent the wage effect is not going to be as large. And is, therefore, the middle being reduced dramatically--the employment opportunities of the so-called middle--and in the wage distribution? Is the wage distribution polarized? Is it bimodal in any way? Guest: No, it's not as bimodal, because wage growth has been so weak at the bottom. So that was the point I was making earlier: the decline of the middle sort of cascades downward, because people who are in these middle-skill activities can easily move into personal services. Right? So if they are displaced, they are going to put pressure on wages in lower-wage activities as well. 
So, even though the employment is fairly polarized, wages were polarized in the 1990s, but otherwise it's pretty much looked like a downward escalator. But I guess--so, a couple of points on this that I want to emphasize. I think that one should not assume that polarization, even of employment, will go on forever--that the middle will just collapse to zero. It's always dangerous to take the current trend and extrapolate linearly to the vanishing point. That's not likely to happen. A lot of the possibilities for that substitution may already have occurred. And when you look at what's left "in the middle," actually the jobs become more skilled again. So, we have many fewer typists and filing clerks than we used to, but the people who are clerical workers have more skilled jobs than they used to. They are people who organize travel and work out logistics and deal with hard problems--like how to get reimbursed. A nice place to look is also the medical world. There are lots of medical technician jobs, some of which don't require a college degree. But they virtuously combine a set of technical skills with these sort of fluid intelligence skills--being a nurse, being an x-ray tech, being a phlebotomist. And those things pay pretty well and arguably will be growing, I think partly because of the aging of the population. There certainly are going to be highly paid, good career jobs in medical technical work; in the skilled trades, like for example construction or electrical work or plumbing; in skilled repair. And sometimes in this maniacal focus on college for all, we've sort of forgotten that there is a whole set of skilled vocations that, again, I think are complemented, in the sense that they combine expertise and technical knowledge with these very-difficult-to-substitute human capacities. Right? The complementarity is there.

46:34 Russ: So, let me bring us back to our earlier discussion. I can easily imagine a world where a robot--and I know people who literally work on this now--where a robot would give me--I'm in the hospital, God forbid, and I need post-op (post-operative) care. First of all, the operation: clearly technology right now has incredible complementarity, with surgeons using robotic devices. But we could imagine a world where I don't even have a surgeon. The robot takes out my kidney or whatever it is--it's imaginable. But this is much more imaginable: in the post-op, a robot arm dispenses the right amount of the pill that I need. Maybe it not only dispenses the right kind of pill but knows my history and does some diagnosis of me and knows I need a different dose than the person one bed over. It does a whole bunch of things that a human being could do. What it can't do, at least right now, is make me feel better and show empathy. That ability remains, and I think will remain, a human thing. But that's the question--those high-paying nursing jobs might be gone in 20 years, most of them. Guest: Sure. So if those things are not complementary--if we just have people who are paid empaths, who have no medical knowledge and don't need any--those aren't going to be highly paid jobs. There has to be a skill that they possess that is genuinely scarce. Russ: Well, I don't know about that. Being empathetic is pretty scarce. Doing it well--doing it well. Guest: Perhaps. Okay. Look, there's lots of people--okay, let's leave that alone. I do not foresee a time anywhere in the near or even relatively distant future where all the skilled activities are done by machinery and what's left for people to do is sit around and emote. I think there's a lot--in medicine there's a huge amount of skill in diagnosis and figuring out what someone's actual problem is. And it's more than just a chemical problem. 
It's a sort of set of complementary activities that allow that person to recover. Actually, my sister-in-law is a person who goes and helps the elderly and infirm to figure out a workable life. Russ: That's huge. Guest: She's a highly, highly trained nurse. But she's also a person who is a problem solver. I think in general people are complemented much more than they recognize by the things that make us more productive, in a variety of ways. My sister-in-law is complemented by her modern automobile that gets her reliably from one person's house to another. Even though, of course, it means in theory fewer people like her are needed in the course of a day to reach a given number of people. Russ: Which is a good thing, because it means it's cheaper and people can afford it; and there's a very elastic demand for those services. Guest: Exactly. It's difficult--I think the challenge, though maybe it's a challenge for our imagination because we're being too optimistic, is to try to figure out: What will those new activities look like? So I give an example in the paper: at the turn of the 20th century something like 38% of all U.S. employment was in agriculture. A hundred years later, 2% of all employment was in agriculture. I think if you'd asked farmers at the turn of the 20th century, 'What do you think everyone will be doing a hundred years from now?'--and especially if you told them, 'And by the way, only 2% of the people will be in farming'--well, they knew at that time that farming was declining. In fact that was the genesis of the high school movement in the United States, to send everyone to high school, because they recognized the future was off the farm. But they wouldn't have been able to say, Oh, I think it will be software, health services, business services, entertainment, hotels, the movie industry, video games. That would have been impossible to predict. 
And similarly we find ourselves at a point where we've gotten a lot faster, a lot better at automating a lot of things that we thought were not readily automatable. I mean, just to give you one personal example: I remember, some 25 years ago, I was working as a temp at GTE[?] in their library, as a kind of librarian assistant. And a guy came up to me and started telling me about this thing he was doing on his computer; he was going to have the computer search, help you find articles that you needed. And I said, 'How are you going to do that?' And he said, 'We'll just read through the abstract.' And I said, 'But it doesn't understand language.' And he goes, 'Oh, no, no, no. It'll just recognize key words.' And I was like, 'Oh, yeah, good luck with that.' I couldn't have been more wrong. But the point is, we have gotten good at things that we thought were exclusively human. We've learned how to automate them. So we're at a period where all of a sudden we see the frontier of what we can automate advancing very quickly, but we don't know what is going to replace the work that's displaced. And we sort of feel: is this time different? Is this the time in which all of a sudden we just run out of things for people to do? My own view is, no, we won't. On the other hand, it certainly is disruptive, and certainly for people who don't have some complementary skills, it's bad news. If you took the workforce of the turn of the 20th century and brought them to the 21st century, many of them would be unemployable, because they would be innumerate, and a substantial fraction would be illiterate as well. So you have to have skills that are complemented by the modern set of demands, and many of those skills are a combination of our onboard equipment--the kind that allows you to work as a housecleaner--plus a set of scarcer skills that, combined, allow us to add more value.

52:34 Russ: Well, to take a recent episode that we have a lot of knowledge of: when outsourcing--that is, sending jobs overseas, sending tasks overseas--started to grow dramatically, a lot of really smart people said, 'This is different from the usual gains from trade, and it's going to have enormous impacts on U.S. wellbeing and workers' wellbeing.' And there were, as you point out, some workers who were negatively impacted by it. But people grossly overestimated that trend. They extrapolated it way too quickly. They neglected the gains from keeping stuff close to home--the fact that distance still matters. And I think the punditry and lots of well-trained economists--and people listening to them--overreacted to those fears. I think the issue here is, as you say, whether this is different. Talk about the argument of your colleagues Brynjolfsson and McAfee that we should race with the machine. As you've pointed out, I think very elegantly, in lots of analysis in our conversation and in the paper, there are a lot of ways that technology helps us, makes us more productive, and creates new opportunities that we can't, as you say, imagine. The question is: other than that nice platitude, 'race with the machine'--which sounds nice, instead of racing against the machine like John Henry--what does that actually mean? My suspicion is: there's no way to forecast that. Obviously those 19th-century, turn-of-the-20th-century workers wouldn't be very useful now. We do need a better school system, and we do need ways for people to be interactive with machines in the way that only a small fraction right now can be. But having said that, I think there are grounds for optimism in just human creativity in coping with it. So, I'm not as worried as the pessimists, but I do think making our school system more flexible would help a lot. Guest: I agree. I don't mean to be Pollyanna-ish here. 
I find myself in an ironic position, in that I've been arguing for the last 15 years that computerization has had a very large effect on the labor market, and I've in some ways been out in front of that argument. I'm now telling people not to panic. Because I think they are missing something--they have become convinced: computers are substituting for people. And they sort of have forgotten the second half of that: that machines are complementing us in other ways. But absolutely, I think the human skill that is most complemented by automation is flexibility. And the thing that makes us flexible isn't just muscular flexibility. It's problem solving, mental acuity, the ability to apply fluid intelligence to unforeseen situations, whether they are personal interactions or math proofs or scientific hypothesis formation or persuasion. And so, when people say, 'Should my kids be spending all their time studying Java?' I'm like, 'No.' I think they should learn very strong math foundations. They should also learn how to write, to speak effectively, and to work in a team. And those skills are very broadly applicable. One of the great advantages of people is that they are adaptable, and they are able to reinvent themselves, because they have the foundational skills that allow them to do that. And so in a time of change, being adaptable is a valuable fundamental skill. Russ: Yeah. I used to argue that that's why you should go to college and get a general set of skills. I now think college is overrated. But the parts of college that people ought to be focusing on are math and communication. It's not all STEM. STEM helps, but then there's communication; there's, as you say, problem solving; creativity. These are all things that can be improved through thinking about them a little bit, anyway. Guest: Yeah. Well, at MIT, we're always telling people to learn how to write. 
Everybody can solve a differential equation, but they can't write a paragraph telling you how they did it. Russ: Yeah. Neither could Andrew Wiles.

56:52 Russ: I want to close with an example that I thought was really interesting, and then I'll give you one more chance to finish up. Talk about the company that you identify, which is Kiva, because it's an interesting little case study of how technology and humans interact. And it's not to be confused with kiva.org, which is a very interesting philanthropy opportunity that we've talked about on the program before. This is a warehousing, inventory company. Talk about it. Guest: That's right. So, Kiva, I think, is a beautiful example, an iconic example, of environmental control--how do we deal with machines that are inflexible? We make the environment predictable so it doesn't demand flexibility. So, the basic problem: companies like Amazon--which now owns Kiva, although it was actually an MIT-based startup--and many other large warehouse companies do a lot of direct consumer sales. They have these massive warehouses--a warehouse will be multi-million square feet; it's too expensive to air condition them. And you have stuff all over the place, a huge variety and number of objects. And so, traditionally--and it sounds sort of funny to say 'traditionally' when you are talking about Amazon--but historically, Amazon employed so-called pickers, basically athletic young people who would run through the warehouses with little computers on their wrists that would tell them where to look for an object; they would climb up over a shelf; they would look at the object and make sure it was the one they wanted; they would grab it and bring it up to the front, and it would be boxed up. It's a difficult job--it involves running or walking 10-15 miles a day. It's hot and sweaty. And there's no robotic substitute for those people. There's no cost-effective machine that you could employ that would do that job. 
So what Kiva does--their idea is: let's reorganize the warehouse in such a way that we vastly reduce the need for a human touch. In fact, Kiva talks about the idea of increasing the value of a human touch. The way this is done is, the warehouse is run basically by the Kiva software. And when goods come in at the loading dock, the software is told: here are the objects that are coming in. There are then people who take those objects off the pallets and put them onto shelves. But they are not ordinary shelves. They are shelves that are driven by little robotic drives. In fact the robots look like those old canister Hoover vacuum cleaners. They drive along the floor, go under a set of shelves, raise themselves up a few inches, and then drive the shelves with them. That's all they do. It's not a very complicated robotic feat. The shelves come, and laser pointers controlled by the scheduling software point and tell the people, 'Take this object; put it on this shelf.' So now the computer knows where the objects are, not because it can see them or recognize them, but because it told you where to put them. Then the robots whisk the shelves away into the warehouse--or, I should say, the robots don't organize them; the controlling software optimizes the warehouse according to the flow of goods. So it's going to put things that are used frequently near the front; things that are used infrequently near the back; things that are ordered together, it's going to put together. Then as orders come in, the robot drives go again. Let's say I order from Amazon a book, a box of diapers, and a video game. The robots then go collect the shelves that contain those objects. And they line up for another human picker. That person stands there, and the shelves drive up to the picker. A laser pointer on the ceiling points to the object that the picker is supposed to pick. The picker takes it off the shelf, puts it in a box, tapes[?] 
these three objects together, sticks a label on it, and sends it off. And then the robots whisk the shelves back to the warehouse. So there are only two points in the system where people are involved in physically handling objects: when they are put on the shelf and when they are taken off the shelf. All the rest of the transportation and sorting of objects is done by the robots. But the reason they are able to do this is because the need for all that dexterity, flexibility, and visual recognition has been pared down to these two points of contact: the loading and the unloading. Everything else is transportation. And if you talk to them, people basically say, 'When are you going to put in hands, these robotic hands, that will do what the workers do?' And they say, 'That's not cost-effective.' Making sensory robotic arms that will deal with all these objects and all their inconsistencies is really, really expensive. That's what people are for. What we do is we take almost all the rest of the routine activity and we mechanize that. And so, what you see there is, one, this environmental control occurring, where you've reshaped the environment to make it very consistent and predictable. And, two, this sort of redivision of labor according to comparative advantage. The comparative advantage of machinery is, you know, moving heavy objects in a hot, confined space. The comparative advantage of humans is recognizing objects and handling their inconsistencies in a delicate way. And that is the division of labor that you see occurring in that warehouse.
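[The slotting logic described here--frequently ordered items near the pick stations, rare items in the back--can be sketched in a few lines. All names and numbers below are hypothetical; the actual Kiva/Amazon Robotics software is of course far more elaborate, and it also co-locates items that tend to be ordered together, which this sketch omits.]

```python
# Minimal sketch of frequency-based slotting: the most-ordered items
# get the shelf slots closest to the picking stations.
def assign_slots(order_counts, slot_distances):
    """Map each SKU to a slot, busiest SKU first, nearest slot first.

    order_counts: {sku: number of recent orders}
    slot_distances: {slot: robot travel distance to the pick station}
    """
    skus_by_demand = sorted(order_counts, key=order_counts.get, reverse=True)
    slots_by_distance = sorted(slot_distances, key=slot_distances.get)
    return dict(zip(skus_by_demand, slots_by_distance))

# Hypothetical order history and shelf locations.
orders = {"diapers": 900, "video game": 450, "obscure book": 12}
slots = {"A1": 5, "B7": 20, "C9": 60}   # meters of robot travel
print(assign_slots(orders, slots))
# {'diapers': 'A1', 'video game': 'B7', 'obscure book': 'C9'}
```

[The point mirrors the transcript: the software never sees or recognizes the objects. It only remembers where people put them and minimizes robot travel for the popular ones.]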