0:33 Russ: I want to remind listeners that we're doing a survey for your favorite episodes of 2015. You have until January 31st, so please go to econtalk.org and in the upper left-hand corner you'll find a link to the survey. So, please vote.

0:51 Russ: Intro. [Recording date: January 11, 2016.] Much of your contribution in econometrics is related to what is sometimes called selection bias--the people or data we observe may not be like the people or data we don't observe. How much progress do you think we've made in this area? Guest: Oh, I think a lot of progress has been made. I think the initial literature was pointing out the problem, which had been neglected by many economists. Not all, by any means; but it had been a problem that had kind of been swept under the rug and stayed that way for many, many years. And when the issue became known--got into the public discussion in economics--two things happened. One was that people became more data-sensitive, and this triggered responses that weren't purely methodological, that consisted of people collecting better data, which is always a very good thing to do. And second, it also suggested something which I think links us closely to economics, which is basically that the selection decision generally, at least if it's self-selection by agents, really involves a lot of economic considerations. And so when we think about things like labor supply, unemployment, even voting, or other kinds of questions, choices are involved. So it stimulated some work also in linking economic analysis of data to economic choice analysis. So it had these two branches, I think, which are still very active today. Russ: It's a huge problem, though. You give a wonderful example in one of your pieces where you talk about when we try to assess progress made by African Americans over, say, 1950-2000. There's substantial progress over the first few decades of that period; but unfortunately it appears that much of that progress is measured rather than real, because many African Americans are not in the labor force--and those who are not are not like the ones who are in the labor force.
And so the rise in measured, say, average wages, is due to the fact that some of the lowest-wage workers are not in the data. Is that correct still? Guest: That's correct. The biggest source is of course incarceration, where black males are literally taken out of the labor force. Many black males who are in prison are less than high school graduates, or maybe just high school graduates or GEDs (General Educational Development), and they tend to be lower-wage people, so they tend to be ignored in the official statistics that you see reported. But it's not necessarily always in that direction. There's some evidence, for example, in the last 25 years, that if you look at the wages of women in particular, what was being found was actually that, increasingly, starting in the 1980s, more educated women were working more. The big growth in labor force participation and employment of women came among the most educated women. And it turns out that those were some of the most highly educated and higher-wage women. Therefore some of the growth of female wages may well be a consequence of the fact that the women working more are more educated--women who essentially have higher wages and higher wage potential. So this problem generically affects a lot of social statistics. People want it to go away, but it's there. Russ: Well, they like to ignore it, is what I've found. I'm a big critic of the failure to take account of demographic changes in household structure. When you are using household data on inequality or home ownership and you have huge increases in single heads of households, it's inaccurate to compare over time without correcting for that, it seems to me. And yet people just happily go ahead and do it. Guest: That's extremely relevant today when you think about the way that household inequality is measured.
There are two different issues here that show up in a lot of discussions and just get totally confused. The one that you suggest is certainly highly relevant: namely, that a big contributor to growth in household income inequality is the growth of single-parent households. And we know that those are very, very unequally distributed. A second big factor--I don't know if you want to call it selection bias so much as definition bias--is that some of the most dramatic statistics about the rise in inequality are based not on the household as the data unit but on what are called taxpayer units. Taxpayer units and household units are very different objects. So, being careful about these definitions, and making sure that we hold the composition of the workforce--and of whatever we are trying to measure--constant, is extremely important, I think, and still something that gets easily neglected. Very hard to explain in a single word or two; and I think this gets lost in public discussion.
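[Econlib Ed.: To make the selection effect described above concrete, here is a minimal simulation with entirely made-up wage numbers--nothing below comes from the conversation itself. Measured average wages rise purely because low-wage workers drop out of the sample, with no change in anyone's actual wage.]

```python
# Hypothetical wages for ten workers (made-up numbers).
wages = [10, 12, 15, 20, 25, 30, 40, 50, 60, 80]

def mean(xs):
    return sum(xs) / len(xs)

# Period 1: everyone is in the labor force, so everyone is in the data.
observed_1 = wages

# Period 2: true wages are unchanged, but the two lowest earners have
# left the labor force (e.g., through incarceration or non-participation)
# and so vanish from the wage statistics.
observed_2 = [w for w in wages if w > 12]

print(mean(observed_1))  # 34.2
print(mean(observed_2))  # 40.0 -- "progress" that is measured, not real
```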

6:21 Russ: On the topic of wage growth, you mentioned women. It's rather striking how men's wages have been remarkably flat, at least in some data sets; that avoids the household issue. For a long period of time, when productivity has been rising, when overall incomes have been rising, when per capita GDP (Gross Domestic Product) is rising, male earnings are remarkably flat, at least corrected by conventional measures of inflation. Have you looked at that? Do you have any thoughts on that? Guest: Yes, I have. I think there's some very interesting work. In fact, it's interesting--this last week I was at the American Economics Association meetings and gave a course on inequality with Steve Durlauf--so for three days we met with a group of students, largely faculty and graduate students, who were interested in this question. And I discussed the latest evidence, which I think is extremely interesting and important, namely the discussion about the CPS (Current Population Survey) [?CPS-U1-U6, unemployment or underutilization measures] and BLS (Bureau of Labor Statistics) [?]--the deflator and what exactly the true measure of wellbeing is. Russ: Good luck. Guest: Well, I realize it's a controversial discussion. But what's interesting is the following: if you look at the bottom--take, for example, a measure that received a lot of attention: Two years ago there were official reports saying that if you took the poverty rate in the United States in 2014--that was the year, I think, this was calculated for--and compared it to the beginning of the War on Poverty in 1964, basically we were at about the same level. Maybe a little lower, but the poverty rate was the same. However, people who calculated this carefully realized two things. One, there has been progress in the true cost of living--the cost of living has gone down, and there has been tremendous growth in quality. If you look at so-called chain-link indices you are going to find substantial growth in real income.
And secondly, not only quality, but you also had real reductions in price, especially in the basket of goods--the so-called Walmart basket--that a lot of the poorer, less affluent people would be governed by. In other words, that's the group that is probably the least advantaged among the population. And it turns out there has been substantial progress. And if you add to that additional transfers and program changes, many economists and some sociologists--even some strong advocates of research on poverty and advocates for the poor--would change the U.S. official poverty rate from 14% down to about 5%. So I think we've made tremendous progress. But it's partly a matter of dimensions that have to do with unmeasured components. And I think this gets easily lost. And on top of that, a lot of the standard data sets that we use for measuring wages and a lot of these things--even consumption expenditure--are showing increasing nonresponse rates. And that's a real problem. So, when people adjusted these things--and there are judgments made, no question about it--there does seem to be more progress and less stagnation than you hear out there in the public discussion. Certainly in the Presidential debates. But, I mean, there is an issue that real income growth has not been uniform across the different levels of the income distribution--different percentiles of the distribution. But let's go back to one thing you mentioned at the very beginning here, Russ, and that is: think about selection and look at the fact that the labor force participation rate of males has been declining, even at prime age. This common measure that people use, the so-called 90-10, which compares the 90th percentile people to the 10th percentile people at the bottom, say--the composition of those percentiles is changing over time. So our comparisons aren't stable.
People think of a percentile--the 5th percentile, or even for that matter the median--as referring to a stable group of people. It's not. There are multiple skills and there's been a lot of selection, and a lot of the estimates don't adjust for it. So I think the world is not as pessimistic as what looks to be the case from the unadjusted raw statistics. That's a long-winded story. Russ: It's really important. Guest: I think it's a very important discussion which people just ignore. And I think it's become politically convenient for many people to argue that we're getting declining real wages. I just don't think we are getting declining real wages. I mean, there are a lot of issues, but I don't think the real wage has actually declined. And even people using Census data, CPS data--the Current Population Survey data--recently have not seen declines. But I think if you properly adjust you actually see some real aspects of growth. Russ: When you say people are careless about it--I expect politicians to be careless about it. I'm disappointed when economists are careless about it, because for whatever reason there are a lot of incentives there, either publication or to get attention or to be influential. Guest: Well, I think that's part of it. As you know, in any profession--I guess economists are no different from others--making big, striking statements, something that's dramatic--making a splash is really important.
But in this case I think there's a whole group of so-called poverty researchers--people who are focused on income inequality--who just established a convention of defining a skill level as a percentile in the Current Population Survey distribution, making no adjustment for the fact that the composition of the people at that percentile has changed. It's a little bit as if, you know, the same person were always at the same percentile--at the 5th percentile--and these percentiles are treated as really stable objects. And they are not. And they are not describing the same people. Russ: And it's not just that they're not the same people--it's that they don't have the same characteristics. So, just to take the earlier example, household composition at the median is radically different than it was 40 years ago. Guest: Oh, exactly. Russ: And so when you make those kinds of comparisons, it's an apples-to-oranges comparison. Guest: Exactly. And I think that's a very common fallacy, but I've not seen any Presidential candidate--not that I follow them that closely--or any political candidate even discussing that, or even qualifying the claim that real incomes have gone down. So there's an endless air of pessimism that actually seems to be governing both sides of the political debate, Republican and Democrat. Russ: Yeah, I agree.
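[Econlib Ed.: A small illustration, with made-up numbers, of the composition problem discussed above: every household type gets 10% richer, yet the overall median falls because the mix of household types shifts toward single-parent households.]

```python
import statistics

# Stylized household incomes in thousands of dollars (made-up numbers).
# Period 1: 8 two-earner households at 60, 2 single-parent households at 30.
period1 = [60] * 8 + [30] * 2

# Period 2: every household type is 10% better off, but the composition
# has shifted to 5 two-earner and 5 single-parent households.
period2 = [66] * 5 + [33] * 5

print(statistics.median(period1))  # 60.0
print(statistics.median(period2))  # 49.5 -- the median "falls" anyway
```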

13:30 Russ: I want to switch gears. There's been a lot of enthusiasm for randomized control trials in economics, particularly in the area of development in poor countries. But this goes back decades, as you pointed out, in labor economics: experiments looking at the negative income tax, the effect of training programs. How useful are these techniques and how reliable are their findings? Guest: Well, let me go back--you are absolutely right. As you know, in social science--and economics is no exception--there are these eternal cycles. People get on those bandwagons; then they get off them; and then they get back on the bandwagon. But they are new people. So, the wagon keeps rolling but the occupants are changing. I remember, as a graduate student at Princeton University, the enrolling of some of the very first participants in the negative income tax experiment. That was an experiment suggested by research by Milton Friedman, suggesting that one effective way to transfer income to the poor while preserving incentives would be a negative income tax. There was a graduate student from MIT (Massachusetts Institute of Technology)--her name was Heather Ross. She, along with several others, was the mind behind creating this program and trying to evaluate it. And one of the great legacies of the negative income tax studies was actually modern econometrics, or micro-econometrics-- Russ: Yeah, it's true. Guest: Precisely. Precisely because the experiments were so messed up, and people did not understand when they were designing the experiments how much choice there was, how much attrition there would be, how much individuals would respond to incentives in ways that weren't even thought about. So, the first round of experiments was generally viewed as a failure.
I think John Cogan's testimony before Congress in the late 1970s or early 1980s was the capstone of that failure, in the sense that he pointed out all of the variety of estimates and the need for using econometric estimates to adjust for the non-compliance, the self-selection, the nonresponse, and on and on and on. So that all was put to rest. Meanwhile, the faithful continued. And there was a large group of people working largely for government consultant organizations, big organizations like the Manpower Demonstration Research Corporation, which still continues. And so there's been a constant faith, despite what I would call extreme failure. Nobody believed the New Jersey negative income tax experiments; and the later Seattle experiments--nobody believed them, because they were so heavily compromised by a whole set of other issues. But still there has been this notion out there, which is popular. People understand: you toss a coin, you randomly assign aspirin to one group and no aspirin to the other, and you make a comparison. It's so easy. It's so compelling. And it's so misleading in a social context. And I say that--it's not just in development. So you get various people who have come along and picked up the banner. You know, Esther Duflo has certainly been carrying the banner forward in development; and Banerjee. And I'm not saying that the experiments don't add to the data sources that we have. But I think there are subtleties. Some of these points I made in a paper many years ago; Angus Deaton revisited some of those points in the context of development. But they really came to this: That people, when you experiment on them, are acting in a purposeful way. The most striking example, I can say, is the recent Head Start Impact Study that was put out a few years ago by, I believe, the Department of Education.
And the Head Start study, if you look at it, randomly assigned people to Head Start--at least to some Head Start centers--and then denied access to others. They didn't deny it permanently. They denied it over a window of opportunities. And so the experimental results were reported: the treatment group really wasn't doing much better than the control group in the experiment. But as people looked at that study, they found exactly what we had found in earlier Manpower studies and the like. Namely--what did the control group people do? Well, first of all, the random assignment generally has to be among people who are interested in taking the program in the first place. Okay? So, basically, whether it's a job-training program or Head Start, you randomly deny access to people who apply and are accepted. That's the standard. It doesn't have to be--in drug trials, not necessarily so. Well, what happens is people who are denied access to the drug or the job-training program or Head Start will actually try to find substitutes for it. In some earlier work, we found, during the time when AIDS (acquired immune deficiency syndrome) really wasn't treatable, that when random assignments were made in AIDS trials of what was thought to be an effective drug for AIDS patients, the subjects involved in the experiment were so threatened that they ended up sharing their medicine with the controls. It was a blind trial, so nobody knew who had the treatment and who had the control. But treatments and controls knew each other, and they basically just randomized within themselves: everybody got a share of everything. They at least got half a loaf rather than none whatsoever. And this was certainly true in the job-training programs, where people who were denied one job-training program would enroll in another. And there were a lot of substitutes out there, back in the 1990s and still today.
But in the case of Head Start, which is relevant, there are a huge number of other childcare programs out there, including other Head Start programs. So, it turned out that a big chunk of the people who were in the so-called control group were also getting into a Head Start program--or maybe a program better than Head Start: you know, some substitute they could find. So, again, economic choice theory had its way. And the control group was heavily contaminated by this. And so a simple treatment-versus-control comparison was not informative. Russ: Understating the full impact. Guest: Clearly understated. It's literally like comparing--I have a Washington State apple here on the left and a Washington State apple here on the right; and gee, there's no difference between apples. Which is fine. But it doesn't answer the question of whether an apple is a good thing to eat versus nothing. And that's literally what was going on. So, I think what's happened is this eternal optimism. People think they understand it. These other questions are too subtle sometimes. People just don't want to--and they say, 'Oh, here's the experimental evidence.' And the experimental evidence, I think, actually has to be treated with a real grain of salt sometimes. A lot of caution. And people don't. It depends. For example, there were studies that were done in India about the effect of small lending programs. One group of people introduced into an area of India a lending program for disadvantaged people--generally women or small borrowers. And the idea is: Do these programs have any effect? And that particular intervention showed no effect whatsoever. But later analysts looked into it and said, 'Oh, wait. It turned out that when that program was introduced into that particular part of India, there were 40 other very comparable programs already in place.' So, literally, there were perfect substitutes for the treatment.
So the randomized trial was completely compromised by failing to think about the substitutes. There are a lot of issues that arise with randomized trials. So, I think the issue of randomization--it's a good idea: extra source of variation is good. But you've got to be careful. And I think people aren't. And I think that's been a problem with the interpretation of this data.
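[Econlib Ed.: A minimal simulation of the control-group contamination problem described above. All numbers are invented: the program's true effect is +10, but 60% of denied controls find a substitute worth +8, so the naive treatment-control comparison badly understates the program's value.]

```python
import random

random.seed(42)

N = 100_000
TRUE_EFFECT = 10.0        # effect of the program itself (assumed)
SUBSTITUTE_EFFECT = 8.0   # effect of a close substitute program (assumed)
SUBSTITUTION_RATE = 0.6   # share of controls who find a substitute (assumed)

def outcome(treated):
    base = random.gauss(50, 5)  # baseline outcome, program aside
    if treated:
        return base + TRUE_EFFECT
    # Denied controls often enroll in a comparable program instead.
    if random.random() < SUBSTITUTION_RATE:
        return base + SUBSTITUTE_EFFECT
    return base

treated = [outcome(True) for _ in range(N)]
controls = [outcome(False) for _ in range(N)]

naive_estimate = sum(treated) / N - sum(controls) / N
# The naive comparison recovers roughly 10 - 0.6 * 8 = 5.2,
# about half the true effect of 10.0.
print(round(naive_estimate, 1))
```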

21:55 Russ: Well, let's look at more traditional techniques. We had Josh Angrist as a guest on EconTalk, talking about what some call the credibility revolution in econometrics: new research designs and other measures to avoid estimation challenges when we apply econometrics to microeconomics. Are you a fan of that literature and those new techniques? Guest: Well, I'd say two things. First of all, the so-called new techniques are not so new. They involve Instrumental Variables, which I think go back to Sewall Wright or his father, Philip Wright, around 1928. Secondly, Instrumental Variables have been a central part of econometrics for the last 70, 80 years. So, the methodology of instrumental variables is not new. I think that the so-called-- Russ: Just to clarify, for non-econometricians: Instrumental Variables are ways to try to control for the worry that causation might run in both directions, or that there's a bias in your estimation because of the complexity of those interactions. Right? Guest: Right. Well, you can think of an experiment like we were talking about as an example of an Instrumental Variable (IV). So, for example, think of randomly assigning somebody--forget about the problems with the experiment. If you randomize, the assignment is neutral between treatments and controls: any unobservables among the treatments will be balanced with those among the controls. In a randomized trial, the randomization assigns some people to a treatment and denies others treatment. That's in an ideal world. That's what an IV does. But it's not just a totally random assignment. It's assumed--and it's a big assumption--that the instrument balances, more or less, the unobservables between the treatments and the controls. And the application of the instrument, you know, moves people in one direction versus the other.
So, you can think of randomization as just a special case of Instrumental Variables. So, yes--sorry not to define it. Russ: That's okay. Guest: But this whole idea of the credibility revolution--it's very good; it's good for sales; and I'm very happy to see sales and consciousness-raising. But on the idea of the so-called credibility revolution: First of all, I think we have to properly attribute a lot of its basic thrust to Ed Leamer, who wrote a book in 1978 called Specification Searches, where he raised a lot of questions which are still on the table today. In fact, I would say a lot of the work in econometrics about robustness and sensitivity was presaged and pretty well described in that 1978 book by Leamer--which I think is still available online. But I think the idea of the credibility revolution came from this: there is some value in being aware that a lot of conventional econometric procedures--you know, assumptions about linearity, assumptions about normality, distributional assumptions, functional form assumptions--did, and were documented to, actually change the nature of the empirical work that came from them. And it was very hard sometimes for people to reproduce the findings of one study in some other study. And so it became kind of a cloudy, cloudy world out there. People weren't sure what they were getting from all of these models. So I think there was some general thrust that was true in the whole economics profession, starting in the mid-1980s, about the time Angrist and others started this credibility revolution, when they were starting out in graduate school: a lot of the previous structural work had really not delivered on its promise. There was a lot of fragility. So, fragility is the key here.
But unfortunately, what I see as one of the negative sides of this so-called credibility revolution is a lack of interpretation of what's being estimated. I think the goal of econometrics, as opposed to statistics, is to ask economic questions and to answer those economic questions. And I think that means you start with a question; and the question is: 'Why am I doing this, and what economic question am I developing? What am I really answering?' Like, say, Cogan's work: When I change the negative income tax, or when I make the incentive scheme for work steeper--for example, if I reward work more by paying higher wages or letting people keep more of their earnings--do I get a greater labor supply response? Or do I get less? This was cast in terms of what classical economics would call income and substitution effects--you know, substitution effects moving people toward something, compensating for real income; income effects making people wealthier, buying more goods that are desirable. Unfortunately, the credibility revolution has taken this notion that there's some missing variable out there, some unobservable, and that we want to control for that unobservable, to a new level, to a new extreme. So much so that there seems to be an obsession with making sure that we don't have this unobservable contaminating our result, without asking the question: What is it that we are getting from this instrument? What is it we are getting? So it's kind of traded away, so that--we're more credible, there's less bias; but it's less credible in the sense that we don't know what we are estimating. And so what's happening is that much less use of economics is being made. And as a result it becomes very difficult to use this--for policy purposes, for anything. So the high point of the credibility-revolution kind of work, or the instrumental-variable type of work, would be: Suppose that I have a policy, and I impose it in a given environment so that it's literally as if randomly assigned.
I impose, say, in one state a certain kind of withholding scheme on a tax payment--say, Social Security taxes. Say I do this for Georgia but I don't do it for Mississippi; and I standardize for differences in the ethnic and social compositions of those two states. That's going to answer a very specific and useful question: If I impose that tax on a certain group of people whom I study, how much does that tax change their behavior--say, retirement or labor supply or work behavior or unemployment search behavior? That will be a specific thing. But generally speaking, it's very costly and very unrealistic to imagine that every policy we are ever going to find or ever be interested in will be a policy that we can exactly replicate. Generally we think that substitution and income effects--these basic economic parameters--are what govern responses to policy. And the whole promise of econometrics back in the 1940s, when it was really started in a rigorous way here at the Cowles Commission at Chicago, was really to try to uncover the basic economic parameters that govern behavior. So I think that there's been a huge shift away from trying to understand behavior and toward statistical artifacts that are hard to interpret as responses to economic questions. So I think the credibility revolution has been somewhat overstated, and probably not properly appreciated as having really turned the focus away from serious economic analysis toward something that I think is more purely statistical.
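[Econlib Ed.: A minimal numerical sketch of the instrumental-variables idea described above, with invented parameters. An unobserved confounder biases the ordinary-least-squares slope upward, while the simple IV (Wald) estimator, which uses only the variation induced by the instrument, recovers the true effect.]

```python
import random

random.seed(1)

N = 200_000
BETA = 2.0  # true causal effect of x on y (assumed)

z, x, y = [], [], []
for _ in range(N):
    zi = random.gauss(0, 1)          # instrument: shifts x, excluded from y
    ui = random.gauss(0, 1)          # unobserved confounder
    xi = 0.8 * zi + ui + random.gauss(0, 1)   # x is endogenous via u
    yi = BETA * xi + 3.0 * ui + random.gauss(0, 1)
    z.append(zi); x.append(xi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)  # biased upward: x is correlated with u
iv = cov(z, y) / cov(z, x)   # Wald/IV estimate: uses only z-induced variation

print(round(ols, 2))  # noticeably above 2.0 -- the endogeneity bias
print(round(iv, 2))   # close to the true 2.0
```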

30:01 Russ: Well, let me take an example that is talked about a great deal, and I want to set it in the context of what I often hear from younger economists. Guest: Yes. Russ: I'll hear people say things like, 'Well, I just looked at the data. I just look and see what the data tell me.' And, 'I don't need theory,' or, 'I don't want to use theory to bias my understanding of what's going on in the data.' An example of that would be the minimum wage debate. So, when I was growing up and when you were growing up--you are a little older than I am, but we are of somewhat similar generations on this issue--there was no debate. There was an overwhelming consensus by economists. In fact, what made economists distinctive from everyone else was that we thought that minimum wages had a cost: they hurt employment opportunities for low-skilled people. And Card and Krueger came along; they did the state-boundary kind of comparison you were talking about; and they found very different effects than the traditional econometric literature. And that spawned an enormous literature suggesting that minimum wage effects are either small or even positive on employment. But mostly close to zero, as a lot of people would argue. I wouldn't, but that's what they say. What do you think of that debate? Guest: Well, it's interesting. That's a very good example, Russell. So, let me step back for a second. First of all, some of the most recent work--for example, a paper that Tom MaCurdy published in the Journal of Political Economy last spring--suggested that in fact there are other mechanisms by which firms can respond to higher wages. For example--and work by a student of Card's at Berkeley, a minimum wage study in Hungary a few years ago, is consistent with this--firms can actually increase prices. So, instead of reducing employment, they can actually increase prices. And they can pass it along.
It all depends on the price elasticity--how inelastic the demand for the final product is. So, just from basic economic theory, it's not necessarily always going to be a reduction in employment. I mean, that's going to be a force in that direction, but there are other ways that firms can respond to higher cost shocks. So that's one thing on the table. The second thing is that if you look at the Card and Krueger analysis and a lot of the subsequent analysis that came from that line of work, I think some of it was fairly casual. To put it mildly. And if you look at some of the work by David Neumark and some of the other analysts who have looked at this quite carefully, I don't think that the large thrust of the work is actually saying that minimum wages have no effect on employment. I think that study in particular had certain issues that were pointed out by Neumark and by others, about just what else was going on in those two states at that time. You see, to the casual observer, you kind of say: I'm on one side of the Delaware River and you're on the other side, so, 'Here's New Hope on one side in Pennsylvania, and there's a counterpart just across the river, and those should be pretty similar.' But there are a lot of state policies that are different, and the compositions are different. And so it wasn't quite as easy, I think, as people wanted to make of that comparison. So, in this way it sounded very compelling. But as you examined it more closely, I think people started thinking, well, maybe there really could be some effects. So, in terms of the minimum wage debate, I think it's still ongoing. I think there are cases, theoretically--where your firm is a monopsonist, for example--where you might actually increase employment. That's a classic case--Joan Robinson, I think, had that case or some version of it in the 1930s.
But I think more generally the evidence does suggest that the structure is one of increasing costs; and then the costs are passed on in various ways. So I think the debate--you see, what was compelling about that, and I will say it was compelling if you read the book, was that it looked on the surface to be a very, very nice comparison, like a natural experiment, where you had an increase on one side but not on the other side. But also don't forget another key point that frequently gets lost: the range of changes in wages that was being considered in those studies was actually fairly limited. These were fairly small changes in the minimum wage. When we get to a change like, for example, the coverage of Puerto Rico by the U.S. minimum wage in the 1930s--or even today--we are getting huge increases in the minimum wage, where you are moving the bottom of the distribution up toward the median. And I think any economist, including Card and Krueger, would argue that those are changes that would probably lead to substantial disemployment effects. What I'm saying is that minimum wage changes are not all the same. Some are bigger, some are smaller. So, if I were to tell you that if you smoke one cigarette a day you are not going to be hurt that much, as opposed to smoking three packs a day, I don't think you'd be too surprised. And I think a small change in the minimum wage is not going to have much of an effect. I think that's what the findings have been. And David Card, anyway, when he's been asked about this, has said repeatedly that they are talking about modest changes in the minimum wage. Which is different from the parameter that says what happens if I boost the minimum wage by 50%. There's got to be some response to that. It's just out of the range.
And this is the kind of counterfactual--the idea of a policy parameter that we haven't yet seen, except maybe in the case of Puerto Rico--that would be very important to know in designing policy, but that a simple available observational study and simple experiments won't track. So that's why I think--I think we really have to be very careful. And again, that's the role of economics, in assisting with interpreting the data. So I think that's the part--so, coming back to the credibility revolution, I think the part of the incredibility of the credibility revolution has been its unwillingness to kind of use economic models, even simple economic models. And this is not just an aesthetic appreciation. This is a sense of trying to think about how to interpret and generalize. So, the most purely empirical procedure would be to say, 'I'm going to be purely inductive. I'm only going to look at regularities that I've seen in the past.' But the trouble is the world is changing. It's always changing. And we need to try to extract from the past some behavioral regularities that we can use as a guide to interpreting and analyzing policy. I think that's gotten lost, both in the minimum wage debate and in the larger issue of the credibility revolution.

37:35 Russ: Let me raise a broader concern that has been a common topic on this program; recently it was raised in a conversation with Noah Smith. Despite my training at the University of Chicago--and a good chunk of that came at your hands--it's striking to me how rarely econometric evidence is decisive in creating a consensus about public policy or knowledge about a particular area. I'm struck by how easy it is for advocates, whether they are ideological or methodological advocates, to dismiss empirical work as indecisive or flawed--whether it's experimental work or more traditional econometrics. Where do you think we stand on that? How much progress have we made, say, in accumulating the kind of knowledge that I think both you and I think is the right kind--of the structural relationships that allow us to predict? I'm very skeptical of our ability to do that reliably, given the complexity of the world and the kind of challenges you've been talking about. What do you think of that worry? Guest: Well, I think it's a legitimate worry. And it worries me a lot. What I worry about, and I think it's more general, not just about empirical work, is kind of the non-cumulative nature of a lot of work in economics. I'm thinking now more of macroeconomics than microeconomics, where we're seeing cycles. In some parts of macroeconomics we are back to the Solow's curve [? Phillips Curve ?--Econlib Ed.], which was supposed to be dead and buried 40 years ago, and it's now alive and well and blossoming in central banks and public policy discussions in some quarters. So I think part of it comes not from the fact that it's econometric; I think some of it comes from the fact that certain parts of economics lack data. I mean, that's the fact of the matter. So what is offered as a fact just isn't a fact. So, like in the 1960s when I was a graduate student--late 1960s--you know, people were talking about the instability of the Solow's curve [?
Phillips Curve ?--Econlib Ed]. There were things called Lipsey Loops, and what was happening was the Phillips Curve was shifting all around, and people knew that it was an unstable object; and there was a gradual awareness of it, and some theories were developed to try to explain that instability. I don't think the theories were ever fully confirmed, but they were at least appealing, and at least over a period they explained some of the stagflation and some of the instability in macro phenomena. So I do think that there is a group of economists, not insubstantial, that sort of goes through the motions of doing careful empirical work in economics, but either lacks the data or lacks the integrity, or some combination of the two--or lacks the caution, maybe that is the right word, not integrity--to put the data in its context and say, 'I really can't say something very strong about this.' And this is true for a lot of models. In macroeconomics and other parts of economics there's a practice called calibration. Calibrated models are models that kind of look at some old stylized facts, putting together different pieces of data that are not mutually consistent. I mean, literally: you take estimates from this area, estimates from that area, and you assemble something that's like a Frankenstein that then stalks the planet and stalks the profession, walking around. It's got a labor supply parameter from labor economics and it's got an output analysis study from Ohio, and on and on and on. And then out comes something--and sometimes a compelling story is told. But it's a story. It's not the data. And I think there's a lack of discipline in some areas, where people just don't want to go to primary data sources. And I think you are right, Russell, and it bothers me--that what we know as economists is much more limited than many people carry on as if they know. And so I've become aware of that. Just the humility of knowledge.
You know, the old statement by Hayek--this pretence of knowledge question. Which I think is real: there is a sense in which, among professional economists, among professionals generally, you want to produce rigorous, carefully reasoned work, which frequently means highly rigorous formal mathematical models. And yet what people are sometimes afraid to admit is just how rough-edged this stuff is. I don't think it's all bad. I think there are some basic factors that are there; and I think we can learn from them. But I do think that there is a kind of a lack of humility in the face of data. But you come back to--you phrased a really important question: Should we [?] go back to purely empirical discussion about economics? Should we just let the facts speak for themselves? That is a recurring fallacy. And I remember--if you think back--and I don't remember this; I was a little baby or little child in many ways. But back in the 1940s at Chicago, there was a debate that broke out; and it was a debate really between Milton Friedman and Tjalling Koopmans. Although it wasn't quite stated that way, it ended up that way. And that was this idea of measurement without theory. Could you do measurement without theory? Arthur Burns, and for that matter Friedman, were trying to chart the business cycle. Burns and Mitchell [Wesley Clair Mitchell--Econlib Ed.] in particular were trying to chart the business cycle, but using a very a-theoretical approach. Very hard to interpret what it is they had and what its relevance was for predicting the future. So, it led to a big controversy, which continues to this day: measurement without theory. And so, it's very appealing to say, 'Let's not let the theory get in the way. We have all the facts. We should look at the facts. We should basically have a structure that is free of a lot of arbitrary theory and a lot of arbitrary structure.' That's very appealing. I would like it.
The idea is that we have this purely inductive, Francis Bacon-like style--not the painter but the original philosopher. But the problem with that is, as Koopmans and others pointed out: every fact is subject to multiple interpretations. You've got to place it in context. So, it's not like--I think what it is--see, this is a case of reaction and over-reaction. So there are these rigidly-specified models that nobody believes--I think that's true--and somebody comes along and says, I'm going to do this, I'm going to do that. And even though that's very appealing and it consumes a lot of ergs[?] of energy and brainpower, nobody believes them. Because they are not robust. But on the other hand, somebody comes along and says, 'See, here's a simple fact,' like we were just saying: 'Wage inequality has gone up. And household inequality has gone up. Therefore the economic system is failing.' Well, think about what we were saying earlier. The family is changing. That's a fact. But it's a matter of interpretation. Russ: Different fact. Guest: A different fact. But then you ask: Well, why is the family changing? That's a deeper question. But every one of these--there is no such thing as an objective fact out there. And there have been a lot of controversies; and the literature will continue long past our lifetime. So, people will say, 'Let the facts speak for themselves.' But in fact, the facts almost never fully speak for themselves. But they do speak. So the question is just to be, you know, sensitive to the facts and to interpret them in ways that allow us to be more robust. So I'd say, in a lot of areas of economics, we have less knowledge than we think we have. So I think there is a pretence to knowledge. I think we know a lot less than we really think, and less than many people think they know.
I think--you look at conventions--I remember Dale Mortensen, the famous economist at Northwestern who got a Nobel Prize for his work on search theory; he and I were going to a conference together in Spain many years ago. And I remember, we were on the plane; we had a good chance to talk about a lot of things. And he was reporting estimates of his numbers, and I said, 'Well, Dale, you know--I don't know about these numbers. Do you think that if I wasn't running with the club here and I just did my own independent research that I would come up with your numbers?' And he smiled and laughed: 'No. We all agree that this is this, this is that.' I guess progress is made that way. And I wasn't willing to go along with that progress. But I think one has to have a certain humility. And I just have to know--that, you know, we know a lot less than we think we know. But we do know something. So I think you are right--that in some sense it's better to be aware of the limitations. But in the end we still need a framework of interpretation. So, even Friedman, who was kind of on the other side, with Burns and Mitchell, favoring kind of letting the data speak for themselves--Friedman in all of his work used very basic, very sound economic models to interpret the evidence. Permanent income is a great example. And some of his other work--A Monetary History; the Quantity Theory. And so I think every successful body of social science uses basic models, ones that have gotten to kind of the core of the idea--not bells and whistles. Bells and whistles are kind of second generation, third generation add-ons. That becomes very, very--professionally and privately rewarding. Maybe not so rewarding for the subject as a contributor to economic knowledge generally and to public policy. But I do think economists have contributed--I think there was an appreciation, right?
After all this effort, gradually people are starting to use the fruits of the Boskin Commission and some of the other commissions that looked at the effect of product quality on the consumer price index level. And I think people are adjusting. There are still arguments about the exact magnitude. But I think that was an economic principle--very simple, well documented--and it kind of made its way into the mainstream. So I think we--I mean, if you think about it, 70 years ago there were no [?] Current Population Survey data; there wasn't the kind of information we have today, which dominates the headlines every few months when somebody finds a new fact. You know--wages have gone up, wages have gone down; employment of blacks has decreased; or this or that. So I think we have a much richer data system. But I think we also have to supplement it with an interpretive system. Otherwise we are just going to have blind facts that can be interpreted any which way.

48:47 Russ: Yeah, I worry a lot about the biases we all have and how hard it is to judge those facts objectively when you don't have a theory. And of course we do have a theory. It's just in the background. It would be better, to me, to make it explicit. Guest: Make it explicit. But the best way to do that is not necessarily that everybody agree on one theory. Sometimes it's good to have competing theories, but with people stating their views in a very open way, and letting people decide. But sometimes the debate can get very, very complex. Even, like, think of the discussions about derivatives in financial asset regulation; sometimes, for the average person, even for the average economist who wasn't trained in finance, those discussions can get very technical, and they probably can't contribute very well, or even understand it well. There's also another sense, which is that parts of economics are hard; and if we were just a little more careful ourselves and had higher standards, I think people might be willing to defer more than they do now--let alone the average person out there in the world--to economists, if we had a little more internal policing about what we did within specific fields. Russ: Yeah. I'm curious: you talked about the Hayekian--I think of it as Hayekian humility. Guest: Yes. Yes. Russ: Has that changed over your lifetime, in your career? For me, when I was younger, I was a lot more confident. I was pretty sure that, as I've said before here, my guys did the good studies; and the other side had the bad studies. And it was a very painful recognition at one point to realize that actually the other side cares deeply, is trying their hardest; and they suffer from the same cognitive challenges my side suffers from. And their data is flawed, and our data is flawed, and our models are not so robust. Has that been a big part of your mindset for a long time, or is it something that you've come to as you've gotten older?
Guest: Well, it's--I think that's a general process of aging. If you do empirical work as I do and you get into issues, you inevitably are confronted with your own failures of perception and your own blind spots. And I think--I think the profession as a whole is probably better, much better, now. I mean, the whole enterprise is bigger to start with. You are getting a lot of diverse points of view. And the whole capacity of the profession to replicate, to simulate, to check other people's studies has become much greater than it was in the past. I think the big development that's occurred inside economics--in the economics journals and in the profession--is that if people put out a study, except for studies based on proprietary data, the data essentially have to be out there and able to be replicated. And it's literally been the kiss of death for people not to allow others to replicate their data. And that's a good sign. I think that's a really good sign. I don't think that would have been true 50 years ago to the same extent it is now. So I think the whole profession is into replication, into basically trying to [?] closely, to look more deeply at what other sides are doing--so I think we're in a better position to actually check each other. And I think that's a major improvement. So I think that's been a drift independent of my private aging. But I also have this sense--when I was younger, I certainly think--don't forget, I grew up--I came of age, if you will--in the late 1960s. And at that time there was a hubris. I think it was a hubris that was more centered in macro than in micro. And it certainly influenced my thinking, though, that you could basically control the business cycle and you could use econometric models to predict a lot of things. And then all that started unraveling within my lifetime, even my early lifetime. And people started questioning: Can we do this?
The original Klein-Brookings model that was put out in the 1960s--I remember as a graduate student reading it: it had all these equations--more equations and more parameters than it had instrumental variables, let alone any kind of credibility. And its equations weren't even mutually consistent. And then the final blow came when a friend of mine, a professor, a little bit older, ended up working for Klein, and pointed out how Klein's model predicted so well: when Klein got predictions from his econometric model, he would then adjust them, using his insights. Russ: Yeah. There you go. Guest: So, it wasn't some triumph of econometrics. This was basically a triumph of Klein's common sense. So I think the whole profession probably went overboard in the 1960s and 1970s about the ability of economic models to predict. And I think that led to the backlash that we now think of as the credibility revolution. And I think that--yes, I think we've all come to recognize the limits of the data. But on the other hand, I think we should also be amazed at how much richer the data base is these days--how much more we can actually investigate. For example, we can look at aspects of time use surveys. We can look at aspects of surveys of individuals incarcerated. We can look at trends in areas. We have detailed scanner data which allow us to look at transactions at stores--individual transactions--to identify what quality changes are and how to adjust price indices. So, even though we've got a long way to go, I think we've come a long way, too--from the 1920s and 1930s, when there were almost no U.S. aggregate economic or microeconomic data. Now we have a large body. So I think the empirical side of economics is much healthier than it was before--I mean long before, going back to the 1920s and 1930s. That was just a period with no data. So I think we have a better understanding of the economy than we did.
And I think that's still there. And I think we have better interpretive frameworks than we had before. Understanding the non-market sector, thinking more broadly about demographic trends and appreciating them--I think these are things we shouldn't overlook. We shouldn't understate where we've come from. We've come a long way.