Rob Wiblin interviews Tyler on *Stubborn Attachments* (BONUS)

In this special episode, Rob Wiblin of 80,000 Hours has the super-sized conversation he wants to have with Tyler about Stubborn Attachments. In addition to a deep examination of the ideas in the book, the conversation ranges far and wide across Tyler’s thinking, including why we won’t leave the galaxy, the unresolvable clash between the claims of culture and nature, what Tyrone would have to say about the book, and more.

If you liked this interview, be sure to subscribe to Rob’s 80,000 Hours podcast, which features in-depth interviews about the world’s most pressing problems with Philip Tetlock, Bryan Caplan, Nick Beckstead and many more.

Listen to the full conversation

Read the full transcript

TYLER COWEN: Today’s episode of Conversations with Tyler is, in fact, a conversation with Tyler with myself as the victim. And we have here to interview me, Robert Wiblin, who is one of the interviewers I most respect, and indeed, envy.

Robert is director of research at a nonprofit called 80,000 Hours, and their mission is to figure out and then communicate to people how they can do the most good with their careers. Robert is a long-standing leader in the effective altruism movement. He runs an excellent podcast called the 80,000 Hours podcast. And he is from Adelaide.

Now with all that, we’re here to discuss, among other things, my latest book, published by Stripe Press, called Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals. But of course, in the tradition of these interviews, Robert is free to range to wherever he wants. Robert, thank you for coming on.

ROB WIBLIN: Thanks so much. It’s a real privilege to be able to interview you. I think you’ve had, perhaps, more influence on me than any other writer. I’m sure I spent thousands of hours reading Marginal Revolution over the last 10 or 12 years.

COWEN: Thank you.

WIBLIN: In fact, I think it was reading Marginal Revolution that prompted me to switch into studying economics when I was an undergraduate, so you’ve had a pretty substantial influence on my career. Or at least you sped it up.

Just to be clear, as for all these episodes, this is a conversation with Tyler that I want to have, not necessarily the one that you want to listen to. And we’ve got perhaps enough questions here for two interviews, so we’ll be trying to move pretty fast.

First, I always open episodes of the 80,000 Hours Podcast with the question, what are you working on at the moment, and why is it really important work?

COWEN: I have started a new project called Emergent Ventures, which is a new approach to philanthropy. The idea of Emergent Ventures is to create a philanthropic fund which will support projects that are maybe too weird or too small or too foreign or have results that are too hard to measure to be accepted by other major foundations.

So people are applying to Emergent Ventures. The final decision maker is myself. There is a minimum of bureaucracy. There are no layers of approval people must go through for me to see the proposal. And we are just now starting to hand out grants. Think of it as a kind of pop-up philanthropy.

WIBLIN: What are the key selection criteria that you’re going to use to judge who gets the money and who doesn’t?

COWEN: It should be people who are smart, have good values, really believe in what they’re doing, and have been at it for some while. But also, just that they’re doing something unusual that will be falling into the nooks and crannies.

We’re getting plenty of good proposals in, but my reaction for some of them is, “Well, you’re going to get support for this elsewhere.” And those are some of the ones we’re turning down.

WIBLIN: So you’re really going hard on the neglectedness criterion.

COWEN: That’s correct. And overall, we’re looking quite closely at people where we feel we can change or alter the trajectory of their careers, or speed it up.

WIBLIN: I think, in the effective altruism community, we tend to give top priority to making sure that people are working on a really pressing problem, a problem that has a really huge scale that plausibly you can solve, and also that it’s neglected by other people.

Do you think it’s more important to focus on finding opportunities that other people aren’t funding or to make sure that people are working on problems where they can have the largest impact?

COWEN: If you talk to venture capitalists in the Bay Area, where we’re chatting, they tend to focus much more on the person than the project.

For Emergent Ventures, I think we need both the person and the project. But I still take a person-first approach. If you don’t have the right person, the project simply cannot come off. So first and foremost, you’re trying to figure out who’s talented enough to do something new.

WIBLIN: And throughout your career, how much has doing good guided your choice of what to work on?

COWEN: Very little. I’m quite a selfish person, I think, and I enjoy pursuing my own curiosity. Part of me at the meta level hopes that does some good, but I don’t think altruism is really, for me, a fundamental driving force. I enjoy absorbing information and communicating it to other people. And that’s, for me, what is fun.

WIBLIN: It’s interesting how much good people can do incidentally. All right, let’s move on to the book, Stubborn Attachments, which I guess is coming out in . . .

COWEN: October 16th.

WIBLIN: This is a really fascinating book because it covers many issues that I’ve been thinking about for the last 15 or 20 years that I think people should spend a lot more time thinking about. Like: How should we think about the long-term future? How should we be aggregating welfare and outcomes between different people? Should we be following rules or just considering every case individually? How should we deal with the massive uncertainty about the effects of our actions? Should we respect human rights? And how much should we defer to common sense morality versus thinking things through for ourselves?

It’s especially interesting because I agree with you on so many points where I think other people don’t, but then we slightly diverge at the conclusions about what we actually want to do, practically. So take it away. How would you summarize the key messages of this book? And how did you come to write it?

COWEN: The underlying message of the book is simply, we’re capable of making rational judgments about what is better for society. In my own discipline, economics, there’s a long-standing thread of skepticism about that. Kenneth Arrow developed an impossibility theorem. There are a lot of results that imply you can’t say much about what’s actually better.

So this book is a synthesis of economics and philosophy, and it’s trying to argue to both economists and philosophers, but also ordinary readers, there is such a thing as what is objectively good. It is based on the idea of supporting economic growth. That’s the one thing that, over time, we can say is much better than the alternative of not having as much economic growth.

A lot of the philosophical arguments are directed toward how should we think about the future? Is something less valuable simply because it’s far away in time? And I think you and I agree on this point: that we should be much more future regarding. And then the book thinks through, if we treat the more distant future as just as valuable as the near future, what does that imply for our actual decisions?

And that again, to me, brings us back to this point that we ought to be maximizing the rate of sustainable economic growth. It’s a very different normative standard than what you get from, say, Rawls, Nozick, Parfit, or others.

WIBLIN: If I had to pick out 50 words from the book that summarized it, I would choose this quote from page 32, which is, “We can already see that three key questions should be elevated in their political and philosophical importance. Namely: number one, what can we do to boost the rate of economic growth? Number two, what can we do to make civilization more stable? And number three, how should we deal with environmental problems?”

Does that seem like a key quote to you, as well?

COWEN: Absolutely.

WIBLIN: Let’s talk about a couple of the key ideas. One is the Crusonia plant and compounding growth. What do you talk about there in the book?

COWEN: The Crusonia plant is a somewhat obscure reference. It’s taken from the works of Frank Knight, University of Chicago economist. Knight postulated there was such a thing as a Crusonia plant. It was a hypothetical. It would simply keep on growing forever, so it would be of very high value. It’s like an apple tree. Seeds fall. You get more apples. Those seeds, in turn, get you more apple trees, and so on and so on.

That’s exponential compounding growth. So if you don’t discount the future at a very high rate, if you had such a thing as a Crusonia plant, it would be very valuable.

Then if you ask, “What, in fact, is a Crusonia plant?” It’s a modern, well-functioning economy that does generate more output every period at something like an exponential rate of growth, and it’s highly valuable. So we want to cultivate, again, economic growth.

WIBLIN: Why do you think the economy grows in this way where shocks or improvements seem to be permanent or at least semipermanent?

COWEN: People generate new ideas, and most new ideas don’t disappear. You can lose a new idea or have a Dark Ages. But if you have good institutions, you build upon those new ideas. Also, you can have increases in labor supply and capital. Think of those as some key sources of economic growth.

WIBLIN: I remember when I studied time series econometrics, we would sometimes look at GDP series, and at least in some cases, it would seem like they had a unit root — which is to say that any shocks to GDP seemed to be permanent.

Then we just kind of moved on from that. And I was like, “Wait a minute. Doesn’t this imply that any kind of small perturbations that we have today will have effects that could last hundreds of thousands of years if this persists, and therefore, it could be of enormous moral importance?”

And I guess you were following through on this logic and saying, “If it is the case that improvements to productivity or whatever else do last an indefinitely long time, then they could have enormous moral significance.”
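
The unit-root point can be sketched numerically. The snippet below is a minimal illustration (my own, with the AR coefficient and horizon chosen purely for demonstration): under a unit root, a one-time shock to GDP never decays, while under a stationary process it washes out.

```python
# Impulse response of a one-time shock in the AR(1) model
# y_t = rho * y_{t-1} + e_t. A unit root means rho = 1.

def impulse_response(rho: float, horizon: int, shock: float = 1.0) -> list[float]:
    """Effect of a one-time shock on y after each of `horizon` periods."""
    return [shock * rho ** t for t in range(horizon + 1)]

unit_root = impulse_response(rho=1.0, horizon=100)   # shock never decays
stationary = impulse_response(rho=0.9, horizon=100)  # shock dies out

print(unit_root[100])    # still 1.0: the perturbation is permanent
print(stationary[100])   # ~0.0000266: the perturbation has washed out
```

The moral weight of the argument rides on which of these two worlds we live in: if shocks persist, a small improvement today echoes indefinitely.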

COWEN: And we seem to see in the data that countries that have done well several hundred years ago — that has persistent effects, even ranging up through today. There’s even one paper suggesting how well a region was doing in the year 500 has predictive power for how well it’s doing today.

WIBLIN: And that’s probably not only economic or financial issues, but also cultural.

COWEN: Absolutely. I think of culture as one of the keys behind economic growth in fact.

WIBLIN: Some listeners might be listening to this and thinking, “Well, yeah, it might be the case that if we grow GDP today, this will also increase GDP in a thousand years time. But I don’t really care about a thousand years in the future.” What would you say to try to convince them that a thousand years in the future does have important moral significance?

COWEN: Well, imagine our ancestors sitting around, say, a thousand years ago, saying they didn’t care very much about us. And they were willing to accept a growth rate, say, a percentage point lower than what has been the case for the last thousand years.

We would all, right now, be in extreme poverty. We would be suffering. Life expectancy would probably be something like 40 years of age. We wouldn’t have created a lot of artistic and cultural wonders. So there’s a plurality of values that’s supported by economic growth. And that’s the most fundamental thing we should be willing to endorse at a macro level.
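
The arithmetic behind that thought experiment can be sketched as follows — a back-of-the-envelope illustration (the 1 percent and 2 percent rates are assumptions for demonstration, not figures from the conversation):

```python
# How much a one-percentage-point difference in the growth rate
# compounds over long horizons.

def income_after(years: int, rate: float, start: float = 1.0) -> float:
    """Income after compounding `rate` annual growth for `years` years."""
    return start * (1 + rate) ** years

for years in (50, 200, 1000):
    fast = income_after(years, 0.02)  # 2% annual growth
    slow = income_after(years, 0.01)  # 1% annual growth
    print(f"{years:>5} years: {fast / slow:,.0f}x richer at 2% than at 1%")
```

Over a millennium, the ratio between the two paths runs into the tens of thousands, which is why small differences in sustainable growth dominate almost everything else in the long run.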

WIBLIN: Why do you think that the long-term future of Earth — after you and I have ended our natural lives — could be very valuable?

COWEN: The future Earth can support so much wealth, so much diversity, so much prosperity, liberty, aesthetic values — whatever we hold dear. We can expand our carrying capacity for doing more of that by having more productivity, better governance, better institutions. There are so many possibilities out there in the future. We just need to actually bring them about.

WIBLIN: If you had to guess the expected number of humans that might live in the future — it could be very small, if we go extinct very soon, or very large, if we last a very long time — would you want to venture a guess for the expected value? Is it trillions, trillions of trillions?

COWEN: I’d rather have a pocket calculator. I do think population will stabilize and start to decline, though probably not by very much. If you take Earth with, say, an average of 10 billion people lasting for centuries, but not lasting for 50,000 years, and do the calculations, you’d get my modal prediction.

WIBLIN: But there’s many more of them than there is of us.

COWEN: That’s right.

WIBLIN: Do you think that quality of life will also go up, with high probability?

COWEN: Not forever, but for the foreseeable future. We’re in a period right now where we’re doing more to improve living standards than the world ever has before. That will have big ups and downs, but I don’t see why it has to stop.

WIBLIN: What about people who say, “I don’t care about welfare that much; I care about other things”? Do you think this argument for longtermism goes through for them as well?

COWEN: I think it does. I have some early books, some of them on the arts, that argue wealth is good for aesthetic values. It depends what other values people care about, but wealth supports many different opportunities.

The whole point of wealth is to enable a kind of diversity and choice within a framework where, if there’s some other thing that people value, we can have more of that too.

WIBLIN: Do you think, like me, that there’s a chance that a future technology could make human life just a hundred or a thousand times better than it is for people today?

COWEN: I don’t know that we have a meaningful metric for saying that, but I suppose I don’t think that’s possible. I think we can make it twice as good and quite a bit longer, but I don’t think it will be inconceivable relative to what we can imagine now.

WIBLIN: What about if we imagined that we find a way for people to take the best, most enjoyable drugs that they can take today without having negative effects on their brain in the long term? It seems like that could result in a life that’s 10 times better than what people typically experience today, at least in some narrow sense.

COWEN: I think if you’re a pluralist, that life is maybe not better at all. It has more pleasure, but these other plural values seem to be weaker because you’re pursuing only pleasure. So that may be a dystopian scenario for a true pluralist.

WIBLIN: Yeah, I suppose. Well maybe we could push it out in that way on all of these margins. You get many different things, but we do that much more efficiently.

COWEN: Some people specialize in drug taking — I’m fine with that if it’s not harmful — but I don’t want the whole world to become lotus eaters.

WIBLIN: A lot of economists and people from finance are used to applying discount rates, which causes them to think that, once you’re a hundred years out, consequences don’t seem to matter that much. I guess if they applied that backwards, it would mean that Tutankhamen thousands of years ago was perhaps, individually, more important than everyone who’s alive today.

What do you have to say to people who are used to applying discount rates and use that kind of logic to discount the future?

COWEN: Discount rates are very useful for many purposes, especially if you’re looking at a small project done in a single firm and you’re trying to estimate how valuable to you is a cash flow, say, 20 or 30 years from now.

But if you’re asking a question for all of society and if you apply a discount rate of 5 percent, 7 percent, whatever it’s going to be, you can end up with the results that say, “Well, a nickel today is worth more than saving the existence of the entire world hundreds of years from now.”

That’s so counterintuitive. I think that, just as we do not discount the well-being of people who are distant from us in space per se, nor should we do so across time.
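
The nickel comparison checks out arithmetically. Here is a minimal sketch (the $400 trillion figure for world wealth and the 800-year horizon are my own illustrative assumptions):

```python
# Present value of a future loss under a constant annual discount rate.

def present_value(future_value: float, rate: float, years: int) -> float:
    """Discounted value today of `future_value` received `years` from now."""
    return future_value / (1 + rate) ** years

world_wealth = 4e14  # rough order of magnitude, in dollars (assumption)
pv = present_value(world_wealth, rate=0.05, years=800)
print(pv)            # less than half a cent
print(pv < 0.05)     # True: the whole world, 800 years out, is "worth" less than a nickel
```

At a 5 percent rate, discounting halves a value roughly every 14 years, so a few centuries is enough to shrink any finite sum below a nickel.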

WIBLIN: You have this neat example of what if people were able to travel close to the speed of light and how this even sharpens the counterintuitiveness of using a discount rate. Do you want to describe that example?

COWEN: If one accepts Einstein’s theories of relativity — which do seem to be true, albeit incomplete — as you approach the speed of light, the whole universe, to you, becomes like a frozen block of space time that, in a sense, you can watch or observe from the outside.

In that sense, time can be thought of as a kind of illusion, and simply all of the things are happening at once. From that perspective, you would wonder, “Well, why is time such a special dimension?”

But another way to put the point is to imagine we’re sending astronauts up into space, and we’re sending them away at speeds approaching the speed of light. They will, if they return, return in the much more distant future, but to them, only a small amount of time has passed. So should we take less care to preserve the well-being and lives of those astronauts simply because we’re shipping them away at higher speeds?

Again, it seems like an absurd conclusion. The notion that when you have large benefits for people that matter at the macro level, that you should apply something like a zero rate of discount to well-being — not to financial flows, but actual well-being — I think that’s the correct moral position.

WIBLIN: As far as I know, there’s almost no moral philosophers who support discounting purely the welfare of people in the future. But then a lot of people throughout society, I guess influenced by economics, don’t share that view.

What do you think’s going on there? Is this potentially one of the biggest moral mistakes that we’re making, and philosophers should be shouting about this much more?

COWEN: Philosophers and economists should be shouting about it much more. I think some of the problem is a political one. I find it relatively easy to convince a lot of philosophers the moral rate of time discount should be zero, but relatively hard to get them to accept the practical implications of that, namely, that ongoing economic growth is a very, very positive thing — say, more important than redistributing income.

Economists who tend to be more market-oriented — and even economists somewhat to the left are more market-oriented than most philosophers — are so trained to assume there’s some embedded positive real rate of interest in an economy that it’s hard to get them to accept a moral framework where we’re talking about well-being, macro effects, and long periods of time.

To think in those terms and come up with the right answer of zero — that’s the problem there. But economists tend to be pretty enthusiastic about economic growth because they study it, and they see its benefits pretty clearly.

WIBLIN: If people aren’t convinced by this argument . . . You were just pointing out to me that there’s a new book published called Time Biases. Have you managed to read that yet?

COWEN: I’ve read about half of it. It’s quite good. It’s by Meghan Sullivan. She’s a philosopher at Notre Dame. She spends a lot of time talking about a paradox I don’t consider much, which is how much you should discount the past.

If someone told you, “Well, you have amnesia, but you had a very painful operation and it happened to you a month ago,” how much should you care? And then someone says, “No, it wasn’t a month ago. It was two months ago. It was more distant in time.”

Is that better for you? Is that worse for you? She explores the symmetry of paradoxes of discounting forward in time compared to those with discounting backward in time.

WIBLIN: You then go on to point out that, if we’re very concerned about the long term, and we have the option of creating economic growth, this potentially allows us to sidestep some of the aggregation puzzles that have really worried philosophers and economists. What was your argument there?

COWEN: The classic aggregation puzzle in economics is simply there’s a policy — some people are better off, some people are worse off. How can you possibly judge if the policy is worth doing? There are hardly any what we, as economists, call Pareto improvements, policy changes that make virtually everyone better off.

But over a span of a few generations, if you have a higher rate of economic growth, people today in today’s wealthier world are really, as a whole, obviously much better off than, say, people in the 18th century or even the 19th century. That’s an aggregation judgment we can make. I think it would be supported by people’s demonstrated preferences as to where they want to migrate.

And there are some obvious moral facts that, if the standard of living is, say, three to five times higher in one society rather than another, the wealthier society is better.

WIBLIN: Okay. You then go on to argue that, against the things that some people have said, wealth actually does lead to happiness. So we’re not just creating wealth for its own sake, but actually it’s going to increase welfare. What’s the case for that?

COWEN: If you look at the data within nations across classes of income or wealth, wealthier people are simply much happier than poorer people. There is a partial paradox: When you look at data across nations, you find a lot of poorer countries where people report they’re pretty happy. But I think what’s going on there, often, is they’re just using words differently.

For instance, if you polled Kenyans, “How happy are you with your health care?” Kenyans actually polled as being pretty happy with their health care. It’s not that Kenyan health care is so much better than we all think. They’re just used to a lower standard. So I think when you ask people about happiness across countries, there’s still a positive slope on that relationship. But you’re understating just how good wealth is for people.

Wealth also helps keep people alive. So all these polls — you’re only polling the living, not polling the dead. If you could poll all the dead people who passed away because new medicines were not invented for them or they took a riskier job and they died in an accident, put all those people into the poll. Again, wealth is going to do much, much better.

WIBLIN: I’m not entirely sure how to interpret that well-being literature. It seems potentially still an open question just how much wealth increases happiness today. But I feel like you almost didn’t make the strongest argument that you could make here, which is that, even if increasing GDP or wealth today doesn’t make people happier now, at some point in the future, it will, once we use that wealth and that greater knowledge to figure out new technologies that can turn our wealth into welfare.

COWEN: Oh, of course. But also, even wealthier countries today, even if you don’t think the money makes us happier, we buy more exports from poorer countries. So the growth of China, India, many parts of the world has relied on the wealthy countries having so much to spend and having technologies to transfer. And people in those countries are clearly much better off because of the economic growth they’ve had lately.

WIBLIN: This is a slight aside, but many people, I think, have this intuition that rich countries are harming poor countries, on balance, by absorbing investment or the smartest people from them. But I, and I think you, believe that poorer countries benefit a lot from having richer countries on the earth with them. Do you want to make the case for that?

COWEN: There’s very strong evidence of positive complementarities, a lot of it coming from technology transfer — using medicines or electricity, just general production techniques, management being transferred to poorer economies.

I do think for some small island economies, there is a brain drain. They’re still better off that, say, the West and Japan are wealthy rather than poor. There’s not a general brain drain that we see in the data. But if you’re a very small place and people just leave and don’t come back, you can, through migration, be worse off. It doesn’t mean you’re worse off because the West is wealthy.

WIBLIN: In the book, you also speak up in favor of rules over deciding how to act based on each individual situation. What’s your argument there?

COWEN: There’s a long-standing paradox in philosophy about rule utilitarianism versus act utilitarianism. So if you follow a rule, when should you deviate from the rule? And I point out the rule of maximizing sustainable economic growth. There’re just very high costs from deviating from that rule. So if you simply adopt the attitude, “We’re not going to deviate from this rule. We’re going to stick with the rule,” you’re better off.

It’s also the right thing to do. There’s a consilience across deontological and utilitarian approaches. And I think the case for rules is somewhat underrated in the literature. They don’t have to be bled away by a million different exceptions. You’re just better off following the rule.

WIBLIN: I don’t quite understand that. Is that just because you might make mistakes each time you’re trying to reassess whether you want to follow the rule of maximizing economic growth? Or is there something more fundamental going on here?

COWEN: Each time you deviate from the rule of maximizing economic growth, you’re costing the more distant future, really, an enormous amount of resources. So it’s a very high price. My argument is just a way of making vivid, “How high is the cost of deviating from the rule?”

WIBLIN: But if it is so important for welfare to increase economic growth, won’t act utilitarianism also endorse that same behavior?

COWEN: It may very well. There may be this broader consilience of act and rule utilitarianism. But thinking of it in terms of a rule we’re bound to follow, I think . . .

WIBLIN: It won’t cost that much at all.

COWEN: Right, yeah.

WIBLIN: Yeah, maybe the reverse. Okay.

I guess summarizing the above, I’d characterize your view as a kind of total, global, objectivist consequentialism, plus respect for a nonaggression principle. Do you think that’s a decent summary?

COWEN: That’s very close. I would say respect for human rights. The human rights may or may not always be defined by the nonaggression principle. I think, for the most part, they are.

WIBLIN: Yeah, we haven’t talked about human rights yet. What’s your case in favor of human rights?

COWEN: Almost all of the book is focused on the consequentialist arguments for growth and thinking about the more distant future. But I add in a caveat, which I don’t discuss at length, and that is I think there are some things you simply ought not to do, even if it would boost long-term economic growth.

So if someone were asking you to slaughter some number of innocent babies after torturing them, that’s simply wrong to do. And that’s more of a caveat in the argument. It’s not something I explore much in the book, but I think there are objective rights which people hold, and we should respect them. And we maximize growth within that framework.

WIBLIN: So what fundamentally is the philosophical argument in favor of human rights?

COWEN: It’s intuitionist, I think: there are simply some acts that are so horrible that, with a person who doesn’t see that, it would be hard to have further discourse.

But I don’t think I have much to say about rights in this book. That’s just a kind of placeholder saying to people, “Look, if you believe in rights as I do, you don’t have to run roughshod over humans to maximize growth in every instance. You have this out. The door is open. Take the out when you need to.”

WIBLIN: What are some occasions when your conception of human rights comes apart from the nonaggression principle? Because most of the examples I think you give in the book are cases where you’re using violence against someone.

COWEN: There might be cases where you’re letting people die. And by letting them die, you’re not committing formal libertarian aggression against them, but they might have a human right that you’re, in some way, obliged to help them. I leave that as an open question, but I don’t think we should be restricted to formal libertarian aggression as our only conception of human rights.

WIBLIN: You also talk about the extraordinary level of uncertainty about the future and how even quite trivial acts that we take could potentially change the entire course of history. And indeed, it might be probable that they change the course of history. Do you want to explain the case there?

COWEN: There’s an old Ray Bradbury short story about backwards time travel. You go back in time, you step on one butterfly, and the whole later history of the world changes.

So every time you stop at a traffic light sooner rather than later, you remix the timings of people’s days, when they start acts of intercourse, which children are conceived. Does Hitler end up being born, or someone else — a kind of near-Hitler, but who does not become Hitler?

So every single thing you do, including our discussion, remixes the future course of world history. If you’re a consequentialist, you need to take that seriously. You need to ask, “Does this simply make my entire doctrine incoherent?”

The stance I take in the book is, if you’re pursuing this truly large significant grand goal of making the future much, much better off in expected value terms, that will stand above the froth of the uncertainty you create by remixing things with every particular decision.

WIBLIN: This reminded me pretty strongly of the nonidentity problem from Derek Parfit, who you once wrote a paper with about discount rates. Do you want to describe the nonidentity problem?

COWEN: Derek Parfit, in his 1984 book, Reasons and Persons, had an example that, say, you would bury nuclear waste, and several generations from now, the waste would, say, kill millions of people.

But the fact that you buried the waste would change the timings of subsequent conceptions. So the people who are being killed a thousand or, say, a million years from now — they wouldn’t have been born had you not buried the waste.

You could argue, “Well, I haven’t harmed anyone at all. By burying the waste, I caused them to be born.” They die of a terrible cancer when they’re 27 years old, but on net, this is still following the Pareto principle.

I think I have an argument why that’s wrong — namely, in the case where you don’t commit the very harmful act, you might have different identities of people, but you’ll have a much greater aggregate of good in the more distant future. And it’s not about individual identities. So there’s something a little oddly collectivist about my argument, you might say.

WIBLIN: I guess many people have something like this intuition that, yeah, if you change the identities of people in the future, such that you can’t see any correspondence between them in the two different scenarios, then perhaps it doesn’t matter exactly what you do because there’s no significant person you can identify who’s worse off. Do you think this is a very strong counterargument to those views?

COWEN: Here’s a tension I think that we all have to face up to. Parfit talks about something called the person-affecting principle. How does your action affect some particular person?

But if you’re willing to make aggregate judgments and engage in an active aggregation, saying some kinds of societies are better than others or some policies are better than others, there’s something in the micro foundations of that judgment that’s fairly nonindividualistic.

People want to be consequentialists, and they want to be pure individualists sometimes. It’s not actually a fully happy marriage of views. And the notion that, once you jam together different measurements of well-being, you’re making a collective judgment about the overall course of history is even slightly Hegelian. You could also think of this book as a Hegelian defense of liberty.

WIBLIN: I think I slightly messed up the explanation of the person-affecting issue there. Because often, what’s going on is people want to say, “If you change the number of people in the future, that can’t necessarily be good or bad because there’s no specific individual in both cases who is worse off or better off. The people who exist in one scenario are not in the other. They’re only there in one case, so you can’t say that they have a higher welfare than a specific person in the other scenario.”

But then when you point out that, “Well, almost everything you do is going to result in there being no correspondence between the list of people in one scenario and the list of people in the other scenario. So it doesn’t matter whether you bury this nuclear waste that’s going to greatly harm people in the future in one case.”

People tend not to like that conclusion. That just seems very counterintuitive and wasn’t really what they were aiming for. So you’re going to take the total view here, where we just sum up the consequences of our actions on lots of different people.

COWEN: Subject to rights constraints. Yes.

WIBLIN: Do you think there’s any plausible alternative to doing that?

COWEN: I think the most plausible alternative to my view is simply to say the actual time horizon is not very long, that maybe in an extreme case, either the world will end soon or history will start collapsing and run in reverse. So there is no grand, glorious future that has a heavy weight in the calculation. And thus, we’re always dealing with the here and now, a quite pessimistic view.

I think that’s the main rival view to what I put out in the book. I don’t feel I refute that argument. It’s going to be true with some probability, right? If you do an expected value calculation, well, retrogression is true with, say, probability 37 percent, progress with 63 percent. In the expected value calculation, progress is still going to win. It will have the dominant weight. But we need to be very careful. Don’t assume progress is possible.
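The expected-value point here can be sketched numerically. The probabilities are the ones Cowen cites; the payoff figures below are purely hypothetical, chosen only to illustrate how, with his 63/37 split, a large upside for progress dominates the calculation.

```python
# A minimal numerical sketch of the expected-value comparison Cowen describes.
# The probabilities are the ones he cites; the payoff values are hypothetical,
# chosen only for illustration.
p_progress = 0.63
p_retrogression = 0.37
value_progress = 1000.0      # hypothetical payoff if sustained growth continues
value_retrogression = -50.0  # hypothetical payoff if history runs in reverse

expected_value = p_progress * value_progress + p_retrogression * value_retrogression
print(round(expected_value, 2))  # 611.5: progress carries the dominant weight
```

As long as the upside of progress is large relative to the downside of retrogression, the calculation comes out positive even with a substantial probability of collapse, which is the structure of the argument in the book.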

And the other pessimistic theory of history as, say, a lot of the ancient Greeks would have accepted, may well be true. And if that’s true, then this vision is a kind of large mistake. But you cannot live with pessimism, right? There’s also a notion that more optimism is a partially self-fulfilling prophecy. Believing pessimistic views might make them more likely to come about.

WIBLIN: Yeah. In the seminar room, it seems like economists and sometimes philosophers are not willing to aggregate welfare across people. If one person’s worse off than someone else, you just won’t be able to say whether, overall, the situation is better. But then it seems like in everyday life, when they’re making calls about what a group of people should do, they’re always willing to aggregate.

COWEN: Absolutely.

WIBLIN: That’s the immediate argument that they’ll always turn to. What do you think’s going on there?

COWEN: We’re always willing to choose a restaurant, right? Even if it’s not the first choice of all people. And the aggregations we make when looking at economic growth are often quite large differences in income. But I think there’s a fear amongst a lot of philosophers that if you’re too willing to aggregate, utilitarianism becomes too strong and they don’t like all of the consequences of that decision.

So they try to draw the line at aggregation. But as you mentioned, I think that’s grossly inconsistent with how we treat instrumental reason in our lives, in our businesses, in our nonprofits. We aggregate all the time. We’re wrong a lot, but the judgments are not completely outside the bands of reason, either.

WIBLIN: Something that’s even stranger about that argument, to me, is that, if one person is made better off in a scenario and someone else is made worse off, it’s not the case that it’s forbidden to do the thing where one person was made worse off. It simply makes it incommensurable with the other scenario.

So you simply can’t say whether it’s better or worse, I think, on this view. It doesn’t lead to the conclusion that I think people want, which is that you should be unwilling to harm one person to benefit a large number of people elsewhere. You simply can’t say whether it’s better or worse, and so all bets are off, and indeed, it’s, in a sense, permissible. Has that argument occurred to you?

COWEN: Sure. It’s always instructive to look at how people behave as parents or maybe how they vote in a department when they’re dealing with their colleagues. And they’re some form of consequentialist in all those cases. If you take the intuitions they’re using in these smaller decisions and just build them up onto a larger scale, I think the logic of consequentialism is very, very hard to escape.

And when people say, “Oh, I’m a deontologist. Kant is my lodestar in ethics,” I don’t know how they ever make decisions at the margin based on that. It seems to me quite incoherent that deontology just says there’s a bunch of things you can’t do or maybe some things you’re obliged to do. But when it’s about more or less — “Well, how much money should we spend on the police force?”

Try to get a Kantian to have a coherent framework for answering that question, other than saying, “Crime is wrong,” or “You’re obliged to come to the help of victims of crime.” It can’t be done.

WIBLIN: Yeah, it’s a somewhat boring moral vision where we’re just prohibited from doing a bunch of stuff, and then doesn’t really have much more to say.

COWEN: That’s right.

WIBLIN: Let’s move on a bit from the stuff where we see things basically the same to some areas where we have a somewhat different view.

You say early on in the book, “If you’re the kind of reader that I want, you’ll feel I have not pushed hard enough on the tough questions, no matter how hard I push.” So I’m going to try to push you here and take that to heart.

COWEN: Great.

WIBLIN: In the book, and, I guess, here so far, you’ve been focusing overwhelmingly on the importance of increasing economic growth, kind of getting to a better future faster. When we’re talking about growth here, we might imagine time on the X axis and welfare being generated in the universe on the Y axis, and you want to increase that faster.

Why focus on increasing the rate rather than making sure that that doesn’t go to zero?

COWEN: Well, keep in mind the core recipe is the rate of sustainable economic growth. If it’s going to go to zero, you’re knocked out of the box. So you’re maximizing across both of those dimensions, and I think, empirically, there are a large class of cases where more growth and more stability come together.

National defense is the easiest way to see that. If your society stays poor, someone will take you over. And those who take you over are probably nasty and will harm you. It’s not the only way in which growth and sustainability come together. But at most margins, they do. So there’s a wide enough class of cases where we can do both things at the same time.

I would note that earlier versions of this book — you know, I worked on this for about 20 years — the earlier versions had much, much more on existential risk, and it took me years to cut those out. I never repudiated any of the ideas. They just came up in enough other books. I felt I wanted to stick to my core notion of growth more than existential risk and stability.

WIBLIN: Okay, that’s answering some of my questions. Because in the book, you write, “Policies that prioritize growth at breakneck speed are frequently stable. The average civilization endured only 400 years, and this number appears to be declining. Our path in the future requires a tightrope act, balancing progress and stability along the way. And we should believe that the end of the world is a terrible event, even if that collapse comes in the very distant future. Similarly, the continual persistence of civilization 300 years from now is much better than having no further civilization at that time.”

But then so much of the book is dedicated to economic policy and how would we increase growth rather than focusing on this other word, sustainability — what are the biggest threats to sustainability in the future? And you’re just saying it’s been done elsewhere, so you wanted to focus on the growth.

COWEN: Richard Posner wrote one of the first books on this. When Posner’s book came out, I immediately started doing a lot of editing on mine. You and many other people in the effective altruism movement have written on existential risk, and I endorse most of that. But just at the margin, it seemed to me growth was underestimated.

I think that one of the main, if not quite existential, risks — but it’s a risk to ongoing growth — is environmental issues. And there’s plenty we can do for the environment that also boosts growth. Cutting down on air pollution has made people healthier, more productive, and made it easier to live in cities. As China cuts down on air pollution, say, in Beijing, it will make Chinese society more productive.

It would be more of a problem for the argument if you thought growth and stability were always at loggerheads. But in world history, there are large numbers of societies that collapse because they don’t grow enough. They can’t fend off, say, drought or weather problems or problems in their agriculture, or they’re conquered by someone else.

WIBLIN: Do you think that still applies today?

COWEN: If the United States stopped growing, I feel a lot of free countries in the world would collapse or be taken over, or they would become unfree. If we grow at a very low rate, our budget will explode. It will cut back on our discretionary spending, our ability to advance science to protect the world against an asteroid coming. So, yes, I absolutely think it applies today.

WIBLIN: I think I agree that if the US stops growing that would be very bad, principally because of the cultural and political effects that that would have and perhaps that we’ve started to see over the last five years.

But doesn’t that suggest that we want a sufficiently high level of growth? One that keeps people happy and looking forward to the future and being willing to accept some negative shocks because they know that things are going to get better in the future anyway? And that we don’t necessarily have to go from 4 percent GDP growth to 8 percent GDP growth — that’s not necessarily going to make things more stable.

COWEN: You’re talking about going from 4 to 8 percent. You may or may not think that’s stabilizing, but the actual reality is, we’re in the midst of one of our most wonderful labor market recoveries, there’s been a big fiscal stimulus. And year on year, we’re doing 2.7 percent, which is very poor compared to our past performance.

You see a lot of recoveries where we grow at 4 percent or more just to get back to where we were. The growth engine has slowed down. There’s a lot of evidence — some of which I present in my other books — that technological progress has slowed down.

It doesn’t seem to me we’re close to the margin of growth being so fast that we’re thrown off the track. We have high levels of debt and deficits, and we don’t know how to pay them off. And we’re cutting into our future capabilities in infrastructure and military defense, science, many areas.

WIBLIN: Inasmuch as you’re focused on economic growth in order to increase sustainability, it seems like a slightly odd focus, at least for an individual to take, if they wanted to increase sustainability because there are already such strong incentives that many people face to try to grow the economy because they earn money from it. They earn either labor income or returns from starting a business.

If a single person wanted to maximize sustainability of human civilization, would you recommend that they focus on economic growth? Or do you think that there’s more leveraged opportunities if they want to set aside making money?

COWEN: It depends on the person and what kinds of talents they have. But as I argued in my earlier book, The Complacent Class, there now seem to be so many people who are simply satisficers. They’re not very interested in innovating or even participating in a dynamic economy, and they just try to do well enough.

I’m here making a moral argument that at the margin, many, many people should be less complacent and take more chances: personally, they should lower aggregate societal risk and do more to innovate, save more, work harder, in some way be more dynamic.

You can think of this and Complacent Class as two sides of a bigger picture. Complacent Class is like the sociology of what we’re doing and this is the moral side.

WIBLIN: It seems like another technology that you might be very interested in that could have big effects on the trajectory of human civilization, and potentially avoid extinction — although it also could be very negative — would be the capacity to redesign human motivation and our personalities through genetic engineering. We could potentially select our children such that they are, say, very pacifist, such that they don’t want to kill one another.

If you could get large take-up of this technology, that could potentially lower existential risk and get it very close to zero and give us brighter prospects of surviving for a long time. On the other hand, the ability to redesign human personalities such that we’re so passive and will just accept dominance would potentially, again, facilitate totalitarianism and a very stable bad or neutral state. What do you think of that?

COWEN: I don’t think we know yet how genetic engineering will affect existential risk or even long-term growth. We don’t, at the moment, as you know, have the capabilities really to do that. As we develop them — if we do — we might have a better idea, but I think most people should be deeply agnostic and also somewhat worried about genetic engineering.

If you think we’re on an okay civilizational trajectory right now relative to the human past, and then we’re going to have this other major event, possibly more important than nuclear weapons, probably we should be more worried than cheering.

That would be my take. But given that we don’t have it in front of us, it’s hard to say. It might all work out wonderfully.

WIBLIN: I guess you would think that, as that technology gets closer, it would be important to have people thinking about, “How do you regulate that? How do we make it applied well rather than badly?” That kind of thing.

COWEN: Of course we should think about that, but I’m not sure we’ll succeed in regulating it very well. There are many countries. Parents, I think, are willing to go to other countries. There will be black market versions of the technologies.

The regulation might fall to a least-common-denominator standard. Whatever we can do, I suspect we’ll end up doing one way or another, so I wouldn’t put too much faith in, “Oh, we’ll regulate out the bad versions and be left with the good ones.” We’re going to get some mix of the very good and very bad.

WIBLIN: Yeah, you’re always just pushing on the margin, trying to make it a bit more likely that we’ll use it in good ways and a bit less likely that we use it in bad ways.

COWEN: Yeah, sure.

WIBLIN: Are there any technologies that you can foresee over the next few hundred years that you think could end up being very important or could put humanity on a different trajectory, in the same way that perhaps nuclear weapons could have done that during the 20th century?

COWEN: I think changing the nature of human beings. You mentioned genetic engineering, but also just drugs. The opioid epidemic has grown much more rapidly than almost anyone had expected. We had long periods of time of technological stagnation in drugs because many of them were illegal, but that also means there’s a kind of low-hanging fruit.

Now, there’s more people can do in their own labs because of information technology. So one of my worries is that bad drugs get too much better too quickly, and we have many things like opioids that we can’t control, and that becomes a much bigger social problem.

Just the susceptibility of people to alcohol. We take it for granted, but so many lives are lost each year, so many careers ruined, so much productivity lost. One of my personal crusades is, we should all be more critical of alcohol.

People will pull out a drink and drink in front of their children. The same people would not dream of pulling out a submachine gun and playing with it on the table in front of their kids, but I think it’s more or less the same thing. To a lot of liberals, the drink is okay and the submachine gun is not. I think, if anything, it’s the other way around, and I encourage people to just completely, voluntarily abstain from alcohol and make it a social norm.

WIBLIN: If we’re able to design better and better addictive substances, drugs or, perhaps, computer games or whatever else, it’s kind of the case that the Mormons will inherit the earth, or whoever is most resistant to those temptations and still wants to have children, even despite the fact that they can just shoot up on heroin.

COWEN: That’s right, so I try to encourage the productive people I know at the margin to be more Mormon, right?

WIBLIN: You mean have more children?

COWEN: Well, that too.

WIBLIN: Or avoid drugs?

COWEN: Avoid addictive substances of the wrong kind. Work, too, is an addictive substance, right?

WIBLIN: It seems like there’s a difficult tightrope here because we both want people to, in the short run, focus on growth and improving civilization and so on. But then we don’t want to lock in this value that it’s bad to experience pleasure because ultimately, we want to cash it out in something, which could involve using heroin or some much better future form of heroin.

Do you think it’s going to be possible to have a culture that supports that delicate balance?

COWEN: If you could have better drugs, but they didn’t destroy people, and they became the new intermediate incentive, like, “Innovate a new product, become a millionaire, and then you can afford to buy this truly wonderful drug that will be great on Sundays and won’t hurt your productivity.” That seems unlikely, but who knows?

WIBLIN: What is your vision for the long-term future? Do you see it as we’re going to have growth and then some kind of plateau? Or going to go up and down? Or will it just continue rising forever?

COWEN: I don’t think the rate of growth will rise forever. My view of economic history is that growth comes in spurts. It’s not an evenly managed process, though it was for part of the post–World War II era.

You have a thing called general purpose technologies, one of those being fossil fuels plus machines, which became significant in the 19th century, and then you have the big growth spurt. You do everything you can, say, with fossil fuels and machines: you get cars, you get planes, electricity, powerful factories. But at some point, your cars only get so much better. And then you wait for the next big breakthrough.

The next set of big breakthroughs may well involve the Internet, artificial intelligence, Internet of things. They are not quite here yet. You see many signs of them. They don’t yet make the growth rate much higher, and then you will have a big period of explosive growth and then a slowing down again. That’s my basic model.

WIBLIN: I was thinking less about growth and more thinking of just the absolute level of the economy or welfare in the universe, in the future. Do you think at some point it’s just going to level off because we’ll have done everything we can? We’ll have grabbed all of the matter we can access, and we’ll have figured out the best configuration for it to produce value. And at that point, it’s just a matter of milking it for as long as we can.

COWEN: No, I think the world will end before that happens.

I think at some point, there’ll be a new phase where we can directly make people in some way happier or more fulfilled or be more the people they want to be by manipulating something inside the brain. We do that in very crude ways today with antidepressants or even Viagra — not manipulating the brain, but it seems to make people happier.

That will be an enormous breakthrough of sorts. It’s not right before us. I don’t even think it’s the next breakthrough, but it seems at some point it will be possible.

Once we exploit that frontier, it seems to me the game will be about numbers — just having more very happy, very fulfilled people, and we’ll turn our attention to making higher numbers sustainable. I don’t see any obvious limit to that process. I do think the world will end before we complete that process. I don’t think we’ll ever leave the galaxy, or maybe not even the solar system. But at some point it will just become a numbers game.

WIBLIN: Why do you think that we won’t leave the galaxy? And even if you think that’s improbable, given that almost all of the potential value we can generate is outside of this galaxy, because that’s where most of the matter and energy is, shouldn’t we be pretty focused on that possible scenario where, in fact, we do leave the galaxy?

COWEN: I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.

Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.

WIBLIN: What do you think is the probability that neither humans nor some kind of successor species exists in a hundred years or a thousand years or ten thousand years?

COWEN: A hundred years, I think it’s extremely small. It would be whatever is the small chance of some kind of galactic catastrophe, very small.

A thousand years, I think there’s at least a 10 percent chance. Not that every single human is dead, but that we’ve returned to some earlier, much poorer stage that’s quite destructive. And maybe the earth is ruled by roving bands which are violent, a kind of Mad Max scenario. It seems to me, the chance of that is reasonably high, way too high.

WIBLIN: If you’re saying that there’s a high probability that humans will still be around in a hundred years, I guess that suggests you think there’s a very low annual risk of nuclear war? Why is that?

COWEN: I’m not sure what you mean by very low. I think it’s below 1 percent.

WIBLIN: Yeah, I think so too.

COWEN: I don’t know if that counts as very low, but again, it’s going to happen sooner or later. And how stable is it if you just trade one nuke back and forth, two countries? We don’t know, right? It’s never happened. I think the chance that that happens within 30 years is easily, say, 5 percent?

How destabilizing will it be? Do you have an immediate global financial crisis? Or do markets just react like, “Yeah, yeah, yeah.” Some currencies go up, some go down. It’s a terrible tragedy, but for most of the world, kind of-sort of life goes on after these terrible tragic deaths. We don’t know.
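The “sooner or later” intuition in this exchange is just compounding: even a small constant annual risk accumulates over a long horizon. Here is a minimal sketch, assuming for illustration that the annual risk is constant and independent across years, which neither speaker actually claims.

```python
# Cumulative probability of at least one nuclear exchange over a horizon,
# assuming a constant, independent annual risk. This is an illustrative
# simplification; real risks are neither constant nor independent.
def cumulative_risk(annual_p: float, years: int) -> float:
    return 1 - (1 - annual_p) ** years

# With a 1 percent annual risk, the century-level risk is already large.
print(round(cumulative_risk(0.01, 100), 3))   # 0.634
# Even at a tenth of that, the risk is far from negligible over a millennium.
print(round(cumulative_risk(0.001, 1000), 3)) # 0.632
```

This is why agreeing that the annual risk is “below 1 percent” is compatible with Cowen’s view that, if you let the clock tick out long enough, an exchange eventually happens.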

WIBLIN: Someone came to me, and they were asking for advice, as they sometimes do, on what can they do to improve the long-term future? They were deciding between increasing economic growth and, say, working to prevent a nuclear war or great power war between the U.S. and China.

I would almost always recommend that they work on the latter because I feel there are far fewer people who are working on that problem, so it’s substantially more neglected. What would you have to say to them?

COWEN: I definitely recommend people working on lowering the risk of nuclear war. One of my dissertation advisors was Thomas Schelling who, of course, is the classic theorist of nuclear war. Nuclear weapons, to me, are always the number one issue.

But that said, even if you sat down and said, “I’m going to do my best to limit nuclear war,” I don’t know what that means operationally. If you’re a president or in a parliament or maybe if you had a particular nonprofit, but I’m not sure disarmament is the answer.

Whereas to boost the rate of economic growth, there’s plenty that most people can do in that direction. I wish we had more good avenues for lowering the risk of nuclear war. I’d be very keen to hear about them. We’d actually be keen to support them with the Emergent Ventures fund.

WIBLIN: It seems like, given what you’re saying, that it’s likely that humans will go extinct before we manage to escape from this galaxy, or maybe even from the solar system. And that the reason for this is that, primarily, we’re going to be unable to coordinate between countries and individuals to prevent conflict that would destroy us.

Your top priority would be figuring out ways to coordinate humans better, and indeed that is a really high priority for people in the effective altruism community and many people working within this longtermist framework elsewhere. Do you think that you might want to write a book about how to improve coordination and international cooperation in the future?

COWEN: Maybe. That may not be an issue that’s good for a book, of course. Some issues you write about but not necessarily in book form.

It still seems to me that education is a net positive for coordinating people and limiting their desire to slaughter each other. I understand it’s not always the case — a lot of the Nazis were well educated and so on. But still, on net, I think it’s a positive force.

Growth and education tend to come together. If we’re growing more, we can afford more education, we can do more to support education in poorer countries. So I still think economic growth is at least a partial, indirect means to some of those ends. Again, it’s something that’s easy to concretize. You can, to some extent, measure it. You know when you’re failing. And that makes it more useful than some other kinds of advice that maybe I still would truly fully support.

WIBLIN: I guess I viewed the invention of nuclear weapons as perhaps the most important moment in human history. Just look at around that time . . .

COWEN: I hope it ends up not being the most important moment in human history.

WIBLIN: Fingers crossed. Around that time, say, during the ’30s and ’40s and ’50s, the Soviet Union under Stalin had incredibly fast economic growth. People were moving from farms to factories. The Soviet Union was becoming substantially more powerful and a stronger military power and developing the ability to, in the future, build nuclear weapons of its own.

Do you think watching that in the ’30s and ’40s, we should have been glad that the Soviet Union had a fast rate of economic growth? Or should it have, on balance, concerned us? Both because it would potentially lead to more conflict between countries because you have more great powers, and also, because the person who was leading the Soviet Union was not a very nice guy.

COWEN: Of course, it should have concerned us, but on net, it was obviously a huge plus because the Soviets stopped the Nazis. But keep in mind also, my wife and daughter were born in the Soviet Union and grew up in a wealthier society. My father-in-law, who still lives with us — he was alive during the time of Stalin, and his life was better. He’s still alive today because Soviets had a higher rate of economic growth.

Soviets urbanized, probably more rapidly than China has done lately — that’s not a well-known fact. The world discovered a lot of talent through that urbanization and people being brought into formal education.

So it had a lot of benefits since Stalin didn’t wipe us out and he beat the Nazis. If you’re looking for any case where a higher rate of growth had a big pay off, I think it’s that one. That’s not the counterintuitive case.

WIBLIN: I feel like, ex post, it definitely looks good. But at the point where the Soviet Union got nuclear weapons, I might have said, looking back, I wish it had not become wealthy that quickly. Because now we have a nuclear standoff, and in 1948 or 1949, you didn’t know how stable that situation was going to be. Looking forward, you might think there’s really a very substantial probability of humanity destroying itself during the Cold War.

Now, looking back, we can say, “Well, it wasn’t so severe.” But you might have thought it would actually be better if there were just one country: given that we have nuclear weapons, what we really want is just one country that’s going to be a hegemon and dominate the world, so that there won’t be a nuclear war, and we can kind of have permanent stability. What do you think of that?

COWEN: I don’t think we understand stability and nuclear weapons very well. Do keep in mind the two times they’ve actually been used is when only one country had them. It doesn’t mean we have a fully general theory there.

Nuclear weapons have spread, actually, at a slower rate than many people have expected. You read geopolitical theorists after the end of World War II — a lot of them think there’s going to be another nuclear war really soon, and we tend to dismiss them like, “Oh, those silly people, you know, they were just paranoid.” But maybe they were right, and they got lucky, and that’s the true equilibrium. I don’t think we should reject that view.

That gets back to an underlying issue with a lot of claims in the book. If you really think civilized society might end or be defeated quite soon, you can’t look to any kind of long-term horizon to decide what is better, and you’re left with a kind of brute deontology for making choices. When that’s the correct scenario, it’s not about growth maximization, so I would accept that caveat.

WIBLIN: There’s this interesting thing that, if you think the risk of extinction is extremely low, though nonzero, then you should place extremely high value on the future, because in expectation it’s going to last a very long time, and we have a high chance of colonizing a significant fraction of the universe. So that serves as an argument for longtermism.

On the other hand, if you think that the risk of extinction is actually quite high — perhaps like 1 percent a year or something like that — then it’s true, if we managed to avoid extinction this year, then the benefit that we get from that is not so great because there’s still a good chance that we’re going to destroy ourselves in the future.

But the risk is so high, like 1 percent every year, that there’s probably a lot that could be done to lower that. So it’s potentially a more tractable problem because it’s a bigger problem to begin with?

COWEN: Yes.

WIBLIN: Do you have any thoughts on that?

COWEN: Let’s imagine it were the case that somehow we actually knew that, if we could construct hobbit society, but with people being taller, say, the world would not end. And if we don’t construct hobbit society, the world will end, say, through nuclear weapons. Let’s say we knew that, or we thought there was a 70 percent chance it was true.

I still don’t think we actually are good at implementing the means to bring about hobbit society. We would have to become brutal totalitarians. If anything we might accelerate the risk of this nuclear war.

So when you think of the feasible tools at our disposal, hobbit society is kind of outside our current feasible set. We’re on this path; I think we have to manage it. We can’t just slam the brakes on the car — it’ll careen off the cliff.

Our best chance is to master and improve technologies to make nuclear weapons, warning systems, second-strike capability, safer rather than riskier. I just think that’s the path we’re on, and the hobbits are not there for us.

WIBLIN: Perhaps if I had to summarize my overall world view in just one quote, it would be this quote from E. O. Wilson: “The real problem with humanity is the following: We have paleolithic emotions, medieval institutions, and god-like technology. And it is terrifically dangerous.”

This highlights my concern with the idea that we ought to increase economic growth which seems to push more on the god-like technology than on improving the paleolithic emotions or the medieval institutions. By focusing on improving technology, we’re increasing the disconnect between the improvement that we’ve had in our engineering and scientific and technological ability and the fact that our personal and moral values and our institutions for governing ourselves have not kept up with that.

So I’d be perhaps more interested in seeing people focus on the emotions and institutions here to get them to catch up with our god-like technology than increasing the technology itself. What do you think of that?

COWEN: I’m more optimistic than Wilson and perhaps you. He refers to medieval institutions, but in most countries institutions are much better than that. What are the good medieval institutions that stuck around? Like the parliament of Iceland? Oxford, where you’ve been? I suppose Cambridge? Maybe a few other schools, but we’ve built so much since then. I don’t mean technology. I mean quality institutions with feedback and accountability.

If you look, say, at how Singapore is run, a lot of the Nordic countries, some parts of American life — by no means all, just to be clear — Canada, Australia, where you’re from — you see remarkable institutions, unprecedented in human history. I don’t take those for granted; they’re not automatic. But I think one has to revise the Wilson quote and be more optimistic.

WIBLIN: Yeah, so medieval institutions is perhaps an exaggeration. But do you think . . .

COWEN: But it’s a significant exaggeration, right?

WIBLIN: Okay, I think it’s the case that probably political institutions and our decision-making capacity aren’t improving as quickly as our technological capabilities. And I wish it were the other way around: that our wisdom and prudence and ability to make decisions that are not risky were moving faster than our technology.

COWEN: But see, I see it the other way around. If you look at data on economic growth, you see huge productivity improvements: China, India, basically free-riding on existing technologies, not usually making them better. It’s just managing companies better, having better incentives in companies.

If the world economy grew 4-point-whatever percent last year, say, 4.8 percent, way more of that is coming from better management and better institutions than from new technology. Maybe 1 percent of it is coming from new technology and the rest from better management — in some cases, growing population and capital resources.

So institutions are way out-racing technology right now. Again, I’m not taking that for granted, but I think people would be much more optimistic if they viewed it in that light.

WIBLIN: We’re both big fans of Philip Tetlock and his work . . .

COWEN: Sure, me too.

WIBLIN: . . . in superforecasting. I think it’s among the most important work on social science and some of the most impressive and interesting work that’s ever been done. Do you think it would be valuable to get much more effort going into improving decision-making in that form? Rather than perhaps working on science and technology otherwise? Do you think we underinvest, sometimes, in social science relative to technical sciences?

COWEN: I think we underinvest in particular kinds of social science. Too many social scientists are overly specialized. They don’t read outside their disciplines. They don’t have the incentives to do something the world, as a whole, will find useful. Tetlock is a wonderful, shining counterexample to that.

Academic incentives are working less well than I think they did 20, 30 years ago, including in the social sciences, and that we need to fix. It will be very hard to do because existing structures are tightly locked in.

WIBLIN: Let’s take a step back in time, back to 1900. I imagine that, if you were alive then, you would say that the risk of human extinction in the next hundred years would be fairly low.

But then in the ’40s, we had the shock where we developed nuclear weapons, and suddenly, I think we would both agree, the risk of human extinction or the collapse of civilization went up quite substantially because, for the first time, we had the ability for one person or one country to basically wipe out most humans alive at the time.

What do you think are the chances that, in the 21st century, there’ll be some new breakthrough that’s analogous to nuclear weapons and will, again, produce a level shift in the annual risk of human extinction?

COWEN: The possibility that worries me the most is simply an equivalent amount of power being more portable. I don’t think it has to be a new technology. It certainly might be, but simply the cost of a nuclear weapon or something like it being much cheaper. Bioweapons — they’re very hard to carry around and deploy, but you can quite readily imagine that becoming easier. It seems to me those are likely outcomes.

But terrorism in general I don’t think we understand well. After 9/11, people thought there would be many more attacks. You could ask questions, “Why don’t they just send a few people over the Mexican border? They get here, they buy submachine guns, they show up in a famous shopping mall and they take out 17 people. They don’t get any further than that, but it’s a massive publicity event. And this just happens every two or three weeks.”

A priori, it almost sounds plausible, but nothing like that has happened. If anyone has done that, it’s our native, white Americans who are not, in the traditional sense, terrorists. It’s clearly possible, but they don’t do it. So when you ask how likely someone is to do something pretty horrible with a pretty cheap, decentralized, highly destructive technology: we don’t even see them acting at the current frontier of destructiveness.

Think of what you need: people who are competent enough, motivated enough, coherent enough, who have a base to operate from. How hard is it, in a combinatorial sense, for all those to come together? We don’t know, but I think the more you think about it, the more optimistic you become rather than less.

WIBLIN: I think that’s fair, but it seems like over time, as our technology gets better, the number of people and the amount of expertise and the amount of security that you would need in order to pull off an operation like that is going down and down and down. Eventually, it could end up being a handful of people or even a single individual, and perhaps breakthroughs in biology are the most likely cause of that.

Do you think, perhaps, that the annual risk of human extinction is going down or up? There are varying factors here, and I guess the improvement of technology in that sense is one thing that’s pushing it up. Though, I suppose we could also invent technologies that might give us the ability to prevent that from ever happening.

COWEN: I think it will go up over the next century; I don’t think it’s going up right now.

I once asked some of my friends an interesting question: If a single person, by a sheer act of will that they had to sustain for only five minutes, could destroy a city of their choice, how much time would have to pass before one individual on Earth would take the action to destroy that city? Is it like it would occur in two seconds, it would occur in 10 minutes, it would occur within a year? I don’t think we know, but no one should be optimistic about that scenario.

WIBLIN: Let’s say that humans do continue for thousands, perhaps millions of years, but for some reason, we decide to never leave Earth. So we don’t use the resources that are available elsewhere.

COWEN: Which would be my prediction, by the way.

WIBLIN: Okay.

COWEN: I think space is overrated.

WIBLIN: Okay. It seems that, in your view, that should be a horrific tragedy: almost all the value that humanity could have created would have been lost in that case.

COWEN: Space is hard, right?

WIBLIN: I’m not so sure, but go on.

COWEN: It’s far, there are severe physical strains you’re subject to while you’re being transported, and communication back and forth takes a very long time under plausible scenarios limited by the speed of light. And what’s really out there? Maybe there are exoplanets, but when you have to construct an atmosphere, there’s only a risk-diversification argument for doing it.

But simply being under the ocean or high up in the sky or distant corners of the earth, we’re not about to run out of space or anything close to it. So I don’t really see what’s the economic reason to have something completely external, say, to the solar system.

WIBLIN: It seems you’re okay with the idea that we can turn more matter and more energy into more value. So what is it, 5 times 10 to the 22 stars out there in the accessible universe at the moment? Literally, as the galaxies recede, it’s declining by about a billionth per year.

But if you’re in favor of growth and creating more value, it seems like almost all the value . . . No matter what you value, it has to be out there in all of that matter that we can reorganize. Given your desire for growth on Earth, I don’t understand how it could be the case that you wouldn’t be upset that we might just stop at the boundaries of Earth’s atmosphere.

COWEN: Oh, I’m upset about it, I’m just not very optimistic. If you put me in the legislature, I’ll vote to increase funding for space exploration. But relative — especially in the Bay Area — relative to other people I speak to in this kind of fringe group of intellectuals who think about space, I’m more pessimistic than just about all of them.

But it’s also that I’m more optimistic about the earth. The ocean of course is enormous — it could be platforms, it could be underwater. Deserts, places that can be terraformed, cities in the sky — you do want diversification, protection against a big nuclear war. Maybe for that you need other planets. There’s the moon, there’s Mars — they’re actually big enough to have diversification.

WIBLIN: But it does seem like no matter how hard we go on Earth, at some point we’ll have found the best configuration we can make for all of the matter and the energy that we can harvest here. And then in order to continue growing and avoid a plateau, which is terrible in your view, the only path is to go out.

There’s this beautiful thing, that once we go out into space and we start colonizing, then we get cubic growth because we’re growing like a sphere.

COWEN: I’m never going to vote no on that, but just some cautionary notes: There is a history of imperialism, where mostly European societies have grown and taken over other parts of the globe, and they did not in every way do maximum good, to say the least. I worry about how we might treat societies we encounter. We also may draw attention to ourselves as a target or a threat.

I’ll still vote yes on the expenditures, but I don’t view it, by any means, as this huge net positive. It’s something I also worry about a good deal, and I also think our corner of the galaxy will be wiped out before we get that far.

WIBLIN: It seems like, to make this view stable, you’re thinking that the probability of extinction is high, such that you’re pretty confident we’ll never go to space, but it’s not high enough that the overwhelmingly important thing is to work on extinction right now. Or maybe you do think what we should do is lower the risk of a catastrophe, but the best way to do that is via increasing the growth rate.

COWEN: Yes, and let’s say your modal scenario is everything ends in 10 thousand years. That’s still a long enough time horizon where the long-term results of higher growth now are very significantly positive for billions of humans. That will play the dominant role in a moral calculus.

But the idea that somehow we’re going to be sitting here three million years from now, and I’ll have my galaxy and you’ll have yours, and we’re not even human anymore. It’s not even recognizable as something from science fiction. I would bet against that if we could arrange bets on it.

WIBLIN: Why do you think that we couldn’t develop kind of self-replicating probes? I agree that humans are not going to travel to other galaxies — that’s way too hard. But at some point, we should be able to create intelligence that somewhat resembles humans — or might even be better than humans — in some form that’s easier to transport through space on computers or whatever the future example of computers is.

That kind of intelligence would have a much better shot at spreading to the stars. It can travel much faster, it’s much more resilient, and then it arrives there and starts creating more copies of itself.

COWEN: We don’t see self-replicating probes from other parts of the universe. Now maybe we are those self-replicating probes, in some way, right? We were superseded. But the fact that we don’t, in an obvious way, see them, to me strengthens the case for pessimism.

WIBLIN: Yeah, but you probably have read this paper that came out of the Future of Humanity Institute.

COWEN: Yes, the Fermi Paradox is not nearly as absolute as people used to think, but it’s still an issue. You still should update, in Bayesian terms, that you don’t see the aliens.

WIBLIN: Yeah, I agree that’s worrying, but because we have these alternative explanations — that we might just be the one chance event where life began — I still have some hope that we’ll get there, that we’ll be the first ones to colonize at least this part of the universe.

COWEN: I’m going to vote with you, that’s all I can say for now.

WIBLIN: Cool. You’re in favor of markets and kind of liberal governance, but I see two arguments here that might justify an alternative approach. One is, you’re in favor of faster economic growth. Essentially, planned economies in the past have been able to reinvest a much larger fraction of GDP in future growth — just building more factories rather than producing consumer goods — than market economies have been able to.

So perhaps you might be interested in having some greater central planning of the economy that would allow much more investment in science and technology, or perhaps in physical capital, in order to increase the economic growth rate? What do you think of that?

COWEN: We need to become more concrete, but the wealthiest societies in today’s world are, for the most part, the freest ones. There’s no guarantee that will always hold, but I think that’s an argument for some kind of liberal freedom.

But if you look at, say, China since 1979 — yes, they grew because they became significantly freer, but I suspect they also did better keeping some elements of Communist Party rule in place than if they had, say, followed the advice of western reformers. And I think, for the most part — not on every decision — they did the right thing. I think we need to recognize that.

But that’s not a centrally planned economy either — they grew because they gave up central planning. Nonetheless, they spent very heavily on infrastructure and still do, and that, in large part, comes from the government.

WIBLIN: We don’t want to give up the benefits of markets, absolutely, but I guess you’d be fairly happy if the United States spent quite a lot more on science and technology research, and perhaps the government built lots more infrastructure?

COWEN: I would spend much more on science and technology. When you say infrastructure, I want to disaggregate, but there are certainly plenty of things I would be willing to spend more on.

But the idea that you just throw a trillion-dollar bill at infrastructure — what ends up happening is the senators from Wyoming have their say, and you just build a lot more roads and actually make climate change worse. And you don’t upgrade your power grid or do anything very smart. I don’t want to just uncritically endorse infrastructure. That, to me, can be a negative.

WIBLIN: Yeah, I’m with you. Perhaps another argument for less liberalism would be — you’re saying that, basically, you think there’s a high chance that humans are going to drive themselves to extinction. The reason is a lack of coordination and conflict . . .

COWEN: And cheap energy.

WIBLIN: Okay, and cheap energy. Too much power in the hands of people and the ability to destroy one another. This is a very severe problem. Perhaps, in order to solve that problem, we should be willing to have a world government — kind of run towards a singleton, as Nick Bostrom calls it, which would be like having one decision-making process that is able to control everyone else and prevent conflicts.

Even if it doesn’t produce the optimal decisions, at least we won’t have extinction. We’ll be able to survive for a lot longer and generate some more value even if the singleton doesn’t make the absolute best decisions that we might think of from a liberal point of view. What do you think of that argument?

COWEN: It’s hard enough to get the European Union to stay together, and those countries have so much commonality of interest. I expect some further nations, after Brexit, to peel off over time. Try to get Southeast Asia to agree even to a local ASEAN being much stronger, being an EU-like phenomenon — simply impossible. It’s a recipe for creating conflict.

I understand the appeal of the vision. I’m all for NAFTA. I like multilateral institutions, but I think it’s the wrong way to go. The UN is of some use but in many ways an impotent bureaucracy. You would not want it ruling over us. You tend to recreate some of the worst aspects of national bureaucracies and then infuse them into a least-common-denominator sort of politically correct institution that’s just not very effective. So I think that’s the wrong path overall.

WIBLIN: But can you imagine that, perhaps, we convince many people of this kind of long-termist framework. They share our belief that extinction is very possible and would be a terrible catastrophe. So they’re willing to make many concessions, perhaps.

If you can get China and the United States to band together to say, “Our top priority is avoiding extinction and war, so we’re going to work together very closely.” Not to control everything, just to control access to the kind of technologies that would potentially produce human extinction. Could you see in the next 100, 500 years that kind of cooperation to make humanity more stable and civilization continue?

COWEN: There’s already a great deal of international cooperation on nuclear weapons. Right now, we’re trying to manage the North Korean situation. Cooperation is highly imperfect, but it’s remarkable how much is there. When the Soviet Union was collapsing and there were possibly loose nuclear weapons, there was a good deal of international cooperation to deal with that problem.

So we have very immediate successes near us. Could we do better? Absolutely, but the idea of there being this general public movement where you get people to do the right thing by scaring them, I think that’s the opposite of how politics usually works. Voters like to live in denial, and if you scare people too much with, say, climate change, they respond by thinking it’s not actually all that significant.

I think some kind of more positive vision — you’re more likely to get people on the sustainability bandwagon. That’s one of the backstories to my book: I’m trying to give a positive vision, emphasizing less scaring the heck out of people and more, “Here are the glories at the end of the road, what you can do for your descendants and world history.” Scaring people seems to backfire in politics.

WIBLIN: We’ve been talking a lot about the possibility of a nuclear apocalypse here, and that is a somewhat trickier one to figure out how to solve. But you bring up climate change, where it seems like it’s a lot more tractable.

It’s pretty clear what kinds of technologies you could work on in order to reduce the risk of really runaway climate change if we get unlucky. Do you think it’s particularly valuable for people to go and work on technologies that differentially reduce the risk of catastrophes like climate change?

COWEN: Oh, absolutely. I think, the last few years, a lot of those technologies have made more rapid progress than I would have thought — like electric cars, like fracking. Just the interest in China in cutting back on their air pollution, solar, nuclear. Some of it’s still on the drawing board, but I think they really intend to do it and probably will.

So the progress in the fight against climate change, even in the last few years, is much higher than people think, even though we don’t see the results yet in terms of measurements of carbon emissions. I wouldn’t quite say I’m an optimist, but there have been big gains in the immediate past.

WIBLIN: Are there any other technologies that you’re excited about because they differentially improve civilizational stability?

COWEN: Well, everyone talks about batteries, but I often feel batteries are a mixed blessing. Batteries, of course, would make it much easier to have green energy, but batteries also ease the decentralized storage of power and carrying around of destructive power.

If, instead of a gun, which is awful, but it’s hard to kill 1,000 people just shooting a gun, right? If you have some kind of pack on your back with a battery and then an energy-creating weapon that you just walk around with, and you have crazy people doing this the way they do now with guns — that worries me. I still think, on net, better batteries are a plus, but it cuts both ways.

WIBLIN: It seems like the most important technology, from your point of view, might be the ability to surveil people so that we can prevent any group from using really concentrated energy to end human civilization, but also the social technology then to regulate that such that it doesn’t lead to totalitarianism. Do you think research into something like that could be very valuable?

COWEN: I worry a great deal about surveillance, which, of course, has proceeded most rapidly in China. If surveillance really would make us safer, that would be an argument for it. But surveillance tends to corrupt your rulers, and it tends to increase the returns to being in charge. I think, over time, it increases the chances of, say, a coup d’état or political instability in China.

Even though you have more stability at the ground level, you may have less stability at the top. I think this is one of the two or three biggest issues facing the world right now: What are we going to do with surveillance and AI, facial and gait recognition? I don’t think we know what to do. I would say I more worry about it than applaud it.

WIBLIN: I think I’m with you. I’m not sure whether more surveillance or less surveillance is better right now. But it seems like finding better ways to govern surveillance, given that we’re probably going to have quite a lot of it, so that it doesn’t lead to these negative political outcomes, could be an extremely important research question that more think tanks should be looking into.

COWEN: Yes, and it’s quite possibly true that the gains in surveillance we’ve had so far are what have limited some of the potential sequel attacks to 9/11. We can’t know that for sure as outsiders, but many people suggest this is probably the case.

WIBLIN: In terms of ways that humanity might end, we’ve got nuclear war, just a great power war between the US and China, even setting aside nuclear weapons. We’ve got climate change, and, I guess, what we’re both concerned about: new technologies that would really concentrate energy and allow a lot of destructive power.

What do you think about perhaps a fifth one on that list, which is a negative global totalitarianism because technology allows a negative political order to stabilize itself by monitoring people too much?

COWEN: Well, that may happen in China. One scenario is the Chinese government will simply clamp down on opposition through surveillance, and that will be stable for a very long period of time. That might make society in China worse, but I don’t see why it’s destabilizing, even if it’s undesirable.

WIBLIN: Oh, no, I don’t think it would be destabilizing. The problem is it would be very stable, but bad. We would still lose most of the value because, perhaps, we’ve locked in a bunch of negative or neutral moral values.

COWEN: That’s one of my big worries for the forthcoming future. You mentioned climate change. I don’t think it’s an existential risk; I do think the expected costs are maybe higher than most people want to admit, but the notion that it would wipe out human civilization as we know it, you would need a very extreme scenario. I don’t think that’s very likely.

WIBLIN: I agree with you. Yeah, I think it’s unlikely that climate change could lead to extinction. I mean, maybe it could lead to significant loss of life. One possible way it could go is that it turns out the temperature increases much more than we expect, so we get more like 6 to 9 degrees of warming. This sets us back economically, which triggers a negative cascade of consequences.

COWEN: Sure. That’s the most worrisome scenario. Keep in mind that regular air pollution — and I don’t mean carbon-based, just air pollution — right now kills 6 to 7 million people a year. Obviously, a large number. Now, some of those are older people, frail people. They might have died soon anyway, but still, it’s a number hardly anyone talks about.

Climate change right now is not killing 6 to 7 million people a year, and this we just absorb and move on. As you indicated, a lot of the risk of climate change is how it might set off other kinds of conflict.

WIBLIN: Yeah, or just the tail that we’ve totally mismeasured how it’s going to affect the climate. I think this is relatively unlikely, but perhaps is still worth having some people worry about.

COWEN: Or we could do bad geoengineering and make the world too cold.

WIBLIN: Okay, let’s move on to some other things in the book that I wasn’t entirely convinced by. You make the argument in one of the chapters that, even though our actions seem to have very large and morally significant effects in the long run, that doesn’t necessarily mean that we have incredibly onerous moral duties. We don’t necessarily have to set aside all of our projects in order to maximize the growth rate of GDP or improve civilizational stability. What’s your case, there?

COWEN: Well, I do think you have an obligation to act in accordance with maximizing the growth rate of GDP, but given how human beings are built, that’s mostly going to involve leading a pretty selfish life: trying to earn more, having a family, raising your children well. It’s close to in sync with common-sense morality, which to me is a plus of my argument. What it’s telling you to do doesn’t sound so crazy.

You don’t have to re-engineer human nature. So if someone from more of a Peter Singer direction says, “Well, all the doctors have to run off to Africa,” people won’t do that. We can’t and shouldn’t coerce them into doing that.

The notion that living a “good life” while making some improvements at the margin is what you’re obliged to do — I find that very appealing. It’s like, “Change at the margin, small steps toward a much better world.” That’s the subheader on Marginal Revolution. It’s also a more saleable vision, but I think that it accords with longstanding moral intuitions, which shows it’s on the right track.

WIBLIN: Yeah, okay. It seems like, given your framework of long-termism, the moral consequences of our actions are much larger than what most people think when they’re only thinking about the short-term effects of their actions. In that sense, the moral consequences should bear on us more than they otherwise do.

COWEN: It’s very tricky, though. If you go around telling people, “Everything you do is going to change the whole world,” they’re going to get pissed off at you. They’re going to tune you out, so there’s a Straussian undercurrent in the book. The long term is really important, but people still need to focus to some extent on the short term to get to the long term. They can only handle so much computationally.

It’s not that I think the right answer is for everyone to be so attuned to the exact correct moral theory. They’re going to use rules of thumb. We’re going to rely on common-sense morality whether we like it or not — even professional philosophers will, and that’s okay, is one thing I’m saying. Just always seek some improvement at the margin.

WIBLIN: On the view that increasing GDP is a very important thing for people to do — among the most valuable things that they could do — do you think that people who are taking holidays, for example, or people who just aren’t starting a business that would grow as much as possible, or perhaps people who could go work in think tanks and do economic reform that would increase GDP much more than what they’re doing right now, would you at least . . .

Let’s say that they do have some moral concerns, so you’re not so concerned about them misreading your argument or getting angry and rejecting it. Would you say to those people that they do have a duty to do what . . .

COWEN: Absolutely, and I try to encourage them all the time. I try to hire them into think tanks, research centers. It’s one of my goals in the more practical side of my life, so absolutely.

WIBLIN: Let’s say it were the case that the best way to increase growth or to increase civilizational stability was to give very large amounts of money or to give away most of the income that you had to very poor people who could earn a greater rate of return. Would you advocate for people doing that?

COWEN: Absolutely. I’m a big fan of private philanthropy. There are quite a few very wealthy people who have pledged to give away most of their fortune. I’m a big advocate of that. I’m not sure they’re all giving it away in the right manner, and maybe they do have an obligation to think more critically about how they’re giving it.

So of course, but keep in mind, giving money to poor people does not always increase the rate of return. Sometimes wealthier people can earn yet more with the money and give more away later, so it’s not that you should always redistribute now.

WIBLIN: Another argument you make is that you want to have a strong grounding for human rights, but it seems like, on this long-termist framework, it’s possible that the consequences of actions could be so vast that it would dominate any rights concerns.

Then you make this argument that uncertainty about the consequences of our actions gives us a reason to still respect human rights. Do you want to put that argument?

COWEN: Well, let’s give a concrete example. Right now, in the northwestern part of China, the Chinese government is creating camps and detaining large numbers of people, by some estimates up to a million.

Now, some people in the Chinese government say, “Well, this is going to help us in the longer run. We’ll be more stable. We’ll grow more rapidly.” I’m very skeptical of that, but you might say, “Well, there’s some chance they’re right.” I think it’s unlikely, but . . .

WIBLIN: But let’s say you believed that.

COWEN: There are gross violations of human rights. When there’s this deep uncertainty about the future, you’re not comparing directly — well, detaining all these people versus the brighter, richer future. It’s like a lottery ticket, and the lottery ticket is so uncertain, it’s easier to respect the human rights.

You say, “Well, look. These are gross violations of rights. There’s really not a guaranteed payoff at all. It’s highly uncertain, at best. It may even be destabilizing.” And then I’ll say, “Just don’t do it, and your consequentialist conscience is not knocking on the side of your skull so hard.”

WIBLIN: Yeah, but it seems like you’ve got massive uncertainty on both sides of the ledger here when you’re comparing the thing where you violate human rights but you get some massive GDP gain versus the case where you don’t and you don’t get the GDP gain.

It’s entirely plausible that either of those actions could turn out very positive or very negative because the future is just so unforeseeable. But I don’t see why it breaks in favor of the human rights case rather than just increasing GDP whenever that’s better in expected value terms.

COWEN: Again, if you think there’s any case for deontology at all, there’s not an argument consequentialism can wield to overturn the deontological conclusion in consequentialist terms. You’re just stuck with, “Don’t do it.”

Nowhere in the book do I try to outline how far those human rights extend. It’s partly beyond the sphere of my expertise. Also, I’m genuinely uncertain, but it seems to me that their sphere is not zero, yet not so absolute that everything, or even most things, are about deontology.

WIBLIN: I wasn’t sure whether to challenge you on this because I think, actually, it is good to promote the idea of human rights and to lock those into law.

I guess for both moral uncertainty reasons — that it’s possible that violating human rights is just absolutely wrong and no amount of good consequences can compensate — and also because it seems like a better rule to follow, one that in fact will lead to a better future, because GDP isn’t everything. Institutions matter a lot more, and so does concern for welfare.

COWEN: I fully admit I punt on the human rights issue. The book is about growth. I just want to reassure people, “You don’t have to go crazy and become an evil person to maximize growth.” But it would require another book, actually, a much longer one, and a lot of books should be organized around just one idea, one key idea.

WIBLIN: In the book, you s