Transcript

Rob’s intro [0:00:00]

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.

Welcome back to the new year! We’ve got lots of great episodes working their way through the pipeline for you.

Will MacAskill’s first appearance on the show was a crowd pleaser, and I expect this one to be equally or more popular. Among other things, we talk about why apparently commonsense moral commitments may imply you should sit at home motionless, and how to fix this problem. What an altruist should do if they’re risk-averse. Whether we’re living at the most important time in history. And Will’s new views on the likelihood of extinction in the next 100 years.

While we’ve covered some of these issues before, Will has a lot of new and sometimes unexpected opinions to share. And we’ve got chapters in this episode to help you skip to the section you want to listen to.

Before that a handful of quick notices.

First, you might be pleased to know that people who started listening to our previous episode with David Chalmers, our longest release yet, on average finished 59% of it. The whole episode is 4h42m long, and on average people are making it through 2h45m of it. That’s fantastic commitment, and I think vindicates our in-depth approach to interviews.

Second, in my last interview with Will we discussed the book he was writing at the time, called Moral Uncertainty. That book is now coming out in late April and you can pre-order it on Amazon. We'll include a link to it in the blog post associated with the show. Of course, keep in mind this is a pretty academic book, so it's most suitable for people who are really into the topic.

Third, as we discuss in the episode Will works at the Global Priorities Institute at Oxford. GPI is currently hiring Predoctoral Research Fellows in Economics. This is a great opportunity for anyone who might do a PhD in economics, or is already doing one.

Unfortunately the closing date for expressions of interest is only 12 hours after I expect this episode to come out — midday UK-time on Friday the 24th of January. We’ll link to that job ad in the show notes in case you might have a shot at getting an application in really quickly.

But more generally, I know many economists or aspiring economists listen to this show, and the Global Priorities Institute is very interested in getting more economists on its research team, or having more future economists visit in some form. So if you find this conversation interesting, you should go fill out the expression of interest form at globalprioritiesinstitute.org/opportunities/. It only takes a few minutes to do, and I know they'll be interested to hear from you.

You can also, as always, find dozens of other opportunities on our job board at 80000hours.org/jobs

Finally, before we get to the interview, in this episode we bring up a wider range of global problems than we usually have a chance to cover.

For practical reasons we as an organisation have to focus on knowing a serious amount about a few particular problem areas first. We’ve mostly worked on AI risk, biorisk, and growing capacity for people to do good (e.g., through doing global priorities research) because we think these issues are really important and focusing on them is how we can move the needle the most. However, this has meant our research is on a narrower range of things than the full portfolio of challenges we think our readers would ideally work on solving.

So we’re excited when we see people exploring other ways to improve the long-term future, especially ones that they personally have uniquely good opportunities in.

We’ve now got a decent list of some ideas for how people can do that at 80000hours.org/problem-profiles/. If you scroll down to the subheading ‘Potentially pressing issues we haven’t thoroughly investigated’ there’s a few dozen listed with brief analysis, which might help you generate ideas.

Alright, without further ado, here’s Prof Will MacAskill.

The interview begins [0:04:03]

Robert Wiblin: Today, I'm speaking with Will MacAskill. Will will be well known to many people as a co-founder of the effective altruism community. He's an associate professor of philosophy at Oxford University, currently working at the Global Priorities Institute, or GPI for short, a research group led by Hilary Greaves, who was interviewed back in episode 46. Will has published in philosophy journals such as Mind, Ethics and the Journal of Philosophy. He co-founded Giving What We Can, the Centre for Effective Altruism and our very own 80,000 Hours, and he remains a trustee on those organizations' various boards. He's the author of Doing Good Better and a forthcoming book on moral uncertainty, and is in the process of brewing a new book on longtermism. So thanks for coming on the podcast, Will.

Will MacAskill: Thanks for having me on again, Rob.

Robert Wiblin: Yeah. So I hope to get to talking about whether we are in fact living at the most important time in history. But first, what are you working on at the moment and why do you think it’s a good use of your time?

Will MacAskill: Well, the latter question is tough. Currently, I'm splitting my time roughly three ways. One part is ongoing work with the Centre for Effective Altruism and issues that generally come up as a result of being a well-known figure in the EA movement; that's about a quarter of my time or something. Another quarter of my time is spent on the Global Priorities Institute: helping ensure that goes well, helping with hiring, helping with strategy, and some academic research. And then the bulk of my time now, which I'm planning to scale up even more, is work on a forthcoming book. Tentatively, the working title is "What We Owe the Future". It's aiming to be both readable by a general audience and, hopefully, also something that could be cited academically. It really makes the case for concern about future generations, the overwhelming importance of future generations when combined with the premise that there are so many people in the future, and then explores what follows if you believe that. For that purpose, I've been on a speaking tour for the last few weeks, which has been super interesting.

Robert Wiblin: Talking about longtermism?

Will MacAskill: Yeah, I've tried to create a presentation that's the core idea and core argument of the book, in order that I can get tons of really granular feedback on how people respond to different ideas, and on what things I'm comfortable saying on stage. Sometimes I think I might believe something, but then do I believe it enough to tell a room full of people? Sometimes I feel like my mouth starts making the motions, but I don't really believe it in my heart, and that's pretty interesting. So I've been quite scientific about it. Everyone in the audience gets a feedback form; they score how much they knew about the ideas beforehand and how convincing they found the talk. My key metric is: of people who put four or less for the first question (how much they already knew), what proportion put six or seven out of seven for how convincing they found the talk? And I've given a few different variants of the talks.

Robert Wiblin: Yeah. What fraction of people are you convincing? Roughly?

Will MacAskill: About 50%, yeah. Though the data's quite noisy, so it's not actually a scientific enterprise. Maybe I've got that up to 60% or something over the course of the period.
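For concreteness, here is a minimal sketch of how the metric Will describes might be computed; the feedback rows below are invented for illustration.

```python
# A minimal sketch of Will's key metric; the feedback rows are hypothetical.
# Each form records prior knowledge and convincingness, both on a 1-7 scale.
feedback = [
    {"knowledge": 2, "convincing": 6},
    {"knowledge": 4, "convincing": 7},
    {"knowledge": 6, "convincing": 5},  # excluded below: already knew the ideas
    {"knowledge": 3, "convincing": 4},
]

# Restrict to people new to the ideas (prior knowledge of four or less)...
newcomers = [r for r in feedback if r["knowledge"] <= 4]
# ...and count the share who were strongly convinced (six or seven out of seven).
convinced = [r for r in newcomers if r["convincing"] >= 6]

print(f"{len(convinced)}/{len(newcomers)} = {len(convinced) / len(newcomers):.0%}")
```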

Robert Wiblin: In terms of the granular feedback, have you learned kind of any arguments that don’t go down well or some that particularly do?

Will MacAskill: Yeah, I feel like I've learned a ton actually, and I'm still processing it. One big thing for sure is that many more people are willing to say future people just don't matter than I would have expected. So in moral philosophy, the idea that when you're born, or when your interests are affected, is morally irrelevant: that's just absolutely bread and butter, no one would deny that. And so I think I'd just assumed that would carry over to the wider world, especially if I'm talking on campuses, to kind of lefty audiences in general. But no; actually, probably the most common objection is–

Robert Wiblin: That future generations don’t matter.

Will MacAskill: Yeah. Or just why should I care? Yeah. And then the second thing that was most interesting is I was expecting a lot of pushback from the environmentalist side of things. I do talk about the importance of climate change, and I talk about the fact that species loss is another way of impacting the long-run future, but they're not the focus of the talk. The focus is on other kind of pivotal events that could happen in the coming 50 years. And I was expecting to get more pushback from people who think any moment you're not talking about climate change is attention taken away from the key issue of the time. Certainly some people said that, but I think the proportion of people on campuses who are, let's say, deep environmentalists is just lower than I would've thought.

Robert Wiblin: Interesting. Yeah. That’s maybe one of the most common pieces of feedback on the podcast in terms of content or substance, is that we don’t talk enough about climate change or don’t think enough about environmentalism or deep ecology and things like that. So yeah, I’m kind of surprised that maybe that’s not the case. Was it one of the more common kind of substantive critiques that would be–

Will MacAskill: Yeah, it still was, because the objections people raised were very spread out; that definitely came up. Maybe people were just more happy with the fact that I was saying, "Yes, climate change is a super important issue, it's just not the focus of this talk", and people were actually quite open to other issues. Another little piece of evidence there: Bill McKibben, who's an academic at Middlebury College, I think, and has for many years been an environmental activist.

Will MacAskill: He's got a recent book where, it's funny, it's making almost exactly the same argument that I wanted to make, which I'm very happy about: that we should be really concerned about genetic engineering of humans and artificial intelligence. He's saying that back 30 years ago, when climate change was just nascent, you could really change the policy landscape; things hadn't gotten entrenched, or sclerotic, as you love to say, Rob. On the podcast… I've never heard the word sclerotic as much as–

Robert Wiblin: Personal favorite, yeah.

Will MacAskill: But now it’s the case that for advances in biosciences and AI, we’re in that situation at the moment. And so it was really great actually seeing someone who’s this, you know, long-time climate activist coming up with the same framing of things that I was thinking.

Robert Wiblin: Are there any arguments that have gone down better than you’ve expected?

Will MacAskill: I talk about the fact that future generations are disenfranchised in the world today. They don't have a voice, they don't have political representation, they can't trade or bargain with us. That goes down pretty well.

Robert Wiblin: This kind of reminds me of, I guess, people who try to write books by posting chapter by chapter on blogs, or posting their ideas on Twitter or Facebook, and getting feedback that way. So it's just an in-person way of doing that.

Will MacAskill: Yeah, exactly. The thing I most thought of was the Y Combinator advice to talk to your users. Anne and I ran a study on Positly, you know, trying to get some general sense of how people react. But the sorts of people I want to talk to are undergraduate and graduate students on college campuses, who're pretty close to the target demographic, and nothing really substitutes for being able to interact directly with those people and see which things go down well and which don't.

Robert Wiblin: Do you worry that this could lead you to kind of pander to the median undergraduate student, where that's maybe not as intellectually honest, or as appealing to you? Maybe you just want to talk to the people who are most keen on this view, rather than try to get the person who's kind of resistant to be okay with it.

Will MacAskill: Oh, great. Well it's interesting… I thought you were going to go a different way. There's definitely a tension I'm feeling between the pull towards the general audience and the pull towards kind of academic seriousness. So I'm trying to get the people at GPI somewhat invested in the book too, so that I have them pulling me on one side, with the publishers and so on pulling me on the other. Then there's the question of whether you're trying to make it just okay for everybody, versus making some people really love it. I think I'm aware of that, so my key metric was the proportion of people who love it.

Will MacAskill: I would actually have gone for the metric of just the proportion of people who give it a seven, but that was just too noisy. So, six or seven. And it turns out people at Cambridge just don't give extreme scores. I didn't get a single score that was above six or below four.

Robert Wiblin: Interesting.

Will MacAskill: Yeah, something like that. From like a hundred people.

Robert Wiblin: Is that something with the grading? Cause I know at Cambridge, right, they don't give, or it's very rare to get, a score above 85% or something.

Will MacAskill: Maybe–

Robert Wiblin: Or just, “We just don’t use sevens–”

Will MacAskill: I just thought maybe it’s British people versus Americans who’re like more likely to–

Robert Wiblin: So you did it in America as well?

Will MacAskill: Oh yeah, I did Stanford, Berkeley, Harvard, MIT, Yale, NYU, Princeton.

Robert Wiblin: Any big differences other than the scoring?

Will MacAskill: Yeah, I think differences were driven more by the nature of the local groups than by the universities themselves.

Robert Wiblin: I was thinking maybe Americans are systematically different than Brits.

Will MacAskill: Oh, okay. Yeah, I mean obviously they are. Brits, especially on college campuses, are more academic on average. Obviously you get super-academic people in US universities, but you know, Oxford and Cambridge select only on the basis of academic potential, whereas in the US there's more of a sense of wanting well-rounded individuals. You don't get them in Britain!

Will MacAskill: There’s legacy students, there’s athletes… there’s people who are just future politicians. And that varies by school. But then on the other hand, people in the US are more entrepreneurial. They’re more kind of go get it and I think the cultural difference is quite notable actually.

Robert Wiblin: So it seems like your life has changed a fair bit over the last few years. You used to be doing more organizational work, and now you're more on the academic track.

Will MacAskill: That's right, yeah. There's been two waves which I've vacillated between. I obviously started very much on the academic track. Then there was a period of several years of setting up Giving What We Can, 80,000 Hours and the Centre for Effective Altruism, where I was clearly much more on the operational side of setting up these nonprofits. Then writing Doing Good Better and finishing up my PhD was a long chunk when I was back in research mode. And then at that point I moved back to working closely with 80,000 Hours and then running the Centre for Effective Altruism. And now, the outside view doesn't believe this, but I've finally settled.

Will MacAskill: Yeah. I mean, I think I extended… The period when I was a graduate student, I think it made sense to be kind of riding two horses at once. I probably should have focused earlier than I did, whereas now I'm feeling much more confident that I'm making my bets and that makes sense. EA is big now. It's going to last a long time. We no longer need people trying to spin plates all over the place. We need more people who are willing to commit for longer time periods, and that applies to me as much as it does to anyone else.

Robert Wiblin: Yeah, are you enjoying this phase more? You seem kind of happier these days.

Will MacAskill: Yeah, I'm enjoying it a lot more actually. You know, that's not a coincidence. There's a whole set of things that are correlated with each other, where the causation goes both ways: what I enjoy most, what I think I'm best at, where I think I'm having the most impact. And it's definitely clear to me that if I think about running an organization and so on, I can do it, but I don't think I'm 99th-percentile good at that. Whereas the thing that I'm best at, end up enjoying most, and also think I get the most impact from is this in-between space between academia and the wider world: taking ideas, not necessarily being the 99th-percentile academic, not being Derek Parfit or something, but being able to crystallize those ideas, get to the core of them, and then transmit them more widely.

The paralysis argument [0:15:42]

Robert Wiblin: Let's turn now to this paper that you've been working on, which I really enjoyed, because it's got a very cheeky angle that's kind of cornering people who I'm inclined to disagree with already anyway. It's called the paralysis argument, and you've been writing it with your colleague Andreas Mogensen. It has this pretty fun conclusion… how would you describe it?

Will MacAskill: Okay, well I'll start off with, it's not even a thought experiment, just a question. So I know you don't drive, but suppose you have a car. Suppose that you have a day off and you're not sure whether to stay home and watch Netflix or to do some shopping. Suppose you go shopping: you drive to various different places in London, buy various things, and come back over the course of the day. And my question is: how many people did you kill in the course of doing that?

Robert Wiblin: Well, I know the answer, but I guess the naive answer is no one.

Will MacAskill: The naive answer's nobody. And I'm not talking about the fact that you could have spent your time making money that you could have donated, and I'm not talking about your carbon emissions. What I'm talking about is the fact that over the course of that day, you have impacted traffic. You've slightly changed the schedules of thousands, probably tens of thousands, of people. And on average over the course of someone's life, which I think is 70,000 days, a person will have about one child. So if you've impacted 70,000 person-days, then statistically speaking, you've probably influenced the exact timing of a conception event. And what does that mean? Well, it means you've almost certainly changed who got born in that conception event. In a typical ejaculation there are 200 million sperm.
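As a back-of-the-envelope check on that statistical claim, here is a sketch using the figures as quoted in the conversation; both numbers are loose approximations, not anything from the paper.

```python
# Rough arithmetic behind the claim, using the figures as quoted
# in the conversation (both numbers are loose approximations).
DAYS_PER_LIFETIME = 70_000       # lifespan figure as quoted
CHILDREN_PER_LIFETIME = 1        # roughly one child per person

# A day of errands perturbing the schedules of tens of thousands of people
# touches on the order of tens of thousands of person-days.
person_days_affected = 70_000

# Conceptions occur at a rate of about one per lifetime of person-days, so:
expected_conceptions_affected = (
    person_days_affected * CHILDREN_PER_LIFETIME / DAYS_PER_LIFETIME
)
print(expected_conceptions_affected)  # ~1.0: one conception event, in expectation
```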

Robert Wiblin: Things you learn on this show!

Will MacAskill: I know, it’s a little factoid that everyone can–

Robert Wiblin: Dine out on?

Will MacAskill: Yeah, exactly. So take two people who are having sex and going to have a child: if the timing of that event changes ever so slightly, even by 0.1 of a second, almost certainly a different sperm fertilizes the egg, and a different child is born. But now that different child is going to impact all sorts of stuff, including loads of other reproductive events, and so that impact will filter out over time, until at some point, it's hard to assess exactly when, but let's say it's a hundred years' time, basically everybody's a different person. And if you're having such a massive impact on the course of the future by driving to the shops, well, one thing you're going to have done is change when very many people die. Even just looking at automobile accidents, I think 1-2% of people in the world die in car crashes, and it's obviously very contingent when someone dies. So over the course of this next hundred years, as all these identities of people become different as a result of your decision to drive to the shops, loads of people who would have existed either way will die young, will die in a car crash that they wouldn't otherwise have died in.

Will MacAskill: And now in expectation, exactly the same number of people will have been saved from car crashes and will die later than they otherwise would have. And that's where the distinction between consequentialism and nonconsequentialism comes in. From a consequentialist perspective, if you've caused the early deaths of a million people and extended the lives of a million other people by just the same amount, well, that's just the same; it washes out. So the consequentialist doesn't find anything troubling here. But the paradigm nonconsequentialist, and I'm not saying all nonconsequentialists do, endorses the following two claims: an acts/omissions distinction, such that it's worse to cause harm than it is to allow harm to occur; and an asymmetry between benefits and harms, where it's more wrong to cause a certain amount of harm than it is right or good to cause a corresponding amount of benefit.

Will MacAskill: And if you hold those two claims, then you've got to conclude that for these perhaps millions of people, certainly hundreds of thousands of people, who have died young as a result of what you've done, the fact that you've caused those deaths is worse than the corresponding benefit from the fact that you've saved the same number of lives. And so if you want to avoid causing huge amounts of harm that are not offset by corresponding benefits, then in every instance where you might affect reproductive events, you ought to just do nothing: do whatever is the omission. In the paper we don't take a stand on what counts as omitting. It could be the Jain practice of sallekhana, where you sit motionless until you slowly starve to death; the Jains defended that view as the best way to live on the grounds of doing no harm. It could also be that you just act on every impulse and go with the flow. But whatever this nonconsequentialist view decides is an omission, that's what you've got to do.
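To make the structure of the argument explicit, here is a rough formalization; the harm-weighting factor k is our illustrative device, not notation from the paper.

```latex
% A rough formalization of the harm-benefit asymmetry; the weighting
% factor k is illustrative, not notation from the paper.
% Score an act by the benefit B it causes and the harm H it causes:
V(\mathrm{act}) = B - k \cdot H, \qquad k > 1.
% Driving to the shops hastens and postpones deaths in equal measure,
% so B \approx H in expectation, and therefore
V \approx (1 - k)\,H < 0 \quad \text{whenever } H > 0.
% Every identity-affecting act comes out wrong; only omissions escape.
```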

Robert Wiblin: It doesn’t sound like a normal life?

Will MacAskill: It’s not going to be a normal life. Yeah. You’re going to be extremely restricted in what you can do.

Robert Wiblin: Yeah. Okay. So the basic reason you're getting this result is that we're drawing an asymmetry, a distinction, between creating benefits and causing harms. And inasmuch as you're creating lots of benefits and lots of harms, it seems like everything's going to be forbidden, unless you can find some neutral thing you can do that doesn't cause benefits or harms in the relevant sense.

Will MacAskill: Yeah, exactly. And for clarification, for this audience, who are probably very familiar with utilitarian and consequentialist moral reasoning: this is part of a wider project at the Global Priorities Institute of thinking, well, how does longtermism look if you take alternative moral views? This is one example: if we're reasoning seriously about the future and we're nonconsequentialist in this paradigmatic way, what follows?

Robert Wiblin: Yeah, so that’s kind of intuitive, but I guess I found the paper as I was trying to read it a little bit confusing. It’s very philosophical.

Will MacAskill: It’s kind of dense. It’s definitely a philosopher’s paper.

Robert Wiblin: Yeah. I guess there's this distinction between harming people and benefiting them, and then there's also this distinction between harming people and allowing people to be harmed, which seems to be relevant. Do you want to explain why you end up talking quite a bit about that?

Will MacAskill: So the key question, and if I was going to guess at the most promising strand for nonconsequentialists to try to respond to this, it's to say, "Well, yes, we think that in general there's a distinction between actions and omissions". So for example, most people intuitively would say that if I saw you, Rob, drowning in a shallow pond and then walked on by, that would be very wrong, but it wouldn't be as wrong as if I strangled you right now. It's quite intuitive that there's a difference there.

Will MacAskill: Yeah. But there's a question of, well, okay, what if I kill you via driving to the shops, causing different reproductive events that then have this long causal chain that results in your death? Is that still an action, or is it an omission? There's definitely a sense in which it's an act. Intuitively it kind of seems like an action: I've moved; I did this positive thing of driving to the shops. I didn't know it was going to kill you, that's a difference, but it still seems like a positive action. But perhaps the nonconsequentialist can come up with some way of carving the distinction such that these very long-run, causally complex effects just count as omissions or something.

Robert Wiblin: Okay. So they would try to get out of it by saying, "Oh, you're not actually harming them. You're merely allowing them to be harmed, because what you're doing wasn't an action in the relevant sense of actively causing someone harm".

Will MacAskill: That’s right.

Robert Wiblin: And so that's going to get us out of this idea that harming people is not merely bad, but prohibited.

Will MacAskill: That's right. But then the question is, well, can you have an account of acts and omissions that gives us that answer? And that's where it starts to get very in-the-weeds and more technical, because the existing accounts of acts and omissions get quite complicated. There is one account that is independently very influential, which is Bennett's account. On this account, supposing I make something happen or cause some event to happen, that is an action if the way you would explain that event happening involves some bodily movement of mine that is a very small part of the overall space of all bodily movements I could have taken.

Robert Wiblin: Very intuitive; I think that’s what everyone meant all along.

Will MacAskill: I mean I think it’s actually kind of good to say.

Robert Wiblin: I guess at first blush it sounds–

Will MacAskill: The full statement is really complicated. I always forget exactly how to state it too.

Robert Wiblin: So, okay. We started out with this intuitive thing, that your actions causing harm is worse than your actions causing benefit is good, and indeed that actively harming people through your actions is probably prohibited. And we've ended up with this kind of absurd conclusion that any action you take is probably forbidden ethically. I guess one has to suspect that something's gone wrong here, right? Because it's so counterintuitive. So as much as I'd love to skewer deontologists and find ways that their views are incoherent, you'd have to hope that there's probably some solution here, some way they could patch the view that saves them. Do you want to discuss the various attempts that one could make?

Will MacAskill: Yeah, I mean, it's not totally obvious to me. I do treat it as a reductio: if I was a nonconsequentialist, I'd want to give up one of my starting premises rather than endorse that conclusion. But it's not totally obvious, because it seems to follow quite naturally from the underlying intuitions undergirding this style of nonconsequentialism, which is, well, it's worse to harm than to benefit, and we happen to be in a world which is so incredibly complicated that your actions inflict huge harms. But I agree, and you know, we've gotten feedback from nonconsequentialists. Actually, at one journal we got to the last stage, and it was a vote among the editors, and they all decided they didn't like the paper, but for different reasons. One of them asked, why is this a reductio? She just endorsed the conclusion.

Robert Wiblin: A Jain, perhaps.

Will MacAskill: People vary a lot.

Robert Wiblin: So someone who's sympathetic to consequentialism just looks at this and says, "Oh, this just demonstrates the problem with the asymmetry between harm and benefit". To a consequentialist who doesn't feel the appeal of that asymmetry, it's very easy to say, "Well, I just never thought there was an asymmetry to begin with, and so this is no problem now".

Will MacAskill: Yeah, exactly. That's what I think the rational thing to do is. I think it's a way of demonstrating that we shouldn't have had that asymmetry. But then that's really important, because even if you're worried about consequentialism in other contexts or something, it means that when it comes to thinking about the long-run future, we can't have a harm-benefit asymmetry. And that's important. You know, consider a carbon tax or something. What level of carbon emissions should we try to get to? Well, the economist says there's some social optimum: if we were to tax carbon beyond that point, the cost to ourselves, the forgone benefits of burning the coal, would outweigh the avoided harms to others. But if you've got this harm-benefit asymmetry, you need to go further than that, because I'm just benefiting myself by burning fossil fuels, but I'm harming someone else. With the harm-benefit asymmetry, I need to get the amount of carbon we emit as a society not just down to some low level that would be guaranteed by a significant carbon tax, but actually down to zero. So it really does make a difference, I think, for how we think about the long run.

Robert Wiblin: Yeah, interesting. Though people who think that it's not okay to harm people usually do have all kinds of exceptions. Suppose I say something mean to you that is true, and I think it's actually good: it's good for the world that you know about this bad thing you've done, but it's going to make you feel sad. Most people don't think that's impermissible. Most people don't think it's impermissible to drive just cause they're polluting, though maybe the second case is more questionable.

Will MacAskill: Yeah, you at least need to have some sort of explanation of why it's not wrong to harm in those cases. Perhaps there's some implicit social contract that we can all drive and everyone benefits. Perhaps you're assuming the other person consents. If you're a surgeon–

Robert Wiblin: Yeah, but in that case you're not harming them overall. If we really thought that you could just never take actions that left others worse off… maybe that's where we're going here. But even just on a normal level, you'd be saying, "Well, you can't give your colleague negative feedback. You can't end a relationship that you really hate". All kinds of actions would obviously just be prohibited, even though they're kind of good for the world.

Will MacAskill: Yeah, I mean, I think in those cases there's a couple of things to say. One is that almost all nonconsequentialists would say it's still about weighing the benefits and harms, so if the benefits are great enough, then it's okay to inflict some harms, especially if the harms are small. The second would be that not all harms count. Perhaps negative feedback, or the harm of having your heart broken, is just not the sort of harm that counts, morally speaking. For this argument, you know, you're killing people; that's certainly the sort of harm that counts, morally speaking. And then the third thing would just be that there might be lots of things that one implicitly signs up for. So if you take a job, you're implicitly agreeing to get negative feedback if you're not doing well enough. If you enter a relationship, you're understanding as part of that that you may be broken up with. And that of course would be okay, because consent can make harms permissible.

Robert Wiblin: Okay. Interesting. Although I guess in the pollution case, you might think, “Well you just can’t drive cars cause it’s causing harm to total strangers far away who’ve never consented”.

Will MacAskill: I mean, yeah, I’m actually–

Robert Wiblin: I guess that doesn’t sound so crazy to me to be honest.

Will MacAskill: Yeah. It doesn’t sound so crazy. I mean, perhaps they say the harms are small enough, perhaps if the pollution is just within your own country, then there’s a kind of implicit social contract.

Robert Wiblin: Or what about the person who voted against the contract?

Will MacAskill: It gets complicated.

Robert Wiblin: Yeah, this is all a bit of a big diversion. So what kinds of moves could someone make who wants to say, "Well, I do believe in the harm-benefit asymmetry, but nonetheless I don't want to buy this paralysis thing"? How can they find some way of escaping the conclusion of paralysis?

Will MacAskill: Great. I think there are some ways that don't work and some that might. I'll start with the ones that don't work. One thing you might be inclined to say is that, well, when the consequence goes via the acts of others, then it doesn't count. In these cases, when I do some action, it's via other people's actions that the harm is ultimately committed.

Robert Wiblin: So it’s mediated now by someone else’s choices.

Will MacAskill: Yeah, exactly. You might think, "Oh, that absolves me of any kind of responsibility". But just intuitively, imagine you're selling arms to some dictatorial regime, and you know that regime is going to use them to kill minorities in the country. The fact that the harm is mediated by the dictator and their armies doesn't seem to absolve you of the guilt of selling the arms to that dictator. So I think that initial response just doesn't generalize. It doesn't seem like something we would want to endorse.

Robert Wiblin: Yeah. Do you think in the cases where we actually dive down and think about concrete cases where you take an action and very foreseeably it’s going to cause or allow someone else to do a lot of harm, that in general we would reject that as absolving you?

Will MacAskill: I think so. Yeah. I think so.

Robert Wiblin: So what else might someone try?

Will MacAskill: Well, one thing someone might try is just when the consequences are sufficiently causally distant from you, then they don’t matter.

Robert Wiblin: Causally distant in what sense? Like lots of steps?

Will MacAskill: I mean there could be various ways you could unpack it.

Robert Wiblin: Lots of different actors in the meantime.

Will MacAskill: Yeah, perhaps various steps. But again, I think this just isn't intuitive. Imagine you see someone who has built this incredible… Rube Goldberg machine is the term I'm looking for. So someone has built this incredibly complex machine. It has all sorts of levers. In fact, it's just this box, and you don't even know how complex it is inside; it could be indefinitely complex. But the input is that someone pushes a button, and the output is that someone else dies. Well then, it just seems completely irrelevant how causally complex the interior workings are. It also seems irrelevant if it's 20 years delayed or something. If someone knows that that person's going to die, then, well, it's clearly wrong.

Robert Wiblin: Yeah, or like if you leave a bomb somewhere that's not going to go off for a hundred years, but you know there'll be someone in the house and it'll blow up the house and people will die. The time doesn't matter. And if you went with the box thing and you're like, "Oh, but I feel bad pressing the button and then someone dies", they'd say, "Well, I'll just add more levers and even more causal steps", to make you feel better now.

Will MacAskill: Yeah, it’s really unconvincing.

Robert Wiblin: What about, I guess, foreseeability? Is that another thing that people might say?

Will MacAskill: Well, the issue is that, at least once I've told you this argument, the harms are foreseeable; they're just not foreseeable for any particular person. But again, that doesn't seem to matter. Imagine I've left a bomb in the forest. There's no particular person that I think has been made worse off, but it's foreseeably going to harm somebody, and that seems wrong. Another intuitive case, we call it the dice of Fortuna: the goddess Fortuna gives you a box with a set of dice in it. If you roll the dice and the result is above the average value of the dice, then someone's life is saved. If it's below the average value, then someone is killed. And by shaking this box you get a dollar. Ought you to do it? The consequentialist will be like, "Sure! A free dollar!" But the nonconsequentialist, I think, should say no, and would also find it intuitive that you should say no.
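As a toy comparison of the two verdicts on shaking the box, in the spirit of the harm-weighted sketch earlier; the harm weighting K and the stand-in value of a life are invented for illustration, not taken from the paper.

```python
# Toy expected-value comparison for the dice of Fortuna; all numbers
# are illustrative, not from the paper.
P = 0.5                  # symmetric chance of saving vs. killing someone
LIFE_VALUE = 1_000_000   # stand-in value of a life (arbitrary units)
SWEETENER = 1            # the dollar you get for shaking the box
K = 2                    # harms weighted K times as heavily as benefits (K > 1)

# Consequentialist: the expected harm and benefit cancel, the dollar remains.
consequentialist_value = P * LIFE_VALUE - P * LIFE_VALUE + SWEETENER
print("Consequentialist:", consequentialist_value)   # +1, so shake it

# Harm-weighted nonconsequentialist: the weighted expected harm dominates.
weighted_value = P * LIFE_VALUE - K * (P * LIFE_VALUE) + SWEETENER
print("Harm-weighted:", weighted_value)              # large negative, so refuse
```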

Robert Wiblin: To be honest, as a consequentialist-leaning person, I also find it a bit horrifying, but I don't feel like I would shake it.

Will MacAskill: Okay. You don’t feel like… Interesting.

Robert Wiblin: Well, I mean, I guess even just a little bit of moral uncertainty is going to trump it, cause the dollar is such a small amount. So yeah.

Will MacAskill: But that’s very close now to the situation that we’re actually in. Because again, you don’t know who these people–

Robert Wiblin: You’ve only got a tiny benefit and then like hundreds died and hundreds lived.

Will MacAskill: Exactly. Which seems very similar to this dice of Fortuna case.

Robert Wiblin: Yeah. So I guess, why does it then seem so counterintuitive to us? Is it that we've thrown together the long causal chain, the intermediation by others, and the unforeseeability of it, and maybe all of these things together are just weakening the intuition that these actions are wrong?

Robert Wiblin: I'm curious to know, do many people encounter this and say, "I actually find the paralysis thing kind of intuitive"? Cause I guess there's some sense in which it's intuitive: well, your actions are harming all of these people, so maybe you just can't do all these things that you thought were totally normal.

Will MacAskill: Well, yeah. One of the reviewers’ comments. Some people, one person–

Robert Wiblin: You have an existence proof.

Will MacAskill: One out of eight of the editorial board thought that. But yeah, unfortunately I think the reviewers we've had have not really engaged with the thought experiment, to say either, "No, there is some difference between the dice of Fortuna case and the case of driving to get some milk", or that in both cases it's actually permissible. And I really think that they'll have to say in both cases it's permissible.

Robert Wiblin: Is there any difference if, say, shaking the Fortuna box is what lets you go to work and do your normal day-to-day business? So it's not that you're getting a dollar; it's that you're now permitted to go and actually live a normal human life. Then maybe it seems more permissible. And whenever you stop shaking the box, you just have to stop moving, which doesn't sound so good. But then we've made it exactly the same as the other case.

Will MacAskill: Yeah, if it's extreme enough. But it's not exactly the same, cause it's not one person; it's hundreds of thousands of people every time. Perhaps you might think, well, there are some things that morality can't require of you. So you might have the view, again as a nonconsequentialist, that morality can just never require you to sacrifice your life, no matter how great the stakes. Okay, fine. But we were just talking about you doing the groceries. If you're now at the stage where you're going to starve to death if you don't do the groceries, okay, at that point it's permissible. But the vast majority of our actions aren't like that; you know, you're driving to the cinema or something.

Robert Wiblin: So the Jains just sit there until they die. I suppose another option would be to try to totally causally cut yourself off from any other humans. So go and live as a hermit in the forest, or go to Siberia, and try to make sure you have no interactions with other humans.

Will MacAskill: So you could try to do that, but the course of doing that, you know, buying all the canned goods, would itself involve huge amounts of harm.

Robert Wiblin: This is the least bad option, perhaps, cause once you managed to get far enough away… Anyway. Okay. So I think there's at least one more attempted solution here, which is maybe the one that I found the most intuitively appealing: that it's Pareto acceptable to everyone ahead of time that you go to the shops. If you could survey everyone on Earth and ask, "Do you personally prefer that this person not go to the shops?", then, because it's unforeseeable who's going to be benefited and who's going to be harmed, they would consent to it if you could ask them; they don't see themselves as being made worse off by it. Do you wanna explain this one a bit more?

Will MacAskill: Yes. I thought you explained it quite well. So in economics and philosophy, a Pareto improvement is where some people are made better off and no one is made worse off. An ex ante Pareto improvement is where there's going to be some gamble. So perhaps everyone enters a lottery where it costs $1, but you've got a 50-50 chance of getting $10. Now 10 people enter that lottery. It's ex ante a Pareto improvement, cause everyone prefers that lottery to no lottery, but ex post it makes some people worse off and some people better off. And when I drive to the shops, absolutely everybody I affect, if I could ask them, "Do you care if I drive to the shops or not?" before they know what's going to happen, they would say, "No, I'm indifferent, because it's as likely that you will bring forward my death as that you will postpone it".

Will MacAskill: So it's indifferent for most people, and then you get a small benefit: the benefit of whatever you bought at the shops. So, ex ante, this is better for some people and worse for none, and I think this could be a way out. But the key thing is that it takes you quite a long step towards consequentialism.
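Here is a minimal simulation of the lottery Will describes (the $1 cost and 50-50 chance of $10 are the numbers from the conversation), showing a gamble that is an ex ante Pareto improvement but not an ex post one.

```python
import random

# Will's lottery: each ticket costs $1 and pays $10 with probability 0.5.
COST, PRIZE, P_WIN = 1.0, 10.0, 0.5

# Ex ante: every entrant faces the same positive expected value, so all
# ten prefer entering; an ex ante Pareto improvement over no lottery.
expected_value = -COST + P_WIN * PRIZE   # -1 + 0.5 * 10 = +$4
print(f"Ex ante expected value per entrant: ${expected_value:.2f}")

# Ex post: some entrants lose their dollar, so the realized outcome is
# not an ex post Pareto improvement.
random.seed(0)  # fixed seed so the sketch is reproducible
outcomes = [-COST + (PRIZE if random.random() < P_WIN else 0.0) for _ in range(10)]
print("Ex post outcomes:", outcomes)
print("Entrants left worse off:", sum(o < 0 for o in outcomes))
```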

Robert Wiblin: Because now we've said that if you cause harm, but it wasn't foreseeable ahead of time that a specific person was going to be harmed and would have told you not to do it, then it's okay.

Will MacAskill: Yeah. Well, let's just consider a thought experiment. Suppose the government is deciding, "Well, there's just this organ shortage. People are dying because they don't have kidneys. So what we're going to do is hold a lottery: people will be selected at random, they'll be killed, and their organs will be transplanted to save the lives of five other people". Ex ante, that's better from everyone's perspective; it extends everyone's expected life.

Robert Wiblin: I’m with you!

Will MacAskill: Rob's signed up already. But from a nonconsequentialist perspective, intuitively that's wrong; it's impermissible. And in fact, going a little bit deeper, there's a theorem by John Harsanyi, his aggregation theorem, which says that if everyone's wellbeing is structured in a way that satisfies some pretty uncontroversial axioms, and the way you aggregate satisfies ex ante Pareto, then you end up with utilitarianism. So if you endorse ex ante Pareto as a nonconsequentialist, you're not going to go all the way to consequentialism or utilitarianism, but you're going to get much, much closer, I think. So yes, this is a way out, but it undermines another significant commitment that the paradigm nonconsequentialist will want to hold on to.
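For reference, a standard textbook statement of the result Will is gesturing at; this is our paraphrase, not notation from the paper.

```latex
% Harsanyi's aggregation theorem, standard form (our paraphrase).
% If each individual i has von Neumann-Morgenstern utility U_i, social
% preferences also satisfy the vNM axioms, and the social ordering
% respects ex ante Pareto, then social welfare must take the form
W(x) = \sum_{i=1}^{n} w_i \, U_i(x), \qquad w_i \ge 0,
% a weighted sum of individual utilities: utilitarian in structure.
```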

Robert Wiblin: Yeah, so this just reminded me: I think a couple of years ago we were laughing at economists, because economists are obsessed, at least in some strands of thought, with Pareto improvements: things aren't good unless they're Pareto improvements, so everyone has to be either indifferent or better off from some policy change or some piece of behaviour.

Robert Wiblin: But this is actually really stupid. It sounds good, maybe at first blush, but in fact it means that basically no policy change is acceptable, because with every policy change, whether it's raising interest rates, decreasing interest rates or keeping interest rates the same, raising taxes or lowering them, there's going to be some person who's worse off and would say that it's no good. So basically it means that every policy change you could make, and indeed keeping the policy the same, is impermissible. And indeed even going out and buying things is no good, cause the other people who would have wanted to buy the thing are predictably, ex ante, worse off, since you drove up the price. So there are very few economic actions you can take that would actually be Pareto improvements. There's this funny obsession with the concept, and then a total forgetting of it in the actual application.

Will MacAskill: Yeah, absolutely. I never really understood the economists' fixation with this. I mean, it's definitely true that if you've got some distribution of wellbeing and there's any Pareto improvement available, take it; that's always great. But it just almost never applies. It is one of these things, though, where it means you don't need to make comparisons of wellbeing between people, cause if it's good for some people and not worse for anyone else, then you know that it's a good thing to do.

Robert Wiblin: An extra funny thing here is that typically it seemed like they were concerned with ex ante Pareto improvements, so you'd have to think ahead of time: is someone foreseeably worse off? But it seems like if you're not willing to make interpersonal comparisons of utility, what actually matters is ex post Pareto improvements. Suppose everyone expects to be better off, but then one person after the fact happens to be worse off. If you can't do interpersonal comparisons of welfare, and welfare is what you care about, then that could just invalidate the whole thing. Say we're playing some kind of lottery where each person's welfare goes up 10 with 90% probability, or goes down 1 with 10% probability. Everyone's in on this gamble ahead of time because it raises their expected utility. But if one person's welfare does go down by 1, and we're not willing to say whether that 1 for that person is more or less than the 10 points gained by the other people, because we're just not willing to compare between people, then it seems like we just can't say whether it was an improvement or a worsening.

Will MacAskill: Yeah. So it is the case that we know, either way, it's not going to be better. In fact, it's neither better nor worse, because we can't make the comparison; it's kind of incomparable. So it's kind of strange, where you're saying, "Yes, we should do action A over action B, even though I know for certain that whatever happens, action A is not going to be better than action B, it's not going to be worse, nor is it going to be equally as good". So yeah, it really does seem like they're blurring together the argument for ex post Pareto, where I do have an ordering and can say that some outcome is better than some other outcome, with a different view, which might be something like presumed consent: if it's an ex ante Pareto improvement, you can appeal to the fact that everyone would want it, and then you've got some extra principle which says that if everyone would want this thing if I could ask them, then it's okay. That's actually quite a different sort of justification from a purely welfarist one.

Robert Wiblin: Okay. Let's come back to the paralysis argument; that was a big diversion. So, you think the Pareto argument is the most promising. What were you saying its biggest weaknesses are?

Will MacAskill: Well, the biggest weakness is that you just have to give up on other parts of the nonconsequentialist commitment. In general, policies can be ex ante Pareto improvements while involving killing one person to save five, and so on; they can involve you doing all sorts of horrible things.

Robert Wiblin: And that stuff might be okay.

Will MacAskill: Yeah, that stuff would be okay too. So it’d be a big move towards kind of more utilitarian consequentialist thought.

Robert Wiblin: So it seems like another move that you thought people might make would be to try to play with the action/omission distinction here, to claim that these things you're doing are in fact not actions. Is that right?

Will MacAskill: Yeah. So you could try to develop an account of the acts/omissions distinction, and there are many different accounts, such that all of these long-run consequences you have are omissions rather than actions.

Will MacAskill: One account that would make for parity between me sitting motionless and me going to the shops is Jonathan Bennett's account, though its conclusion is actually that all of these consequences are actions. So rather than there being some omission available to me, such as sitting at home not doing anything, that would not actively kill all these people in the future, it says that that itself is an action.

Robert Wiblin: Well, that seems right. Yeah. It's tempting to say, well, just staying at home, that's an action too, one that has benefited and harmed people, and the mere fact that you were still doesn't really get you off the hook. So in fact everything is prohibited; there's no one privileged thing that's an inaction, and so maybe it all just cancels out.

Will MacAskill: That’s right, so that’s the kind of Bennett-esque view where the idea is just something’s an action if out of the space of all possible ways in which you could have moved, it’s quite a small portion of the space.

Robert Wiblin: It’s kind of specific.

Will MacAskill: It’s very specific. At least when you’re explaining whether some event happened.

Robert Wiblin: Isn't any action also going to be a very narrow range out of all the things that you could do, including sitting still?

Will MacAskill: Well that’s why you need this explanation of an event.

Robert Wiblin: The simplest way of explaining the action is one that says you did this specific thing rather than you didn’t do some other things.

Will MacAskill: Yeah. The simplest way of explaining why the event happened. So if you’re in the shallow pond and you’re drowning, what are all the things I could do that would still result in you drowning? Well I could dance, I could give you the finger. I could just walk away. There’s tons of actions that would still result in that consequence. Whereas if it’s strangling you, there’s like a very narrow range of actions that would result in that consequence.

Robert Wiblin: Whereas suddenly getting up and dancing would stop the strangling; almost any other action would prevent me from being strangled.

Will MacAskill: Yeah, and so on this view, it is the case that all your actions are causing huge amounts of harm, so you're actually in this kind of moral dilemma: everything you do is in some sense very wrong. But perhaps the nonconsequentialist can say, "Well, that still doesn't mean that you should engage in paralysis. Perhaps you should just do the best thing", because they've still got the same ranking between actions, even if all of those actions are inflicting huge amounts of harm. Though I do think the more natural thing to say would be that everything's wrong then; you're in a moral dilemma.

Robert Wiblin: Would any of them say that everything's prohibited, but some things still have better consequences? But then I suppose they're just back at consequentialism.

Will MacAskill: Well, you could be a nonconsequentialist and just deny the possibility of moral dilemmas. So you could just say, “It’s never the case that all your actions are wrong. There has to be at least one action that’s the best you could do in the situation you were given”. So, you know, it’s like Sophie’s Choice. You can either kill one child or both children.

Robert Wiblin: So you’re saying there’s degrees of prohibition, perhaps, and then like some things will be less prohibited than others and so that’s the thing that you should do.

Will MacAskill: Yeah, or at least that in any situation, the least wrong thing you can do is permissible.

Robert Wiblin: I see, okay.

Will MacAskill: So that could be a way out.

Robert Wiblin: Although then that might leave you with a very narrow range of things that are permissible, if only the least prohibited thing is permissible. It removes this nice appealing aspect of deontology in the first place, that it gives you a greater freedom of action; you're not obliged to do one single thing.

Will MacAskill: Yeah. Perhaps you have to be an altruist or something. But then the second thing is that this account, Bennett’s account, is normally criticized precisely because it makes inaction or being motionless too much like an action.

Robert Wiblin: Yeah, you give this nice example where it loses its appeal.

Will MacAskill: Yeah, exactly. So imagine someone is just lying on a bed, daydreaming idly, and if just a little bit of dust falls on a particular electrical circuit, it will set off some gadget that will kill somebody. But they just keep lying there. Bennett's account would say, "Oh, that person killed the person who was killed by the gadget".

Robert Wiblin: Because the lying there was such a narrow range.

Will MacAskill: Yeah, because if they'd done almost any action, it would've changed the air currents, the dust wouldn't have landed on the electrical circuit, and the person wouldn't have been killed. But most people intuitively think, "Oh no, that is inaction; that is an omission". And certainly, intuitively, if I'm just staying home all day, there is something to the thought that I'm acting less than if I'm going out into the world and making all these changes.

Robert Wiblin: Yeah, I gotta say, in that case I do feel like the person who's just lying still and allowing the dust to fall is equally culpable as if they'd killed them. But maybe I'm just the kind of person who's inclined to say that, and that's not how most people would react.

Will MacAskill: Yeah. I think it depends a bunch on whether they're straining to stay still: are they just lying there cause they're not thinking about it very much, or are they really trying? So intention, what someone's intending to do, I think, affects what our judgements are in these cases too.

Robert Wiblin: So I remember on the episode with Ofir Reich, we were both saying that we didn't really see any intuitive appeal in the act/omission distinction; we weren't really sure that there was a meaningful distinction there. There's this whole cottage industry of trying to make sense of act and omission. How can something not be an act? It's so weird. Even sitting still, isn't that an act? So when you have so many papers trying to rescue this concept, you do have to wonder whether the concept actually makes any sense.

Will MacAskill: Well, I mean, it's an argument that's been made: try to analyse this, and at best you get something that's this really kludgy-looking, really complex principle, in a way that might make you skeptical that it's a fundamental principle of morality.

Robert Wiblin: So we’ve got this idea that acting to harm people is bad, and then we have to create this big structure around what an action is, which I guess gets explained as some very narrow set out of all the movements you could have made. Does that carry the intuition for many people? Is that what they meant by an action? Do they really think that whether something was a narrow part of your whole space of options was the key to whether it was a harm?

Will MacAskill: Yeah, so I’m worried this is incorrect, but I think it’s the case that Bennett, who created this distinction, and I think it’s reasonably good as an analysis, then has the view, “Well, if that’s what it is, then obviously it’s not morally important”.

Robert Wiblin: Okay. Oh, right.

Will MacAskill: So he actually does have the conclusion, “Oh, now we’ve analysed it, we see that this just doesn’t make much sense. There are other things that are important. Whether you intended to kill someone, that’s important for punishment and so on, because we want to punish people who intend to kill others but not if it was an accident. And good evidence for whether you intended to kill someone is whether you took a particular course of action that is a very narrow set in the space of all possible behaviors you could have engaged in”.

Robert Wiblin: It seems like we should be able to contrive an example where half of all your possible actions would cause someone to die, so it’s not that narrow a set. And in that case you’d say, well, that wasn’t an action, even though it’s something that’s very foreseeable and you should just not allow it to happen.

Will MacAskill: Well, there’s a famous case of an uncle who wants to kill his baby nephew because he’ll get an inheritance by doing so. And there are two variants of the case. In the first, the uncle comes in and drowns the baby. In the second, the uncle comes in and sees that the child has already slipped and is drowning, and just waits over the child with his hand ready, in case the child stops drowning. But in fact he doesn’t need to act: the child drowns. And most people tend to think intuitively there’s just no difference there. That’s another way of putting pressure on the idea that the act/omission distinction is actually the important thing here.

Robert Wiblin: Yeah. Interesting.

Will MacAskill: I think there’s one final way out for the nonconsequentialist, which is that if your actions are doing enough good, and that might well be the case if you’re aiming to benefit the very long-run future, then plausibly they’re permissible. So it might be that your options boil down to sitting at home doing as little as possible, or instead going out and trying to make the long-run future go as well as possible.

Robert Wiblin: Because then you’re doing so much good as to offset the prohibition.

Will MacAskill: That’s right. So on the one side of the ledger, now I’m not driving just to get some milk, I’m driving to do some important altruistic thing. The negative is that you’ve killed hundreds of thousands of people; the benefit is that you’ve also saved hundreds of thousands of people. And you’ve not intended to kill those people, so it’s not a classic case of harm, like literally killing one person to save five others, or murdering someone you don’t like. So there are all the offsetting people you’ve saved, and also potentially this astronomical amount of good that you’re doing by engaging in longtermist activities.

Robert Wiblin: How very convenient. It’s almost as if you were trying to aim to convince people of this all along.

Will MacAskill: Rob, I don’t know what you’re talking about… I actually think that probably the nonconsequentialist should either just take it as a challenge where they need to alter their account of acts and omissions or perhaps be willing to go one step in the direction of consequentialism and accept ex ante Pareto.

Robert Wiblin: Okay. Yeah, makes sense. It seems like whenever, in moral theories, you try to create asymmetries or nonlinearities, you’re at risk of someone pointing out some odd case where that produces super counterintuitive conclusions. Do you think this is a general thing?

Will MacAskill: Yeah, absolutely. I mean, it’s quite striking in moral philosophy how many people who are consequentialists are classical utilitarians, which in a sense is a very narrow range within consequentialism. And I think the underlying explanation is that people who, as a matter of methodology, are sympathetic to the idea that theories should be simple are thereby led to prefer linearities over asymmetries, and continuities over discontinuities. That same principle, applied over and over again across a variety of issues, ends up leading you to classical utilitarianism.

Robert Wiblin: Yeah. Interesting. Why classical? Why focused on hedons rather than something else?

Will MacAskill: Well, I think that in the case of hedonistic utilitarianism, you have a clear boundary between what things are of value and what things aren’t: namely, those things that are conscious. And independently you would think that’s a pretty important dividing line in nature, the conscious things and the non-conscious things. If you’re a preference utilitarian, though, well, does a thermostat have a preference for being above a certain temperature? What about a worm, or a beetle? Where do you draw the line? It’s very unclear. Similarly, if you’re an objective list theorist, you think flourishing and knowledge matter. But does a plant have knowledge? It can flourish, it has health. Why does that not count? And normally you’re inclined to say, “Oh well, only for those entities that are conscious should you count whatever satisfies their preferences, or this thicker set of goods”.

Robert Wiblin: But then we’re back at a hedonistic account. Why don’t we just say the whole thing was hedons all along?

Will MacAskill: Yeah, exactly. Why is it this kind of weird disjunctive thing?

Robert Wiblin: “If you have consciousness, then a bunch of these non-conscious facts matter” is less intuitive than “if you have consciousness, then the consciousness matters”.

Will MacAskill: Yeah, exactly.

Robert Wiblin: Yeah, interesting. Okay.

The case for strong longtermism [0:55:21]

Robert Wiblin: So let’s talk quickly about this other paper you’ve been working on with Hilary Greaves, called “The Case for Strong Longtermism”. We’ve talked about longtermism a lot on the show and no doubt it will come up again in future, so we probably don’t want to rehearse all these arguments again or our listeners will start falling asleep. Is there anything new in this paper that people should consider reading it to learn about?

Will MacAskill: Yeah. In the paper we distinguish longtermism, in the sense of just being particularly concerned about ensuring the long-term future goes well, which is analogous to environmentalism being particular concern for the environment, or liberalism being particular concern for liberty, from strong longtermism, which is the stronger claim that the most important feature of our actions is their long-run consequences. The core aim of the paper is just to be very rigorous in the statement of that claim and in its defense. So for people who are already very sympathetic to this idea, I don’t think there’s going to be anything novel or striking in it. The key target is the various ways in which you could depart from a standard utilitarian or consequentialist view that you might think would cause you to reject strong longtermism, and we go through various objections one might have and argue that they’re not successful.

Robert Wiblin: Are there any kind of new counterarguments in there to longtermism?

Will MacAskill: I think there’s an important distinction between what philosophers would call axiological longtermism and deontic longtermism. Is longtermism a claim about goodness, about what the best thing to do is, or is it a claim about what you ought to do, what’s right and wrong? If you’re a consequentialist, those two things are the same. The definition of consequentialism is that what’s best is what’s like–

Robert Wiblin: Yeah. No wonder this distinction has never seemed that interesting.

Will MacAskill: Yeah. But you know, you might think–

Robert Wiblin: Something could be good but not required or–

Will MacAskill: Yeah. So perhaps it’s wrong for me to kill you to save five, but I might still hope that you get hit by an asteroid and five are saved, because that would be better for five people to live than one person to live, but it’s still wrong to kill one person to save five.

Robert Wiblin: So axiology is about what things are good and the deontology thing is about like the rightness of actions?

Will MacAskill: Yeah. Normative theory or deontic theory.

Robert Wiblin: Okay. So what are the two different longtermist things here?

Will MacAskill: So just axiological strong longtermism and deontic strong longtermism.

Robert Wiblin: Again, it’s just about like consequences versus like what actions are right.

Will MacAskill: Exactly.

Robert Wiblin: Alright. So we’ll stick up a link to that paper for anyone who wants to read it.

Longtermism for risk-averse altruists [0:58:01]

Robert Wiblin: Let’s move on to another, feistier paper that you’ve been working on with, in this case, both the previous coauthors, Hilary Greaves and Andreas Mogensen, called “Longtermism for risk averse altruists”, which I guess is in this long list of papers you’re writing that just makes me feel like I was right all along. These incredibly elaborate arguments you’re going through just make me feel very smug about how I was on the ball early on.

Will MacAskill: I mean, we’ve really not been aiming to just write a lot of papers defending longtermism. We all know the total utilitarian case for longtermism; the question is what happens if you modify some of these premises. And in some cases you might think, “Oh, this really does make a difference”. Risk aversion is one such case. I think it’s quite intuitive, or certainly something people say. Let’s say you’re deciding between funding some existential risk reduction intervention or funding some global health program. You might think, “Well, I know that I can do some amount of good by funding the global health program, whereas this existential risk thing seems very uncertain”. You might acknowledge that if you are successful, the consequences would be really, really big and it’d be really great, but it’s just so unlikely that you want a safe bet, because you’re risk averse.

Robert Wiblin: Yeah. I got this exact objection in an interview a couple of months ago, and I tried to explain why it was wrong, but it’s not super easy to do in words.

Will MacAskill: No. I mean, even though the core of the paper came from me, I got asked about it in Cambridge and, trying to explain it, I had to say, “Yeah, sorry, you’re just going to have to look it up, because I’ve forgotten myself”. So it actually does get quite tricky quite quickly.

Robert Wiblin: So the setup here is that some people are doing something that seems very reliable and safe, like distributing malaria nets, and other people are trying to prevent nuclear war. There’s this sense in which trying to prevent nuclear war is a risky thing to do with your career, because you’re almost guaranteed not to succeed, whereas you will distribute the bed nets and some lives will be saved. On the other hand, you can flip it around and say there’s this other intuition that if you’re risk averse, shouldn’t you be reducing big risks? So we’ve got this tension in intuitions, and I guess you try to more rigorously define risk aversion, about what exactly, and then work out what that actually leads to?

Will MacAskill: Terrific. So the first thing to distinguish is: is what I care about that I myself make a difference, or is what I care about that good things happen? If what I care about is myself making a difference, then absolutely: a standard account of risk aversion would say that you prefer the guarantee of saving one life over, let’s say, a 1% chance of saving 110 lives in the nuclear war example. Obviously it’s a smaller probability, but a larger amount of good. However, as an altruist, should you care that you make the difference? No, I think–

Robert Wiblin: It’s like, “Yeah, hundreds must die so that I can know that I made a difference”. It’s kind of the classic donor-focused altruism.

Will MacAskill: Yeah, exactly. I mean, it’s quite antithetical to what effective altruism is about. I think in “Doing Good Better” I mentioned the example of a paramedic coming to save someone’s life. They’re choking or something, they need CPR, and you push the paramedic out of the way so you can make the difference yourself. So, to make this clearer, imagine you’re just going to learn about one of two scenarios. In the first scenario, some existential catastrophe happens, but you saved dozens of lives yourself. In the second scenario, no existential catastrophe happens and you don’t do any good yourself. You’re just going to find out which of those two things is true. Which should you hope is the case? Well, obviously you should hope that no existential catastrophe and you not doing anything is what happens. But if those are your preferences, then your preferences aren’t about you yourself making the difference. What you actually care about is good stuff happening.

Robert Wiblin: Yeah. Although you might say there’s some trade off, where it’s like people would be willing to accept a somewhat worse world in order to have had a bigger impact themselves. I mean maybe people might concede that they have some like selfish desire to like be able to reflect on their own life and feel proud about what they did, even if the world is worse.

Will MacAskill: Okay, great. Yeah, so it could be that you’ve got what we might call impure altruism. So in part what’s driving me is a meaningful life or something, and maybe that’s tied to how much good I actually do, and there are diminishing returns to the meaning I get from doing more good. So a life where I’ve saved ten lives is perhaps just as meaningful as a life where I’ve saved a hundred lives, or very nearly as meaningful.

Robert Wiblin: We’re treating good done like income or something: I want to make sure that I make a decent amount of income, and I want to make sure that I do some amount of good.

Will MacAskill: Yeah. And again, if people are going out and actually using their money for good things and they say, “Yeah, well, it’s a mix of motivations”, that’s fine. It’s the case for all of us. But if we’re trying to defend this as, “No, this is actually the altruistically justified thing”, that’s different; that’s the moral philosophy argument. And then it seems very hard to justify the idea that, altruistically, what you should care about is how much good you personally do. Instead, what you should care about is just how much good gets done.

Robert Wiblin: Yeah. Is there anything that could be said in defense of that view from a philosophical stance? I suppose non-realism or nihilism or just like giving up.

Will MacAskill: Well, if it were just nihilism, then there’d be no case where you ought to do anything. Yeah, I think it’s pretty hard. I don’t know anyone who’s defended it, for example.

Robert Wiblin: I see. Okay. So if what they were saying is that they’re risk averse about their personal tangible impact, we’re going to say, “Well, that’s all well and good, but it’s not actually defensible in moral philosophy”. What about if they have a different understanding of risk aversion? Suppose they’re thinking about something more like risk aversion about the state of the world.

Will MacAskill: Yeah. So now, instead, you take what I think is the perspective you should take as a philanthropist, this impartial perspective. You’re just looking at different ways the whole world could go, and you’re risk averse with respect to that. So for the whole world, you would prefer a guarantee of the world getting to, let’s say, a hundred units of value, whatever that unit is, rather than a 50% chance of 210 units of value and a 50% chance of zero units. And that again is a perfectly coherent view. It’s not a utilitarian view, but it’s perfectly coherent. And there are a couple of different ways you could cash it out. So you could say it’s because value has diminishing returns: just in the same way as money has diminishing returns for you, the total amount of value somehow also has diminishing returns. People often don’t like to say that.
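
To make that concrete, here is a minimal worked version of those numbers, assuming one illustrative diminishing-returns function, v(x) = √x (my choice purely for the sketch, not anything from the paper). The guaranteed 100 beats the gamble even though the gamble has the higher expected value:

$$0.5\sqrt{210} + 0.5\sqrt{0} \approx 7.2 \;<\; \sqrt{100} = 10, \qquad \text{even though} \qquad 0.5 \times 210 + 0.5 \times 0 = 105 > 100.$$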

Robert Wiblin: I mean it has some odd consequences because then if there’s a flourishing alien civilization somewhere far away, I guess the world matters less because it’s like they’ve added all this welfare to the universe and so now all of our actions have just become less morally significant. I don’t think that that’s very intuitive.

Will MacAskill: That’s right, yeah. So the technical term for this is that it’s non-separable. So in order to decide what I ought to do, I need to know not just about the thing that’s right in front of me, but also just how many aliens are there, how many people in the past?

Robert Wiblin: Yeah, you might find that there were more people in the past and this then makes you more risk averse or something.

Will MacAskill: Yeah, it would make you take actions that are more risk averse seeming.

Robert Wiblin: Yeah. I guess it depends on the shape of the risk averse curve.

Will MacAskill: So that’s one thing you can do. But more promising is a different decision theory, which just cares about risk. Again, I hope I cite this correctly, but I think it’s rank-dependent utility theory, from Quiggin, and then a view that is formally the same but with a different philosophical interpretation, which is Lara Buchak’s “risk-weighted expected utility theory”. The idea is that you care about risk, and so each little increment of probability onto a possible outcome doesn’t count the same. Perhaps you take the square of the probability when you multiply it by an outcome. But here again, you don’t necessarily end up with the conclusion that you should prefer the bed nets over preventing nuclear war. And that’s because there are two sources of uncertainty that go into whether you’re going to do good by trying to prevent a nuclear war.
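
For concreteness, here is the standard rank-dependent form, a sketch of the Quiggin/Buchak structure as I understand it (the risk function r(p) = p² is just the example Will mentions, not the papers’ committed choice). Order the outcomes from worst to best and weight each increment of value by the risk-adjusted probability of doing at least that well:

$$\mathrm{REU} = u_1 + \sum_{i=2}^{n} r\Big(\sum_{j \ge i} p_j\Big)\,(u_i - u_{i-1}), \qquad u_1 \le u_2 \le \dots \le u_n, \quad r(p) = p^2.$$

With r(p) = p this telescopes back to ordinary expected utility; a convex r like p² underweights small chances of doing well, which is the risk-averse behaviour being described.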

Will MacAskill: There are two ways you can fail to have an impact there. One is if there was never going to be a nuclear war: in fact, we were going to achieve this glorious future whatever you did. A second way you could fail is if there was going to be a nuclear war, but your actions are ineffective at preventing it. If you’re just a standard expected utility maximizer, that difference doesn’t matter at all. But it does, surprisingly, if you’re risk averse. And the way to see that is this: suppose we get to this glorious future, the future’s really good, and then it also has this extra benefit, which is that someone’s life in a poor country was saved.

Will MacAskill: Well, you’re adding a bit of good onto what’s already an extremely good outcome and so that doesn’t contribute very much.

Robert Wiblin: Ah, okay. Nice! I haven’t read this paper, listeners, if you couldn’t tell, perhaps.

Will MacAskill: I thought Rob was faking it. You looked so interested.

Robert Wiblin: I was just like the penny slightly dropped.

Will MacAskill: And I thought is Rob acting or not?

Robert Wiblin: You’d never be able to tell, Will.

Will MacAskill: Yeah. Whereas if instead what’s happening is that we’re almost certainly doomed and it’s just that you could make the difference, then when you add that little bit of benefit you’re actually adding on to a bad world where we have very little total value and that contributes a lot.

Robert Wiblin: So it’s more valuable to save the life in the case where extinction is highly probable because the world as a whole is worse and so saving that person’s life is adding more moral value in some sense.

Will MacAskill: Yeah, exactly.

Robert Wiblin: Interesting. Okay. Then how does that play out? I guess it’s seeming like there’s going to be a bit of complicated math here to see exactly how this pans out.

Will MacAskill: Yeah. In any realistic situation, if I’m unsure about whether I’m going to have an impact by trying to prevent a nuclear war, I’m going to be unsure for both reasons: maybe there’s just not going to be a nuclear war, but also maybe anything I do is going to be ineffective. And in the paper we do a little bit of maths, throwing in some plausible numbers. So suppose you have quite extreme risk aversion, where rather than multiplying the probability of an outcome by its value to get its contribution to expected value, you take the square of the probability and multiply that by the value. That’s actually quite an extreme risk averse view.

Will MacAskill: Then if you think it’s more than 50% likely that we’ll get some really good future, being risk averse in this way ends up favoring extinction risk reduction. It is striking that risk aversion can make you favor working on the nuclear war. And one thing I should say is that this has all been premised on the idea that the future, if we continue to exist, is either neutral or positive in value.
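
Here is a minimal sketch of that calculation in code. The shape of the model follows what Will describes, but all the numbers (the value V of a good future, the baseline chance q of reaching it, and the effectiveness e of the nuclear-war work) are made-up illustrative assumptions, not figures from the paper:

```python
def reu(lottery, r=lambda p: p ** 2):
    """Rank-dependent / risk-weighted expected utility of a lottery.

    lottery: list of (value, probability) pairs whose probabilities sum to 1.
    Each increment of value is weighted by r(chance of doing at least that
    well); r(p) = p**2 is the 'square the probability' risk function.
    """
    lottery = sorted(lottery)              # worst outcome first
    total = lottery[0][0]                  # the worst outcome's value is guaranteed
    for i in range(1, len(lottery)):
        p_at_least = sum(p for _, p in lottery[i:])
        total += r(p_at_least) * (lottery[i][0] - lottery[i - 1][0])
    return total

V = 1000.0   # value of a flourishing long-run future (illustrative assumption)
q = 0.6      # chance the future goes well regardless of what you do (assumption)
e = 0.01     # chance your nuclear-war work averts an otherwise fatal war (assumption)

baseline   = [(0.0, 1 - q), (V, q)]
bed_nets   = [(1.0, 1 - q), (V + 1.0, q)]              # +1 life saved in every state
prevention = [(0.0, (1 - q) * (1 - e)), (V, q + (1 - q) * e)]

print(reu(bed_nets) - reu(baseline))    # +1.0: same as under plain expected value
print(reu(prevention) - reu(baseline))  # ~+4.8: vs +4.0 under plain expected value
```

With q above 0.5, the p² weighting amplifies extra probability mass placed on the good outcome (here the risk-weighted gain of the extinction-risk work, about 4.8, exceeds its plain expected-value gain of 4.0), which is the result Will describes; with q below 0.5 the same weighting dampens it instead.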

Robert Wiblin: It seems like it should be stronger–

Will MacAskill: But if we include the possible negative outcomes, the case for risk-averse altruists caring about longtermism gets stronger.

Robert Wiblin: But inasmuch as there’s a possibility that the world as a whole is just really bad, doesn’t that make the incremental benefit of saving one person’s life more valuable? I suppose it depends on how large the gap is between the best and worst cases that you could try to bring about in the long term.

Will MacAskill: Yes. So if your choice were just between reducing extinction risk and saving someone from malaria, then you’re right that that increment of benefit would count for more within a terrible future world, and so that probably would favor bed nets over reducing extinction risk, for sure. But there’s something else you could do, which is try to reduce the chance of terrible futures, and that’s still the longtermist thing to do. And that’s going to be overwhelmingly important. Any tiny decrease in the variance of the value of the future, or in how bad the worst-case scenario is, is going to be overwhelmingly important, and more important the more risk averse you are.

Robert Wiblin: I see. Okay. So this actually wasn’t the argument that I made when I was asked this question a couple of months ago. The reason I gave for saying that risk aversion doesn’t favor the bed net distribution was that this person was ignoring almost all of the effects of the action. They’re thinking, “Oh, it’s safe because I just saved a few lives”. But if they think about all of the ripple effects, all of the long-term indirect effects of that action, then in fact what they’ve done is incredibly unpredictable, and maybe it has very good effects or very bad effects. It’s actually a very risky action in a sense, just as trying to prevent nuclear war is very risky, because there’s such a high chance that you’ll fail and then there’s this small chance of a big benefit. Do you see where I’m going with that?

Will MacAskill: So when I heard you say that, I remember thinking it was just the response that I would also make. But I think it’s a different issue, because trying to prevent nuclear war also has all of these long-run unpredictable effects. So they’re both risky in that sense.

Robert Wiblin: Whether they create good or bad outcomes is extremely hard to foresee for both of them, and there’s a huge distribution, so they’re both very risky. It’s not the case that one is less risky in that sense than the other, so that kind of cancels out, and you should just go for the highest expected value.

Will MacAskill: Well, I was saying they’re both very risky, but I think you can still say this. Let’s separate the foreseeable effects and the unforeseeable effects. The foreseeable effects are one life saved vs. a one in a million chance of huge value. Then you can say, “Well, there’s a certain amount of riskiness that comes from the unforeseeable effects, and it’s the same for the nuclear war work as for the bed net distribution, so we can just isolate the foreseeable effects, which are less risky for the–”

Robert Wiblin: Yeah. Isn’t there a sense in which the work on the nuclear war thing is less risky cause there’s a 999,999 out of a million chance that you’re going to accomplish nothing and that nothing will happen.

Will MacAskill: But for any given action, you won’t accomplish nothing. You’ll have all of these unpredictable effects.

Robert Wiblin: Yeah. Maybe I’m just not thinking about this right. But it seems that kind of cancels out to me, but I guess you’re saying that the unpredictable effects are even larger in the nuclear case.

Will MacAskill: Well, no. I’m saying that in both the bed net case and the nuclear case, for anything we do and no matter what pans out, we’ve got this stream of unpredictable effects, and let’s say we have no reason to think the unpredictable effects from bed nets will be larger or higher variance than those from nuclear work. So in all cases we have this stream of unpredictable effects, and then, no matter what way the world is, we get plus one life saved from bed net distribution across all possible states. In the nuclear war case, we get plus 1,000,000 in the one in a million case and zero in all the others, and that just adds on top.

Robert Wiblin: I see. Yeah. Maybe there’s a sense in which with so much uncertainty or such a wide distribution to begin with, the incremental riskiness or the incremental variance that’s added by the foreseeable effects doesn’t feel as large. Maybe that’s something that’s going on?

Will MacAskill: Yeah. I’m now trying to remember exactly what the question was. I’ve been cashing out the underlying intuition in one way, in terms of risk aversion, which fits the argument “Oh well, I want to do what I’m confident does good”. But if instead it’s “No, I want to do something where I know I’m going to do good”, well, then that argument doesn’t work–

Robert Wiblin: Because there’s this high chance that they would do harm accidentally.

Will MacAskill: Yeah, exactly. There are just so many effects, and picking out one aspect does not reduce the uncertainty very much at all, and I think that’s intuitively quite important. Perhaps from a nonconsequentialist perspective as well, where we are responsible for all of those effects. If we’re just saying, “Well, yes, we care about the long-run future, we hope it goes well, and yes, these activities that have short-run benefits will have very long-run effects, but no, we’re not going to worry about that”, that seems wrong.

Are we living in the most influential time in history? [1:14:37]

Robert Wiblin: Alright. Let’s push on to this very interesting and kind of provocative blog post you wrote on the effective altruism forum back in September called “Are we living in the most influential time in history?”. In the post, you laid out a bunch of arguments both for and against the idea that we’re living at a particularly important time in history, and ended up concluding that people, at least in the effective altruism community, might be really overestimating the chance that this particular century is especially important in the scheme of things.

Robert Wiblin: So I thought this post was one of the best posts that’s been put on the effective altruism forum. Not to flatter you too much, Will. But in addition to that, I thought the comments section was amazing. There were just half a dozen comments where I’m like, these could be posts in their own right with new insights and then great responses. And people were also being extremely polite as well, even though they were disagreeing quite strongly.

Will MacAskill: Yeah, I loved it. I was really happy I managed to get Toby out of the woodwork and Carl too.

Robert Wiblin: Yeah. So I guess this is a pretty convoluted issue and we might get a little bit tangled up, but it would be good to talk about what you present in the post, then maybe work through some of the top comments, which I thought were also very insightful, and hear how you respond to them and where things stand now.

Will MacAskill: That sounds great.

Robert Wiblin: Maybe first, what do you mean by ‘the most influential time in history’, and why does this question matter?

Will MacAskill: I do think I’m running together two slightly different ideas that are worth picking apart, and if I wrote the post again, I probably would. One is just some intuitive sense of importance; we don’t even really need to define it. On certain views that are popular in the effective altruism community, like the Bostrom-Yudkowsky scenario that’s closely associated with them (though I don’t want to claim they think it’s very likely), there’s a period where we develop artificial general intelligence that moves very quickly to superintelligence, and basically everything that ever happens is determined at that point. Either it’s the values of the superintelligence, which can then do whatever it wants with the rest of the universe, or it’s the values of the people who manage to control it, which might be democratic, might be everyone in the world, might be a single dictator.

Will MacAskill: And so, just very intuitively, that would be the most important moment ever. And in fact there are two claims here. One is that there is a moment where almost everything happens, where most of the variance in how the future could go gets determined in one very small period of time; the second is that that time is now. So one line of argument is just to say, “Well, that seems like a very extraordinary claim”. We could try to justify that; there’s a question of spelling out what extraordinary means, but insofar as it’s a really extraordinary claim, we should have low credence in it unless we’ve got very strong arguments in its favor. Then there’s a second understanding of influential that is very similar, but different enough that it’s worth keeping separate, which is the point at which it’s best to directly use our resources if we’re longtermists. There the question is just: how does the marginal cost-effectiveness of longtermist resources vary over time?

Will MacAskill: And here again, the thought is: we should expect that to go up and down over time. Perhaps there are some systematic reasons for it going down; perhaps there are some systematic reasons for it going up. Either way, it would seem surprising if now were the time when longtermist resources are most impactful. And that question is relevant because it’s one part of, though not the whole of, an answer to the question of whether we should be planning to spend our money now doing direct work, or should instead be trying to save for a later time period, whether through financial savings or movement building.

Robert Wiblin: So I guess we can imagine centuries that are very intuitively important, but not important in the second sense, because, say, there was nothing that could be done. A lot of uncertainty gets resolved, but an extra person couldn’t have made any difference.

Robert Wiblin: Say we’re definitely going to use a random number generator to determine the future, and there’s nothing you can do to stop that from happening. So a lot of uncertainty is resolved when the random numbers are generated.

Will MacAskill: Yeah, exactly. And the second thing is that maybe no one could do anything about it. Into the second definition I also build in that maybe we just don’t know what to do, that we’re just not reliable enough. So perhaps at the turn of the agricultural revolution there were certain things you could have done that would in principle have had a very long-run influence, but no one at the time would have been able to figure that out, so the argument would go. In that second sense, I would not count that as being influential either.

Robert Wiblin: Okay. In the post, you use the term hinginess I guess to describe this second sense of importance where it’s like a person can make a big difference if they act at that time, but you haven’t used that word yet. Is that cause you’re reluctant to kind of pin people onto this terminology that maybe we want to get rid of?

Will MacAskill: Yeah, we haven’t settled on terminology yet. In one of the comments, Carl Shulman objected to the term hinginess because perhaps it just sounds a bit goofy, I guess. And so instead perhaps we should say leverage or something.

Robert Wiblin: Pivotalness or pivotality? Yeah.

Will MacAskill: Pivotality doesn’t really get across the idea that maybe we are really at a pivotal time, we just don’t know we are, we just aren’t able to capitalize on that.

Robert Wiblin: Okay. Alright. So to avoid entrenching some language, for this conversation maybe we’ll just use “importance” to describe the time when one extra person can make the biggest difference.

Will MacAskill: Okay.

Robert Wiblin: Cause I think we need some term.

Will MacAskill: Okay.

Robert Wiblin: Cool. Quite a lot of different people have thought that this century could well be one of the most important in that sense. Toby Ord is about to publish a book kind of making that claim, and Derek Parfit suggested it might be the case. And it’s not only people in effective altruism; lots of other people argue that this century could determine everything: climate change, war between the US and China, the fact that we have nukes now. There’s some sense in which it’s kind of intuitive. Do you want to lay out the arguments both for and against the hypothesis that the next hundred years is going to be especially important, or an especially good time to act to change the long-term future?

Will MacAskill: Yeah. Terrific. One clarification I’ll make as well is that this is not an argument for saying, for example, that existential risk is low. So the argument from Parfit and Sagan and others is, “Well, we’ve developed nuclear weapons; that’s ushered in a new time of perils where existential risk is much higher”. I could actually take objection to the argument from nuclear weapons, but I’ll put that to the side.

Robert Wiblin: We’ll come back to that.

Will MacAskill: But this would not be a particularly exceptional time if existential risk goes up and then stays up for a very long time, for many, many millennia or something. Or if it went up and then went up even further later.

Robert Wiblin: Or it goes up and there’s nothing anyone can do.

Will MacAskill: Or if it goes up and there’s nothing anyone can do, that would also be the case. So for this to be a particularly influential time, what Parfit calls the “hinge of history” (that’s why hinginess has come up), it needs to be the case that reducing existential risk is particularly high leverage now compared to other times. And I give a couple of arguments against this. One is that just on priors, especially if we assume that if we’re successful there will be a very large future, with extremely large numbers of people, it would be really remarkable, a priori extremely unlikely, that we happen to be, out of all that time, the most influential people ever. Secondly, given this low starting prior, how good is the quality of the evidence that should move us away from it? And there, there are two related issues.

Will MacAskill: One is what we might intuitively understand as the quality of the arguments. It’s not like we have empirical observations for this, or a deep understanding of some physical mechanism, that should really allow us to update very greatly. Instead it’s generally going to be informal arguments and informal models of how the future will go. That’s combined with the fact that I think we shouldn’t expect ourselves to be particularly reliable at reasoning about things like this; we certainly don’t have positive evidence that people can reason well about something like this. And that makes it hard, if you’ve got this very low prior, to have a correspondingly large Bayes factor, that is, an update on the basis of argument, that would move us to having, say, a 10% credence or more that we’re living at the most influential time ever.

Robert Wiblin: Yeah. Okay. So to recap that: you’re saying the future could be very long, so there are lots of potential future people and lots of potential future centuries. What are the odds that this one happens to be the most important out of all of them? We should guess that it’s low unless we have really good reasons to think otherwise. And then, well, do we have really strong evidence? We do have a bunch of arguments for this, but they’re not so watertight that we couldn’t explain their seeming compelling to us as just being an error on our part. It might seem like this century is especially important, but that could be an illusion, because we’re not very good at reasoning about this. So we always have this very plausible alternative explanation: whenever it seems like we’ve made a good argument that this century is really important, maybe we just don’t know what we’re talking about and we’re mistaken.

Will MacAskill: That’s right. If you’ve got a very low starting prior and then an even somewhat unreliable mechanism to move you from that prior, you don’t end up moving very much, because rather than you being at this extremely unlikely time, the mundane explanation, that we’ve just made some mistake along the way, is much more plausible. So I give an example: suppose I deal out a pack of cards and you see a particular sequence. If it’s a random-seeming sequence, you should update all the way from one in 52 factorial, which is about one in 10 to the 68, up to high credence that that’s the sequence that was dealt, and that’s amazing.

Will MacAskill: So you really can make huge updates from some very low priors. But if I deal a set of cards in perfect order, you should conclude that, well, probably the pack of cards was not well shuffled to begin with. Probably it wasn’t shuffled, in fact. So you question the underlying starting assumptions.
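
As a worked version of that update (with a made-up 1% prior that the deck wasn’t shuffled, and taking the chance of seeing perfect order from an unshuffled fresh deck as roughly 1): the mundane hypothesis absorbs essentially all of the posterior, because the likelihood of perfect order under genuine shuffling is astronomically small:

$$P(\text{unshuffled} \mid \text{perfect order}) = \frac{0.01 \times 1}{0.01 \times 1 + 0.99 \times \frac{1}{52!}} \approx 1, \qquad 52! \approx 8 \times 10^{67}.$$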

Robert Wiblin: Yeah. You’ve slightly jumped the gun here though. What are the object level arguments that people make that this century is especially important so we can maybe assess, “Do we have a really good grip on them? Are they compelling evidence?”.

Will MacAskill: Great. So I think there are two different sets. I distinguish between inside view arguments and outside view arguments. The inside view arguments are, for example, the view associated most prominently with Bostrom and Yudkowsky, but also more widely promoted, that we will develop AGI at some point this century and that that’s the most pivotal event ever, perhaps because AGI very quickly goes to superintelligence and whoever controls superintelligence controls the future. A second way in which the present time might be particularly influential is if we’re at this time of perils: we’re now at the point in time where there’s sufficient destructive power that we could kill ourselves, as in render humanity extinct, but before the time where we get our act together as a species and are able to coordinate and reduce those risks down to zero. So those are two… I call them inside views, though the distinction isn’t necessarily very tight. Then there are a bunch of outside view arguments too. So again, let’s assume that the future is very large, or at least that if we’re successful the future is very large, in the sense that there are vast numbers of people in the future.

Will MacAskill: Well, we do seem to be distinctive in lots of ways then. We’re very early on. We’re in a world with very low population compared to future populations. We’re still on one planet. We’re at a period in time where some people are aware of longtermism, but not everybody: a kind of Goldilocks state, you might think, for having an influence. So here are a whole bunch of reasons why, even without considering any particular arguments, your prior shouldn’t be extremely low; it should be considerably higher. And on that latter side, one bit of confusion in the discussion was what exactly we were using the word “prior” to refer to, where I’m referring to your ur-prior, your fundamental prior.

Robert Wiblin: That’s like before you’ve opened your eyes. Before you’ve seen kind of anything?

Will MacAskill: Yeah, before I’m even aware that I’m on Earth. The way I was thinking about it, it’s a function of the number of people: if I believe there are going to be a billion people ever, then I believe there’s a one in a billion chance of being the most influential person. Similarly, if I believe there are going to be a hundred trillion people, I believe there’s a one in a hundred trillion chance that I’m the most influential person.
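
In symbols, the ur-prior Will describes is just a uniform chance over everyone who will ever live:

$$P(\text{I am the most influential person ever} \mid N \text{ people ever live}) = \frac{1}{N}.$$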