Transcript

Rob’s intro [00:00:00]

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.

If not for Toby Ord I probably wouldn’t be doing what I’m doing today. He’s the one who recruited Will MacAskill, and the two of them then went on to get the ball rolling on the effective altruism and longtermism movements as we see them today.

I love Toby Ord’s new book, The Precipice: Existential Risk & The Future of Humanity. It’s now the first thing I’ll be giving people if they want to read a book that explains what I do and what 80,000 Hours recommends.

The book is out March 5th in the UK and March 24th in the USA, and in audiobook form. We’ll link to places you can get it in the show notes, or you can find out more at theprecipice.com.

Even if you think you know a lot about the ways civilization could go off the rails or how it might flourish more than we ever thought, there’s a tonne of new stuff to learn in this book — scientific details about each scenario, and new methods for sensibly analysing them.

Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected we got a great interview out of him and barely even had to work for it.

We start by talking about the ways things could go badly but get to how things could go amazingly at the end of the episode.

As for all our long interviews, we have chapters which you can use to skip to the section of conversation you want to hear, if your podcast software supports chapters. For example, this episode is coming out during a moment of serious panic about COVID-19 so perhaps you’d like to skip to the section called ‘biological threats’ that comes halfway through.

Finally, people have loved Arden’s contributions to the show so far and due to popular demand she’s back for this interview too.

Alright, without further ado, here’s Toby Ord.

The interview begins [00:02:15]

Robert Wiblin: Today, I’m speaking with Dr. Toby Ord, a moral philosopher at Oxford University. His work focuses on the big picture questions facing humanity. His early work explored the ethics of global health and global poverty and this led him to create an international society called Giving What We Can whose members have pledged over $1 billion to the most effective charities that they can find. He was also influential in the creation of the wider effective altruism movement and as a result has been a trustee of 80,000 Hours since its foundation. Toby’s advised the World Health Organization, The World Bank, The World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, the Cabinet Office, and the Government Office for Science. But today he’s here to discuss his new book, “The Precipice: Existential Risk and the Future of Humanity”, which makes the case that protecting humanity’s future is the central challenge of our time. Thanks so much for coming on the podcast, Toby.

Toby Ord: It’s great to be here, Rob.

Robert Wiblin: And we’re also joined again by my research colleague here at 80,000 Hours, Arden Koehler who will be finishing her PhD in philosophy at NYU any day now.

Arden Koehler: Rob, don’t say that out loud on air! It’s great to be here. I really enjoyed the book a lot. I liked how it combined a sort of rigorous empirical analysis of different risks that we face with a case for why we should take this stuff seriously.

Toby Ord: Thanks.

Arden Koehler: So usually we ask guests what they’re working on now and why they think it’s really important. But since you have just finished this book, I guess we know what you’ve been working on. So why don’t you tell us a little bit about the project and why you think it’s important?

Toby Ord: Sure. The book is called, “The Precipice: Existential Risk and the Future of Humanity”, and it’s about how humanity has been around for 2000 centuries so far and how long and great our future might be and how soaring our potential is, but how all of this is at risk. There have been natural risks of human extinction: things like asteroids that could potentially wipe us out as they have many other species. And there’s been this background rate of such risks. But with humanity’s increasing power over time, our rise of technological power, we’ve reached this stage where we may have the power to destroy ourselves leading to the destruction, not only of the present, but of the entire future and everything that we could hope to achieve. And this is a time that I call “The Precipice” and the book is therefore looking at extinction risks and other forms of existential risk.

Toby Ord: But it’s doing so because I’m inspired by the hope for the future and what we might be able to achieve if we can make it through this time. So the book covers the history of humanity, the potential future of humanity, the questions about the ethics of why we might care so much about our future and safeguarding it, and then also gets into the science behind the risks. It gets into interesting methodological and technical tools for thinking about these risks and then also policy questions and what individuals might be able to do or what humanity might be able to do about these risks and then what we could achieve if we get through this time.

What Toby learned while writing the book [00:05:04]

Robert Wiblin: So you and I spoke about the book or, I guess, especially the philosophy and ethics part of the book… I guess it was two and a half years ago, back in 2017; I think it was in episode six. You must have spent quite a lot of the last two and a half years researching for the book. Is there anything that you changed your mind about that’s significant?

Toby Ord: Yeah. One of the main areas concerns climate change. I thought this was the kind of thing where one could show fairly clearly that climate change can’t be much of an existential risk, that it could be absolutely terrible and something that’s very important to avoid and potentially with a risk of global catastrophe, but that it wouldn’t really pose much existential risk and over time I think that it’s harder to show that than I’d thought. We can get into that more later.

Robert Wiblin: Yeah, we’ll get back to the climate change section later. Is there anything that you learned that particularly surprised you that came out of left field?

Toby Ord: Yeah, actually one such thing was when looking at asteroid risk and the different ways we have of diverting asteroids from hitting the Earth, I was very surprised to learn that none of the methods actually applied to asteroids at the size scales that would threaten existential catastrophe. All of the conversation about gravity tugs and reflective methods, or nuclear methods and things, were all about asteroids that would cause local catastrophes rather than the global ones.

Robert Wiblin: Oh, so those other ones are just too big for those methods to work?

Toby Ord: Yeah.

Robert Wiblin: We’d need something else?

Toby Ord: We would need something else. That was a little bit alarming because a lot of the general public interest as well as my particular interest about the risk from asteroids or comets, concerns the ones that could cause an extinction threat. And yes, I was quite surprised to learn that the deflection techniques don’t really apply to those.

Arden Koehler: Yeah. Was it just because it was easier to try to address these smaller threats and so they went for that because it was somewhat close by in genre to the larger threats?

Toby Ord: Yeah, I mean I don’t really blame the scientists or engineers for this. I think they were just trying to work out how to deflect asteroids, and it turns out it’s harder to deflect the ones that are bigger. The ones that, for example, are 10 kilometers across are 10 times the size of the ones that are one kilometer across. Well, at least we say 10 times the size. But they’re a thousand times the volume and a thousand times the mass. And so techniques that involve changing the momentum of this thing are a thousand times harder. So it is just extremely difficult to do this. And also, the risk, or at least the probability of being hit by the bigger ones, is really much smaller than of these ones that could cause local catastrophes. So often people who are not particularly focused on existential risk see that the other kind of event might be 10,000 times as likely, and then they think it’s not at all unreasonable to spend a lot of effort trying to focus on that situation. But there’s a lack of communication about the fact that it only applies to one of these size ranges.
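
The cube-law scaling Toby describes here can be sketched in a few lines of Python. The density figure is a rough assumption for a rocky asteroid, not a number from the conversation:

```python
import math

def asteroid_mass_kg(diameter_km: float, density_kg_m3: float = 2_600) -> float:
    """Mass of an idealized spherical asteroid of the given diameter.

    The density is an assumed typical value for rocky asteroids.
    """
    radius_m = diameter_km * 1_000 / 2
    volume_m3 = (4 / 3) * math.pi * radius_m ** 3
    return density_kg_m3 * volume_m3

small = asteroid_mass_kg(1)   # ~1 km: "local catastrophe" class
large = asteroid_mass_kg(10)  # ~10 km: "extinction threat" class

# 10x the diameter means 1000x the mass, hence ~1000x the momentum
# change needed to deflect it.
print(f"mass ratio: {large / small:.0f}x")  # prints "mass ratio: 1000x"
```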

Robert Wiblin: Okay. Just to map out where I think the conversation will go: the rough structure; I think first we should talk about a bunch of specific existential risks. So nuclear war, climate change, that kind of thing. Then maybe zoom out and talk about existential risks as a whole. Then maybe we can push back a little bit on the idea that existential risk is particularly high or that we’re living in a particularly important time in human history: see what the counterarguments are. And then maybe close by thinking about what a good future might actually look like and what the social barriers might be to getting there. Is that good?

Toby Ord: Yeah.

Arden Koehler: Sounds good.

Estimates for specific x-risks [00:08:10]

Robert Wiblin: All right. So I wanted to start by going through this menagerie of potential threats that we face, because I think this is something that readers might really love about the book even if they know quite a bit about existential risk as a whole, and I imagine some listeners are fairly familiar with the general topic by this point. You really go through this list quite methodically, just describing the science and the history behind each one of them and trying to figure out how much of a threat they pose in reality, and I guess in a few cases actually kind of throwing a bit of cold water on them. Debunking them to some extent, or at least saying the risk is less than people might think, so it’s not all doom and gloom.

Robert Wiblin: Even though I know a lot about these topics, or at least a few of these, I still found that there were lots of fascinating little facts that I’m going to be throwing out in conversation over the next couple of years, I imagine. Hopefully people won’t realize that I’m just cribbing from your book. So yeah, you’ve got this really beautiful summary table. There’s a bunch of nice figures and tables in the book, but maybe my favorite was this table 6.1, where you’ve just got this column which has got a list of all of the different risks and then ‘Toby’s best estimate’ of the chance of it causing human extinction within the next hundred years, and a couple of figures are, I guess… Yeah, asteroid or comet impact: there’s about a one in a million chance in the next century. Supervolcanic eruption: 1 in 10,000. We’ve got stellar explosion, so supernovae and something wacky like that, would be one in a billion. But then we’ve got the bigger estimates that Toby gives for the anthropogenic risks. Like nuclear war is one in a thousand. Climate change is one in a thousand. Natural pandemics is 1 in 10,000. Engineered pandemics is a lot higher at 1 in 30. And we’ve got unaligned artificial intelligence which is way out there at 1 in 10. And then we’ve got kind of everything else, which I guess is about 1 in 30 or 1 in 20, which, if you add all these things together, comes to a total chance of us not making it through this century of one in six, which is an interesting, potentially alarming figure depending on what you thought before. Yeah. Do you want to talk through the one in six figure? How much would you stand by that? How seriously should people take that?
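
As a rough, hedged check on the table’s arithmetic: if you naively treat these per-risk estimates as independent, the combined chance of at least one existential catastrophe this century lands close to the headline one in six figure. The book’s total is a holistic judgement rather than this mechanical product, so this is only an illustration:

```python
# Toby's per-risk estimates from table 6.1, as quoted in the conversation.
risks = {
    "asteroid or comet impact": 1 / 1_000_000,
    "supervolcanic eruption":   1 / 10_000,
    "stellar explosion":        1 / 1_000_000_000,
    "nuclear war":              1 / 1_000,
    "climate change":           1 / 1_000,
    "natural pandemic":         1 / 10_000,
    "engineered pandemic":      1 / 30,
    "unaligned AI":             1 / 10,
    "other anthropogenic":      1 / 30,
}

# Chance of dodging every risk, assuming (naively) they are independent.
survival = 1.0
for p in risks.values():
    survival *= 1 - p

total = 1 - survival
print(f"combined risk: about 1 in {1 / total:.1f}")  # prints "combined risk: about 1 in 6.2"
```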

Toby Ord: Sure. So with all of these numbers I should say that, when I go through the risks in detail and the science behind them, I try to give the scientific numbers. The numbers you can stand by. So, for example, what the asteroid experts say is the probability of the Earth being hit by an asteroid greater than 10 kilometers across within the next hundred years. These types of numbers. But then there’s often a lot of uncertainty about what actually would happen if we’re hit by an asteroid of that scale, or, if one was detected, whether we would be able to work out some way of deflecting it and could survive. What if we stockpiled food? What if we did this and that? And so there’s a lot of uncertainty that comes in for all of them, even something as well characterized as asteroid risk. So the numbers that I give in this table are a place where I’ve tried to kind of cordon off my own subjective estimates of these things, but I felt that it would be almost irresponsible of me to write an entire book about this and to only talk about what I think about it in qualitative terms. To say, “I think this is a serious or severe risk” without actually explaining: do I think that’s a one in a million risk that’s still worth taking really seriously? Maybe like the risk that you die on the way to the shops in your car, the kind of risk where you put on a seatbelt and take actions to avoid it. Or is it a risk that’s much higher? So I tried to give these order of magnitude estimates as to how much risk I think there is from these different areas. But it’s not necessarily the case that if you read the book you’ll feel compelled to accept these numbers. It’s not that I think they’re an accurate summary, or that the two pages I spent explaining a risk would force you to this number, but rather I figured that the reader probably wants to know what I think about these things.

Toby Ord: So the one in six risk, in particular. Yeah, I think that this is my best guess as to the chance that humanity fails to make it through this century with our potential intact. So either because we’ve gone extinct or because there’s been some kind of irrevocable collapse of civilization or something similar. Or, in the case of climate change, where the effects are very delayed that we’re past the point of no return or something like that. So the idea is that we should focus on the time of action and the time when you can do something about it rather than the time when the particular event happens.

Arden Koehler: So the time of no return would be something like warming or climate change has gotten so bad that even if it doesn’t cause us to go extinct now, it might in the next few centuries or it’ll cause the collapse of civilization and we won’t recover or something like that.

Toby Ord: That’s the rough idea.

Arden Koehler: Okay.

Toby Ord: And you can think of that, say, in the case of an asteroid as a nice clear example. That it would be the last time where you could have launched a deflection program or the last time when if you’d started saving and stockpiling food, that there would have been enough or that you could launch a program to develop food substitutes or whatever the thing is. But that’s often the critical time and actually, on my definition of existential risk, that’s when the existential catastrophe happens. The point where we lose our potential rather than the point where people are killed or something else. And so one in six is my best guess as to the chance this happens. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?

Toby Ord: My best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.

Arden Koehler: Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?

Toby Ord: That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, I make it an unconditional one. In some sense, it’s difficult to define estimates that take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means: your best guess of the chance something happens. Whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory is just quite a bit more unclear as to what you’re even talking about.

Arden Koehler: Yeah, and I think we can get into some more detail about this later or more specifics later, but I am curious… Okay, so you think basically because of efforts that people will make to reduce risk, we will approximately halve it from what it would be if we had just followed business as usual. What kind of efforts are you imagining and why do you think we’re going to make that kind of effort?

Toby Ord: Sure. I think that if you take the two risks that I think are the highest, which are risks from unaligned artificial intelligence and risks from engineered pandemics: in both cases, as these technologies get more mature… These are not things that I think are going to happen next year. I don’t really think that they could happen next year, but I think that they could well happen over the next hundred years. And as the technologies get closer and we see signs that are impossible to ignore about the power of these technologies, and there are certain kinds of near miss events where we really witness the power of an uncontrolled version of this thing, these are probably going to wake us up to some of these things. And even before that, hopefully the world will get woken up to these things by people in this community concerned about these risks.

Toby Ord: And I think that the arguments are actually both strong in a scientific sense and also very compelling if they’re done right. So I really do think that everyone can take these ideas seriously. Historically, in the existential risk community and within effective altruism, existential risk is often talked about in a fairly nerdy kind of way, a very ‘mathsy’… Very much if you think about the two cultures of science and humanities, very much in the sciences culture. Talking about things like, you know, maybe even if there’s only a one in 10 to the power of 20 chance of existential risk, you know, if it saves an expected 10 to the power of 15 lives or something like this. But I don’t think that one needs to talk about it like that. And I think that you really can make a compelling case to everyone that the potential destruction of everything that they value, of all cultural traditions that they’ve ever strived to protect, and every bit of potential for all the good that they could create in the future, that the destruction of this would be bad and obviously bad, and that this case is also quite compelling. And so I think that when that’s all realized, if I’m right that these risks are large at all, then that will become more obvious and people will react.

Arden Koehler: It seems like this book is an attempt to help that process along. You use a lot of sort of moving and even sort of lyrical language to try to really make vivid what’s at stake.

Asteroids and comets [00:16:52]

Robert Wiblin: Yeah, so I guess you open the section on the specific risks talking about asteroids and comets, I guess because it’s one of the best characterized ones. I don’t think we’ve actually mentioned them basically at all on the show in the 70 or so episodes that we’ve had so far. What is the threat, and how’s the likelihood figured out here?

Toby Ord: Well, astronomers divide up the risk from asteroids based on the size of the asteroid. There are those that are greater than 10 kilometers across, which is about the size of the one that killed the dinosaurs. They’re considered extinction threats, although it’s still a bit unclear what the actual probability of extinction would be were we hit by one. But that’s one size category, and they think there are four of those in near-Earth orbits and that we’ve found them all. They’re not a hundred percent sure they’ve found them all, but it’s been a long time since we last found one, and we can scan most of the sky and they should be relatively easy to see. There are also asteroids that are between one and 10 kilometers across, which they think could cause some kind of severe global catastrophe. Maybe the ones towards the higher end of that range could cause an existential catastrophe. They think there are about 920 of those, and they’ve found about 95% of them; they work out how many they haven’t found by looking at what’s happening to the rate of new detections over time and so on. And the risk from that ends up being about a one in one and a half million chance, in an average century, of being hit by one that is 10 kilometers across. But in the next century, it’s much lower than that, because we have detected them and we’ve basically plotted the trajectories and they’re not going to hit us in the next century. So the risk is at most a hundredth of that.

Robert Wiblin: Yeah. Interesting. Okay, so asteroids we’ve kind of understood. We scan the sky. We’ve found most of them. Or at least we think we’ve found the big ones. What’s the deal with comets? Because they go much further out. They have weird orbits. Are they harder to see? Is that right?

Toby Ord: Yeah, I would say a lot of things are worse with comets. The standard thing that NASA will tell you is that comets are about a hundred times less likely to hit us. That is true. But it turns out that there are relatively more bigger comets and fewer smaller comets, so there’s a different power law that they’re characterized by. And so when you’re looking at the very biggest comets, say the ones that are 10 kilometers across, the raw probability is actually similar to the asteroids. But then they’re worse in a number of other ways. They’re often traveling faster relative to the Earth which, if you know, non-relativistic kinetic energy is E = mv²/2, means the velocity makes a big difference. And they’re also harder to detect and harder to deflect, because they basically come on these really elliptical orbits where they’re just coming straight at us from the distant reaches of the solar system, and we’d have to intercept them and do something about them while they’re diving straight towards us. So there are a lot of challenges with them. And astronomers were right to focus on asteroids first because they are a bit more common, even at relatively big size ranges. But I think that now they’ve done such good work on the asteroids, maybe it’s more important to actually start characterizing the comets.
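
The velocity point can be made concrete with the kinetic energy formula Toby cites. The 20 km/s and 50 km/s impact speeds below are rough typical values assumed for illustration, not figures from the book:

```python
def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Non-relativistic kinetic energy, E = m * v**2 / 2."""
    return mass_kg * speed_m_s ** 2 / 2

mass = 1.0e15  # the same hypothetical impactor mass for both cases, in kg

asteroid_E = kinetic_energy_joules(mass, 20_000)  # assumed ~20 km/s asteroid impact
comet_E = kinetic_energy_joules(mass, 50_000)     # assumed ~50 km/s comet impact

# Energy grows with the square of speed: (50/20)**2 = 6.25.
print(f"energy ratio: {comet_E / asteroid_E:.2f}x")  # prints "energy ratio: 6.25x"
```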

Robert Wiblin: And I guess that involves looking out into deeper space and getting good at finding these dim objects?

Toby Ord: Yeah, I mean I don’t really know exactly what they should do. It’s not that I have some simple advice to them to say, “No, no. Switch your telescopes to look at a different point in the sky”! There are certainly big challenges, and they might need radically new techniques in order to deal with it, so I would suggest that they should devote some time to blue-sky thinking about are there really different approaches that would actually let them detect these things? If everything depended upon stopping one of these things, sure it looks difficult, but are there any kind of novel ways that we could understand them better or try to do that?

Robert Wiblin: Yeah. So I guess we haven’t completely fixed the comet problem, but on asteroids it sounds like this was actually one thing where we very quickly got our act together. Maybe because it was cheap enough that one country could basically take this on. Yeah. Do you want to talk about that? I was impressed by how quick was the thing from discovering that asteroids are a problem to actually just finding most of them.

Toby Ord: Yeah. I mean this was another big surprise for me when writing the book. The whole idea of asteroid impacts is strikingly recent. So it was only in 1960 that it was scientifically confirmed that asteroids are what cause impact craters.

Robert Wiblin: What did they think before that?!

Arden Koehler: That’s crazy.

Robert Wiblin: It could be volcanoes?

Toby Ord: Yeah, so volcanoes also cause craters. And so they thought maybe it was some kind of geological activity that produced these things. There was debate about it. People had already thought that meteorites, these small rocks, fell from the sky. But they weren’t sure that there were big enough ones to create craters. They’d never been observed happening. And so in 1960, they confirmed that. Then in 1980, with the Alvarez hypothesis, the father and son team worked out that impacts could cause mass extinctions: that that’s what caused the extinction of the dinosaurs. So that’s 1980; I was one year old when that research was done.

Arden Koehler: So before that they weren’t sure what caused the extinction of the dinosaurs?

Toby Ord: No, and while I was a kid, it took a while to filter down to the primary school level as well. It was an interesting hypothesis, and while it’s still not totally certain, it’s looking quite a lot like a smoking gun. But yeah, it’s quite recent. And then after that, the community really got their act together and they approached Congress in the US and had bipartisan support for creating a spaceguard program. And then a couple of events happened which really helped to cement public interest. There was comet Shoemaker–Levy 9, the thing that crashed into Jupiter.

Robert Wiblin: Oh yeah, I think I remember that from my childhood!

Toby Ord: Yeah, exactly. And it left a mark, I think, the size of the Earth in the clouds of Jupiter.

Arden Koehler: That is poignant.

Toby Ord: Yeah, because they saw one of these things happen, it was in the news, people were thinking about it. And then a couple of films, you might remember, I think “Deep Impact” and “Armageddon” were actually the first asteroid films and they made quite a splash in the public consciousness. And then that coincided with getting the support and it stayed bipartisan and then they have fulfilled a lot of their mission. So it’s a real success story in navigating the political scene and getting the buy-in.

Arden Koehler: So that does seem strikingly successful compared to at least what I would guess is going to happen with many of the other existential risks that we’re going to talk about. How optimistic are you that at least for some of these others, we might be able to apply some of those lessons and have something similar happen?

Toby Ord: Yeah, I think not that optimistic.

Arden Koehler: What’s different about asteroids and comets?

Toby Ord: Yeah, I mean part of it, as I kind of suggested from that is that they got a bit lucky in terms of these films and the interest from that and this natural event with Shoemaker–Levy. It was, though, something that seemed very out there prior to that. Now those of us studying existential risk treat this at least as a fairly well understood example, a kind of poster child for something that people agree would cause an existential catastrophe if it happened. And that, you know, is less tendentious than some of these other types of arguments. But it’s still a pretty weird thing to be thinking about and they would have had to overcome those challenges. So there’s some hope from that.

Robert Wiblin: Are there any sort of asteroid deniers? I think that’s one benefit. The physical mechanism of a rock smashing into the Earth is sufficiently clear and indisputable that you can’t really have a movement that’s like, “No, we’re wasting our money discovering all the asteroids”.

Toby Ord: I guess that’s right.

Robert Wiblin: Yeah. I guess particularly compared to novel risks from AI or biotechnology that we’ll get to. It’s much easier to get everyone on board.

Arden Koehler: Although with bio at least and AI, it seems like there is a decent amount of media attention. There are movies that make them seem like existential risks. I think people don’t usually draw that sharp distinction between extinction and just very terrible, horrible catastrophe. But anyway, there’s at least some of that going on for some of these other risks.

Toby Ord: That’s right.

Supervolcanoes [00:24:27]

Arden Koehler: Yeah. So one thing that was surprising from the book for me was how high the risk is from supervolcanoes. So it seems like it was something like one in 200 this century and that seemed really high. It’s also a sharp risk because apparently we wouldn’t really be able to tell that it was coming, which makes it even more scary. So in all, it seems like maybe one of the most serious natural risks, but I don’t feel like people talk about it very much. So I was surprised by that. Why do you think people don’t talk about it very much?

Toby Ord: Yeah. Well there’s a few things going on there. So a supervolcano is, rather than a kind of cone towering above the Earth, the type of volcano that’s so big that when it erupts, it produces more than 1000 cubic kilometers of molten rock that pours out of it. So that’s the threshold for being a supervolcanic eruption. Yellowstone Caldera is the most famous example of this. And the one in 200 is the chance of an eruption that passes this threshold within the next century. But that probably wouldn’t kill us, as evidenced by the fact that we have survived 2000 centuries so far, and so my overall number for the chance of existential risk from supervolcanic eruptions is about one in 10,000 over a century. As to why this isn’t more well known, I think the really striking question is: why are asteroids so well known? They’ve both only really been discovered very recently. And there was a bit of controversy a while back, roughly 20 years ago I think, to do with the Toba supervolcanic eruption, which was about 74,000 years ago. It seemed to line up with a fingerprint in the human DNA evidence, which suggested that there was a genetic bottleneck at about the same time. So people thought that maybe humanity was nearly wiped out by this thing. But as people have looked into that more, it appears that the times don’t quite line up, and people are more skeptical that it could have caused this kind of devastation. Ash from these eruptions does rain down on the scale of a continent, and is potentially found on other continents as well. They are very big. But it’s still unclear how they could cause our extinction.

Robert Wiblin: So they’d kind of produce a nuclear winter. So it’s a supervolcano winter and that’s the threat. That things start collapsing because there’s not enough food.

Toby Ord: Yeah, that’s right actually. And it’s interesting that you mention it. So the mechanism from supervolcanoes and from asteroids and from nuclear war… The main mechanism for causing existential risk is via a nuclear winter or volcanic winter or asteroid winter, where particles get up into the stratosphere, so high they can’t be rained out, and then they cause global cooling: cooling, darkening and drying. But it’s the cooling that’s the main one, because it shortens the growing season for crops. So that’s the main concern. And interestingly, for all three cases, it is a form of climate change and it is mediated by atmospheric science, which is the subject that studies this. So if you look at the size of these asteroids, 10 kilometers is very big. It’s the size of a mountain. But it’s very small compared to the Earth. And the kind of image you might have in your mind of the asteroid ploughing into the Earth: it is more of a pinprick than two things of similar size colliding.

Robert Wiblin: But so much dust.

Toby Ord: Yeah. But it is the dust. I guess the types of people who deny climate change could also deny that there would be these dust effects. And in the case of nuclear winter, there was a lot of denial of this: a lot of pushback that Carl Sagan and others received on the theory, partly because it was a politicized issue, somewhat like we’re seeing with climate change. So one could push back on supervolcanoes and asteroids for the same reason, but you don’t see that so much because it’s not politicized.

Arden Koehler: It’s interesting that so many different risks share this same mechanism. It suggests that one of our biggest vulnerabilities is our atmosphere, or our access to sunlight.

Toby Ord: Yeah. That’s right. And there’s a useful way of thinking about this, which is to ask, once there’s some kind of event, is the event so big that it would just obviously destroy the Earth? If an entire planet crashed into the Earth, for example, it’d be pretty obvious how it gets big enough. But in other cases, there’s this question of how it scales up to be something that could threaten us all. How does it get everywhere, to all the humans and so on? In the case of all of these things, what happens is that the atmosphere is what takes a thousand cubic kilometers of rock, or what have you, and distributes it in such a way as to create this opaque layer around the Earth. And without the atmosphere doing that, it would be more of a regional catastrophe. The atmosphere is also important in climate change, where temperature changes, and their potential effects on crops, matter as well. So there are actually quite similar things about some of these natural catastrophes, and even some anthropogenic ones, that are quite interesting.

Robert Wiblin: I wonder if the reason that supervolcanoes are less known is that they just make for a worse movie plot. With an asteroid, you have this lovely property that you find it way in advance, everyone freaks out, and then you’ve got a story where you go and try to intercept it. But with a supervolcano, in reality, it would happen very unexpectedly and very suddenly. So I guess it’s a survival movie, where people are trying to minimize the number of people who die in the disaster, but that’s less cool than going and, like, blowing it up.

Toby Ord: That could be right. Definitely with “Armageddon” versus “Deep Impact”, they went with the more machismo approach. You could probably make just as interesting a movie, but there’s still something about supervolcanic eruptions that seems faintly ridiculous to me. I’m not sure they’re any more ridiculous than asteroids, or that this gives me any reason to doubt the science. But in terms of that question of whether things just seem too weird, it’s interesting that this is an example where I still feel it’s weird. I have a little bit of trouble taking it seriously.

Arden Koehler: Can you tell which part of it is weird or that is making you feel like it’s just ridiculous? Or is it hard to say?

Toby Ord: I guess the name wasn’t great. ‘Supervolcano’ sounds a bit comic book. And when you think of a volcano, you think of, you know, Vesuvius, and it’s hard to jump from that to destroying the whole world. Whereas if they just had a totally separate name, a different kind of geothermal activity that causes a different kind of destruction, maybe it would seem a bit more normal and you’d think, “Oh, I guess that’s interesting”.

Arden Koehler: I’m curious why it would be hard to tell if a supervolcano was coming? It seems like, extremely naively, we do have access to the parts of the Earth that are going to be changing leading up to this, so why would it be so unexpected?

Toby Ord: I think the answer to that is mainly that it’s unprecedented. Suppose we discovered a sharp increase in geothermal activity at Yellowstone or something, and our best detectors showed that there were large amounts of magma moving under the surface and so on. What would we say about that? We don’t have a track record correlating large amounts of magma moving under there with how long it takes, or what the chance is, for that to lead to a supervolcanic eruption, because we have witnessed zero of them. The information we do have comes from the debris scattered by previous eruptions, and from finding the calderas and trying to investigate them, but we don’t have access to what the precursor signs were. So if we saw something really striking happening, maybe we’d think there’s at least a 10% chance something really bad is going to happen over the next century. But maybe it would happen in one year, or maybe in 70 years, and we’d have very little idea. Whereas with asteroids, once we detect them, it’s just high school physics to calculate how long it will be before one hits the Earth.

Robert Wiblin: Yeah. So David Denkenberger, in episode 50, has a list as long as your arm of various engineering ideas. The ones we spoke about in the interview were about how we would feed people through a volcanic winter, but he’s also got a bunch on how you would stop a supervolcano from erupting if you thought it was going to, including just building a mountain on top of it. Apparently no one has really investigated that in any depth to see whether it would work, or whether it might just increase the risk, because then you’re bottling the thing up and it’s even worse. I suppose that’s something governments could potentially fund, because I don’t think there’s been much work on it.

Toby Ord: Yeah, it’s an interesting question. One thing I’m worried about with it is the chance of making it worse, because just the baseline risk is very low, and we know it to be low from our survival. So it seems that if we then do something that may make it better or may make it worse, we’re starting from such a low level that I’m not sure that we’d want to be taking those risks.

Robert Wiblin: Yeah, so if it was a one in 200 chance of making it worse or something like that.

Toby Ord: Yeah, then maybe I’d probably be pretty happy about it. But yeah, you see the problem there, as well as the political risks: there could be an asymmetry between acts and omissions in terms of the political ramifications of intervening in supervolcanoes.

Threats from space [00:33:06]

Robert Wiblin: Yeah, that makes sense. All right, so another one that you go into, which I knew very little about, is threats from space. So we’ve got supernovae and things like that. What things can come at us from space and why do we think it’s basically a negligible risk in practice?

Toby Ord: Yeah, so sometimes stars explode. Supernovae: we’ve known about those for a very long time, since Chinese astronomers catalogued new stars appearing in the sky. But it’s only quite recently, in the last hundred years, that we’ve realized this could actually be a risk to humans if it happens close enough. The idea is that the radiation would cause chemical reactions in the atmosphere, producing nitrogen oxides, which would severely damage the ozone layer, which would cause extra UV exposure. That’s the best-known mechanism. But extra UV exposure–

Robert Wiblin: Doesn’t sound that bad.

Toby Ord: It doesn’t sound that bad. Supernova sounded really bad.

Robert Wiblin: Just stay indoors!

Toby Ord: But the actual case is a supernova happening many light years away; we’re not talking about our sun turning into a supernova. That starts to sound less clearly bad, and then the mechanism doesn’t sound that bad either. What about shielding and so on? Stay indoors; maybe you need to wear protective suits to do farming or something. And at the moment, actually, most farming is done by automated tractors, guided by GPS and so on, that just work night and day and don’t have humans out there.

Arden Koehler: They don’t have humans in tractors anymore?

Toby Ord: At least I think not in the UK and the US.

Arden Koehler: I didn’t know that.

Toby Ord: Yeah, they can work all night as well because they don’t need to see. So I think the probability of this happening is very low, and the mechanism doesn’t sound that plausible. And the risk would be from a supernova not of just any star out of the 100 billion in the Milky Way, but of a star within about 30 light years, say one of the closest thousand stars, turning supernova. And none of them look like they’re nearing the stage of their life where they might do that. The other kind of risk you were alluding to is gamma ray bursts. That’s something that was only discovered very recently. In the Cold War, the Americans developed satellites to detect the gamma ray flash of a nuclear test, to see if other countries, particularly the Soviet Union, were doing nuclear tests. And then they found their detectors going off a lot, and when they tried to work out what was happening, they realized it couldn’t possibly be coming from the Earth. They had discovered these gamma ray bursts, which were happening in other galaxies, and the radiation was so intense that we could detect it here; I think there was a case of one billions of light years away being detected.

Toby Ord: So a gamma ray burst can be triggered by either a rare type of supernova or two neutron stars crashing into each other: fairly exotic phenomena that don’t happen very often but can be felt a long way away. Very roughly, it’s about the same amount of energy released as in a supernova, but instead of being released spherically symmetrically, it’s released in two cones at the poles. So it could reach much further, because it’s more intensely concentrated, and it can reach from other galaxies. Well, be detected from other galaxies; it couldn’t kill you from other galaxies. I looked into it in some detail, and there was a lot of concern about this cone angle business: that if the angle is very narrow, then it could get you from very far away.

Toby Ord: But it turns out, if you actually do the maths, that the volume it irradiates at a lethal dose is exactly the same regardless of the cone angle. It’s just a narrow thin region versus a thick wide one. So the cone angle is a bit of a red herring, I think. And it ends up irradiating a similar volume to that of a supernova, which is not that large a region, and therefore it’s extremely unlikely to cause an extinction event even in the whole time that the Earth will be habitable.

Estimating total natural risk [00:36:34]

Arden Koehler: So abstracting a little bit away from these particular natural risks, you also have a way of estimating the total natural risk. Do you want to just tell us a little bit about what that is?

Toby Ord: Sure. The basic idea is like this. We have this catalog of risks that we’ve been building up: these things we have found that could threaten us, a lot of which we only found in the last hundred years. So you might think, “Well, hang on, what’s the chance we’re going to find a whole lot more of those this century or next century?” Maybe the chance is pretty reasonable. It would be interesting to plot these by when they were discovered (maybe some enterprising EA should do that) to see if it looks like we’re running out of them. I don’t think there are particular signs that we are. But there is an argument that can actually deal with all of them, including ones that we haven’t yet discovered or thought about, which is that we’ve been around for about 2000 centuries as Homo sapiens; longer, if you think about the genus Homo. Suppose the existential risk per century were 1%. What’s the chance that you would get through 2000 centuries of 1% risk? It turns out to be really low, because of how exponentials work: you’d have almost no chance of surviving that. So this gives us an argument that the risk from natural causes, assuming it hasn’t been increasing over time, must be quite low. In the book I go through a bit more of this, and there’s a paper I wrote with Andrew Snyder-Beattie where we go into a whole lot of mathematical detail. But basically speaking, with 2000 centuries of track record, what we can learn is that the chance ends up being something less than one in 2000 per century. And this applies to the natural risks.
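[The exponential argument here can be sketched numerically. This is a minimal illustration, not Toby’s actual model: the function name is ours, and it assumes a constant, independent risk per century, which is the simplification the argument itself uses.]

```python
# A numerical sketch of the track-record argument, assuming a constant,
# independent extinction risk per century (a simplification).

def survival_probability(risk_per_century: float, centuries: int = 2000) -> float:
    """Chance of surviving `centuries` consecutive centuries of fixed risk."""
    return (1.0 - risk_per_century) ** centuries

# At 1% risk per century, surviving 2000 centuries is essentially impossible:
p_at_one_percent = survival_probability(0.01)      # about 1.9e-9

# At one-in-2000 per century, survival is unsurprising (about 37%, roughly 1/e),
# so our track record can't rule out risks at or below that level:
p_at_one_in_2000 = survival_probability(1 / 2000)  # about 0.37
```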

Arden Koehler: So the basic idea is that nothing’s really changed when it comes to supervolcanoes or asteroids in the last 2000 centuries. So given we’ve survived this long, we shouldn’t expect more than roughly a one in 2000 chance of witnessing something like that this century.

Toby Ord: That’s right. If anything’s changed, it’s that we’re more robust against them.

Arden Koehler: Yeah, so at most.

Toby Ord: We spent about 130,000 of the last 200,000 years in Africa, so we were just on one continent. If there was something that could have changed the climate of a continent, maybe we would have been vulnerable. Now we’re spread across many continents, we have many more different types of crops that we use, and many new technologies and so on that seem to make us more robust. So if anything, it seems like the chances are decreasing. We’re becoming less vulnerable.

Robert Wiblin: Yeah. You write in the book that this doesn’t fall foul of the anthropic concern that if we’d been wiped out by one of these things, then we wouldn’t be around to see it and make these estimates. But I’m not sure I completely understood why there’s not a big adjustment from that.

Toby Ord: Yeah. So I think what you’re getting at is that someone might say, “Well, the chance can’t be that high or we wouldn’t be here”. And then you can reply, “Hang on a second. If you imagine there were a whole lot of different planets and the chance was high, the only survivors would be on the planets that happened to get lucky, so this argument would be misleading them”. Maybe our Universe is like that. But is that the kind of thing–

Arden Koehler: We’re the lucky ones.

Robert Wiblin: Yeah. So you can imagine that you start with thousands of planets with humans on them. And then no matter what the risk is, there’s always the people on the remaining ones going, “Well, the risk is really low”! And then like 99% of them get wiped out every time.

Toby Ord: That’s right. So here’s the kind of thinking, though. Nick Bostrom and Max Tegmark have a great paper on this, where they were looking at the risk of vacuum collapse, which is a risk I only touch on for about a sentence in the book. The idea is that there’s some chance the vacuum in our universe is not the lowest-energy vacuum, and that it could decay to the lowest-energy vacuum, producing an explosion that travels out at the speed of light, changing all of the vacuum to this new state and creating huge amounts of energy.

Robert Wiblin: I really feel like I should google this and figure out what the hell people are talking about when they talk about vacuum collapse. Yeah, anyway, we shouldn’t get distracted by that.

Toby Ord: Well, if you’re thinking that the issue is that if there’s nothing, how can it collapse–

Robert Wiblin: Right, how does nothing collapse?

Toby Ord: Well, the idea is that the vacuum is not nothing. It’s a low energy state, but not the lowest energy state and so there is some amount of energy that can go down further.

Robert Wiblin: But if a little bit of it goes to the lower level of energy, why does that create an explosion of lower energy? Maybe we need a physicist.

Toby Ord: Yeah, you might want a physicist. The idea is meant to be like a crystal in a supersaturated solution where when you have this–

Arden Koehler: All clear to me now!

Toby Ord: Yeah, let’s leave it there…

Robert Wiblin: Yeah. Alright, that was a diversion. You were saying that there’s this paper about it.

Toby Ord: Yeah, there’s this paper about it, and you can say, “Well, what is the chance of this risk? Maybe it could be really high and we wouldn’t know, because we’d still see ourselves here”. But then the idea is: hang on a second, we are 13.8 billion years into the Universe’s history, and there are planets that formed much earlier than the Earth. Suppose the risk could be as high as 50% per century. Then you could say, “Well, couldn’t humanity have just evolved a thousand years faster? That’s a vanishingly small fraction of the history of life on Earth. How unlikely would that have been? And it would have saved us 10 centuries of risk, which is a factor of a thousand in the probability”. So you end up saying that, in those cases, we would basically expect to find ourselves at the earliest possible time that we could find ourselves.
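[The factor-of-a-thousand arithmetic can be checked directly. The variable names are ours, and the 50% figure is the hypothetical risk level under discussion, not an estimate.]

```python
# At a hypothetical 50% existential risk per century, every extra century
# survived halves the probability of still being around. So observers who
# arise 10 centuries later than they could have are ~1000x less likely
# to exist at all.
risk_per_century = 0.5
extra_centuries = 10

relative_likelihood = (1 - risk_per_century) ** extra_centuries
factor = 1 / relative_likelihood  # 2**10 = 1024, i.e. roughly a thousand
```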

Arden Koehler: I don’t understand. We should find ourselves at the earliest possible time. But here we are and it’s not the earliest possible time. How do we know that?

Toby Ord: Oh, we don’t know that just from that. We would also need to notice, for example, that there are planets that seem just as habitable as the Earth which formed a billion years earlier. And the reasoning is that we would be astronomically more likely to find ourselves on such a planet than on this one, which would have had to just get lucky through an extra 10 million centuries of risk.

Arden Koehler: I see! So this is evidence against the idea that there are tons of possibilities for humanity to evolve, and we’re just the lucky ones. It actually looks like there weren’t tons of possibilities, so then it would be astronomically unlikely for us to get lucky anyway.

Toby Ord: That’s basically right. And the idea is that you could run a somewhat similar argument to do with how long we’ve been around: there would be almost no one discussing this question. How many planets would there have to be before there were people who managed to get 2000 centuries further on and are having this discussion? Almost all the people wondering about this would be much earlier in the history of their species. Anyway, the disanalogy it’s trying to draw is that this number, how long we’ve survived so far, is what’s supposed to break us out of that anthropic situation. We couldn’t say anything about the probability of events we could only witness by being alive, but we can witness different lengths of survival time, and that’s where the information is coming from.

Arden Koehler: But surely if we’d survived much shorter than we actually had, we wouldn’t have had time to get smart enough to ask these questions?

Toby Ord: That is where some complications come in, to do with reference classes and other very confusing bits of anthropics. I would say that anthropics is such a complicated thing generally that I don’t rely on it at all in the book; as in, I don’t make any arguments that actively use it. But if certain theories of anthropics turn out to be true, that could create challenges to some of the things I say in the book.

Arden Koehler: Okay, well somebody should go away and figure this out.

Robert Wiblin: I think I get it, but I’ll see if I can find a blog post level explanation of this for myself and listeners and stick up a link to that in the show notes.

Arden Koehler: So you make use of the fact that there are all these figures on typical lifespans for species. But it seems like the typical lifespan of a species actually varies a lot, and you give some examples of that: the horseshoe crab has a lineage of 450 million years; the nautilus, 500 million; sponges, 580 million. These are all very tiny, simple species. We are not like that. I don’t know anything about this, but is there some pattern suggesting that tiny, simple species are often much more long-lived, and that the natural extinction rate of bigger, more complex species is higher?

Toby Ord: That’s a good question. I don’t know what the overall relationship is. One confounder is that most species are much smaller than us, at least most animal species; we’re actually very large animals. We fixate on the ones that are bigger than us, like elephants and tigers, but there are a very small number of elephants and tigers, and a very small number of such species as well. So most species are small species, and therefore one would expect most of the long-lived ones to be small as well. It’s hard to factor that out. But there’s also the question of whether being marine species has made them much safer from some of these natural catastrophes.

Robert Wiblin: Also, I think all of these are like at the bottom of the ocean as well, which might be an even more stable environment.

Toby Ord: Yeah, that makes a lot of sense. But I think that they do point the way to what might be possible. If we can protect ourselves from threats to the same degree to which, say, these marine species can protect themselves from threats, create safe environments for ourselves and so on. Then there’s kind of no reason that we couldn’t last for hundreds of millions of years.

Arden Koehler: So this kind of suggests an upper bound or not even an upper bound, but just showing like, “Hey, the horseshoe crab did it”.

Toby Ord: Exactly.

Arden Koehler: And we can too.

Toby Ord: It’s a kind of proof of concept. That’s the bar for what we might hope to achieve. I think we’re often quite unambitious in our hopes for the future, and trying to exceed what the horseshoe crab did could be a lower bound on our ambitions for the future.

Robert Wiblin: All right, we’ll return to that in our vision for utopias.

Arden Koehler: You should’ve put a horseshoe crab on the cover of the book. I feel like that would’ve been cool.

Robert Wiblin: Yeah, I like that symbol. Yeah. Maybe we should respect the elephants less and the sponge more.

Distinction between natural and anthropogenic risks [00:45:42]

Arden Koehler: I want to just introduce the distinction between the natural and anthropogenic risks and why you feel like this is such an important distinction. So you talk about the fact that we’ve been around for 2000 centuries as a big source of evidence that these natural risks are pretty low. Maybe bracketing some anthropic considerations, but you think that basically doesn’t give us any evidence or maybe it only gives us a little bit of evidence when we’re thinking about how vulnerable we’re likely to be to risks that are caused by human action. And that feels like a really important line of argument in the book. So I thought you could just talk about it and talk about why you’re so convinced by that.

Toby Ord: That’s right. This doesn’t show that the anthropogenic risks are high. But you might’ve started with some kind of prior probability distribution over how likely it is that we go extinct from various different causes. Perhaps you were thinking about asteroids, and how big they are and how violent the impacts would be, and thought, “Well, that seems a very plausible way we could go extinct”. But once you update on the fact that we’ve lasted 2000 centuries without being hit by an asteroid or anything like that, that lowers the probability. Whereas we don’t have a similar way of lowering the probabilities of other things from what they might appear to be at first glance. So that’s not to say they are high, just that they don’t have this nice reassuring way of making sure they are low. And there are separate arguments that perhaps suggest that they’re high.

Toby Ord: And one important kind of exception is pandemics, which I imagine we’ll come to later. There are plausible stories about how what we think of as natural pandemics are actually closely interlinked with human activity: we might be at higher risk of initiating them, and if they did happen, we’d spread them around more because we’re more interconnected than in the past. So this is a rare case of something we’d often think of as a natural risk, but where the risk has plausibly been increasing over time, so we can’t help ourselves to this argument in that case either. For that reason, I don’t categorize pandemics as a natural risk. And I think that’s actually a useful way of making the division. It’s always hard to draw the line between the natural and the artificial, because everything’s natural at some level; we are natural too, as humans. So this is a convenient place to draw it: the natural risks are the ones we’ve got this kind of safety argument for.

Arden Koehler: So what about something like “natural pandemics”… Can we use the fact that we have survived for 2000 centuries at all when thinking about how likely they are to wipe us out? Not at all? Or is it somewhere in between?

Toby Ord: Yeah, I think it’s interesting. I hadn’t actually thought about this much while writing the book until you mentioned it. But we can use the fact that we survived the natural risks to suggest that the risk from anthropogenic things might be a bit lower than you would otherwise think, because some of the mechanisms whereby we survived these other things might suggest that we’re more resilient than you might’ve thought at first glance. It could also just mean that the rate of initiating events, such as asteroids colliding with the Earth, is low; that wouldn’t help us feel safe about future things. But some of the reason could be to do with resilience, and that would actually help us feel a little bit safer.

Arden Koehler: Yeah, I guess maybe there have been climate events that we have survived, which might give us some evidence about how likely we are to survive through climate events that are caused by people.

Toby Ord: Yeah, that’s a good example. So we’re getting more into the particulars than merely the fact that we’ve been around for 2000 centuries. But we have been through some quite dramatic climate events. We have lived through glacial and interglacial periods, and we came out of a glacial period around the time agriculture started, about 12,000 years ago. That was a time of radical change to the Earth and to environments across the globe. But actually coming out of it turned out to be very good for us, and seemed to be a precursor to allowing us to have civilization. And that’s not the only glacial-interglacial change we’ve been through. So you could read something into that about the levels of temperature change we’ve survived, though that’s between the current level and colder, not the current level and hotter.

Robert Wiblin: I guess another thing I find generally reassuring is that when I look at my internal model of the world, I feel like the wheels should be coming off all the time, and there should be tons of midsize disasters that kill millions of people. But this just doesn’t seem to happen nearly as much as I would predict, which means my model is missing something. Perhaps I’m just missing how much effort people put into preventing disasters, or how good they are at predicting them and seeing them off.

Toby Ord: I think that’s a very good point. One often thinks, “Well, what would stop someone doing this heinous thing?”, and realizes, “Oh my God, there’s almost nothing to stop someone doing that”. But then you think about the track record, and it seems to show you something about the rate at which people are actually motivated to do that terrible thing. There’s something important you’re learning there: perhaps that they’re stopped, or nipped in the bud, or that things like moral education in schools are stopping people from having those ideas, or detecting them early on. So there are indeed some reassurances we can get from that. But things change, as we might get to when it comes to biorisk. If the tools to do something terrible spread so that a thousand or a million times as many people have them available, then the historical track record only covers the rather small number of people who could have done this terrible thing. And if you then think, “Oh, now we’re going to get a hundred times as many people having the ability as have ever had it before”, the historical track record tells you very little about what would happen there.

Robert Wiblin: Yeah. I guess it also might change the level of psychological stability you need in order to get access to these weird technologies. At the moment, you might have to be a professor to get access, so maybe you’ve gone through a bunch of filters in the first place.

Toby Ord: Yeah, that’s a good point.

Climate change [00:51:08]

Robert Wiblin: Let’s push on and talk about climate change. This is one where you said you’d changed your mind a whole bunch and I suppose my background assumption on this… My guess has been, well, climate change is going to be really bad, but surely it can’t drive us extinct. Surely it’s not going to actually end civilization. People are exaggerating when they say that. I’m really glad that you’ve actually put in some time to look into this and try to form a view. Yeah, what did you learn?

Toby Ord: Yeah, it’s complicated. I’ll give you a whole lot of reasons you should be more concerned, and also reasons you should be less concerned. How they all balance out is a bit unclear, but I’ll treat you to some interesting observations. When I first looked into it, one thing I wanted to be able to address was that some people talk about the Earth becoming like Venus: having runaway climate change where the oceans start evaporating, creating more and more water vapor in the air, and this runs to completion, so basically we’d have no oceans, the temperature goes up by hundreds of degrees, and all life ends, or at least all complex life. I wanted to be able to at least say, “You don’t have to worry about that”.

Toby Ord: It turns out there is a good Nature paper saying that can’t happen no matter how much CO2 is released; you’d need a brightening of the sun, or to be closer to the sun, for it to happen at any CO2 level. Normally one Nature paper saying something is enough to say, “Yeah, probably true”, but there’s a limit to how much epistemic warrant can be created by a single Nature paper. Still, it seems like that probably isn’t going to happen, and no one’s really suggesting it is. But there was another thing that was a bit alarming: something called a ‘moist greenhouse effect’, which is similar but doesn’t go quite as far, though you could still get something like 40 degrees Celsius of extra warming. And the scientists are like, “Oh yeah, you can’t get the runaway one, but you might be able to get this moist one”. From a lay person’s perspective, you think, “Well, hang on a second, why didn’t you include that in the other category? I thought when you were reassuring us that the other thing wasn’t possible, you weren’t saying there’s a thing that’s, for all intents and purposes, identical, which is perhaps possible”. That one also probably can’t happen, but people are less sure, and there are some models that suggest maybe it can.

Arden Koehler: So when you say for all intents and purposes it’s similar, you’re thinking because 40 degrees warming would be all but guaranteed to wipe everyone out?

Toby Ord: Yeah, I guess to the extent to which even a hundred degrees of warming is. You know, maybe we could build giant refrigerated areas where some people could survive and so on and we could come back. If you think about saying the chance that we could set up a permanent base on Mars or maybe a permanent base on Venus–

Robert Wiblin: Antarctica, maybe?

Toby Ord: Yeah, Antarctica. It doesn’t seem implausible that we could do such things, say in the next hundred years. And so maybe it’s not implausible that we could also, albeit with a smaller population, kind of weather such an event. But it’s looking pretty bad and there wouldn’t be much of a discussion about, “Is that an existential risk or not”, if we thought that was happening. So to be clear, I don’t think either of those things are going to happen, but I have found myself, unfortunately, not being able to rule it out to any kind of particularly strong degree of confidence. That’s the first bit.

Robert Wiblin: Don’t they fall foul… I mean, the Earth’s been around for billions of years. The temperature’s gone up and down. It’s been, I think, quite a bit hotter at some points than it is today. And yet, you know, the oceans didn’t boil away.

Toby Ord: Yeah, it’s been much hotter. And this was the line of evidence that I was hoping to use to settle the issue on this in order to then delineate the part of conversation that needed to happen to say, “Don’t worry about those things that you might’ve heard. Worry about these other things and then here’s how they could work”. But unfortunately, this so-called ‘paleoclimate’ data about the long distant past and what the climate was like, it is not that reliable. And also the Earth was different in many ways when these things happened. For example, sometimes when you had these different temperatures, there was a supercontinent instead of the current situation where the continents are all divided up and these caused very different effects in the atmosphere and so on. So the paleoclimate data, you couldn’t just make that kind of assumption that, “Hey, it’s been way higher than this in the past, therefore if it goes way higher it’s not going to cause this problem”. And also, there’s a lot of concern that the rate is important as well as the level of the temperature. And that’s something where the rate of warming at the moment, I think, could well be unprecedented in the history of the Earth. Again, the evidence isn’t great on that because if you think about the temporal resolution that we have, we’re only really measuring the temperature at kind of times many thousands of years apart.

Toby Ord: So it’s hard for us to know if it was actually very spiky in the intervening periods. But it’s at least quite plausible that even though it’s not plausible that this is the hottest that the Earth’s ever been, it is plausible that it is the highest rate of warming and also that that could precipitate serious problems. So unfortunately the paleoclimate data, while somewhat reassuring, is not as reassuring as I’d hoped going into this book.

Robert Wiblin: It’s not dispositive. Okay. So do you want to carry on with the other ways that things go really badly?

Toby Ord: Yeah. So there are various feedback effects that can happen where warming creates more warming. I should say that these are the amplifying feedbacks. There’s also stabilizing feedbacks, where more CO2 release actually then creates more of a sink for CO2. So it’s complicated. There are both kinds of feedbacks. And there are certain effects though which could produce very large effects. So I’ve focused on the ones in the book as to what could do the biggest things? And so the two that I focus on in particular, are the permafrost and the methane clathrates. And so these are two kind of big stores of carbon. One is in the tundra and I think also under the sea: the permafrost. And the other is methane clathrates: an ice-like substance at the bottom of the ocean floor.

Robert Wiblin: That’s full of methane?

Toby Ord: Yeah, that’s right. And both of them contain far more carbon than all emissions so far and in fact more carbon than the entire biosphere. So if they were completely released, we could get very severe warming, much more than from all of our fossil fuel use. But, scientists think they’re probably not going to be all released or if so, it would be extremely gradual over many centuries and so forth. But it’s kind of hard to rule out. Like again, it would be nice to be able to say, “Oh, when you look at it, you find out that it’s still only a quarter as much as we’ve ever released or something like that”. That’s not the case. We can’t help ourselves to the kind of safety on that.

Robert Wiblin: Or we have this superstrong argument why it can’t happen. Why they can’t all melt. We just don’t.

Toby Ord: No we don’t. Scientists aren’t greatly alarmed. They’re not saying that that’s definitely going to happen precipitously or something. By the same token, it’s hard to put bounds on it.

Robert Wiblin: Do you have a sense of how much the world would warm if the methane clathrates just all started melting and the methane went up into the atmosphere?

Toby Ord: So it’s very hard to estimate these things because they go so far outside the known range for the models, but attempts to estimate a very similar thing of what would happen if we burned all known fossil fuel reserves, where they were looking at 5,000 gigatons of carbon, which is actually about the amount in the methane clathrates, suggested between nine and 13 degrees of warming.

Robert Wiblin: Okay, so quite a lot.

Toby Ord: Yeah, a really large amount.

Robert Wiblin: Yeah. I guess coming back from these more exotic scenarios to just the main line thing of what if we just keep burning a whole lot of fossil fuels. Yeah. How did your view shift on how likely that is to be a real disaster for civilization?

Toby Ord: Yeah. I think one of the key numbers here is this thing called the ‘climate sensitivity’. And this is the number that represents how much warming there would be if we doubled the amount of CO2 in the atmosphere. And it’s relatively easy to understand that if there were no feedback effects. However, when there are feedback effects, particularly some of them that are very hard to model, there’s a lot of uncertainty, and the current estimate is that if we doubled the CO2 emissions level, as in the level of CO2 in the atmosphere, that there would be between 1.5 and 4.5 degrees of warming. But unfortunately, this is a very big range and this is actually kind of wild amounts of uncertainty. So the high end is triple the low end and that’s not a 95% confidence interval. That is a two thirds confidence interval. So they’re saying that, “Well, you know, it could be one and a half degrees or it could be triple that”. And when you combine that with the uncertainty about how much we’re going to actually emit, how high the level is going to go. For example, if you think it could be between one and two doublings of the pre-industrial amount of CO2 in the atmosphere, then you end up with an estimate of warming between one and a half and nine degrees, which is an extreme range of outcomes.
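As a rough editorial illustration of how these two uncertainties combine (this sketch is not from the conversation itself; it assumes warming scales roughly linearly with the number of CO2 doublings, which is how climate sensitivity is usually framed):

```python
# Climate sensitivity: degrees C of warming per doubling of atmospheric CO2.
# The ~two-thirds confidence range Toby quotes:
sensitivity_low, sensitivity_high = 1.5, 4.5

# Uncertainty over how much we emit: between one and two doublings of the
# pre-industrial CO2 level (the illustrative assumption from the conversation).
doublings_low, doublings_high = 1, 2

# Warming is roughly linear in the number of doublings (i.e. logarithmic in
# concentration), so the extremes of the two ranges multiply:
warming_low = sensitivity_low * doublings_low      # 1.5 degrees
warming_high = sensitivity_high * doublings_high   # 9.0 degrees

print(f"Combined range: {warming_low} to {warming_high} degrees")
```

Multiplying the two ranges together is what produces the 1.5 to 9 degree span Toby describes.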

Arden Koehler: It does seem pretty plausible that we could end up emitting as much carbon in the atmosphere again and then again, especially over like all time because these things are cumulative.

Toby Ord: That’s right. And I don’t think that we are going to stay on the “business as usual trajectory” or something and just keep following this curve of exponential carbon emissions. But, you know, it’s not impossible. It’s a social science question. One where it’s impossible to kind of really be having 99% confidence in these things and so on. I can imagine scenarios where that could happen. Where, for example, there’s a new Cold War and it’s in one of the superpowers’ interests to just emit as much as possible and they just go for it.

Arden Koehler: Or even if the rate of emissions goes down but it continues to be positive, then it might just take longer but we could still see really substantial warming. Although of course if it takes longer, then we’ll have more time to adapt.

Toby Ord: Yeah, but I agree that we could have really substantial amounts of emissions.

Robert Wiblin: I think something that surprised me was just looking at, well we’ve got uncertainty about the emissions of maybe 2x, possibly 3x. Then we’ve got uncertainty within the model which is big, like a 3x difference in the climate sensitivity. And then we’ve also got out-of-model uncertainty, which is like, “Well, what if our model of this is quite wrong? Then we should increase that even further”. Because yeah, there’s ways that we could be wrong that we haven’t even thought of yet, but they’re not included in this climate modeling. Then you’re like, “Well, I guess 12 degrees is not that inconceivable”. It could be massive. And in fact, the odds of it being over 6 degrees really aren’t that low. Not as low as I thought they were.

Toby Ord: That’s right. And the most extreme number you hear is six degrees. And also it turns out when people say things such as, “We need to do this policy in order to keep warming below, say, three degrees”. What that typically translates into scientifically is in order to keep the median amount of warming below three degrees. But, if we’re wrong about climate sensitivity, it could be five degrees even if we do that policy. So these things are very uncertain, very wide distributions. So I was quite a bit more alarmed after looking into how little is known about this.

Robert Wiblin: Did your opinions change at all on how resilient we would be to these changes. I suppose at the moment it seems like human ingenuity is winning out. The climate’s heating, but we’re getting so much better at farming all the time that, you know, the amount of food output just keeps rising at a pretty good clip. So is it possible that we will just be able to adapt to this because it’s happening over decades?

Toby Ord: I think so. It would still be much worse than if it wasn’t happening. Just to be clear on that for the audience.

Robert Wiblin: We’re talking here about like would we all die? Would it cause the collapse of civilization, which is a high bar.

Toby Ord: That’s right, it’s an extremely high bar. And while there are a lot of things which could very clearly cause a very large amount of human misery and damage, it’s quite unclear how it could cause the extinction of humanity or some kind of irrevocable collapse of civilization. I just don’t know of any effect that could plausibly cause that. There has been some analysis of if you had very large amounts of warming, such as 10 degrees of warming, would it start to make areas of the world uninhabitable? And it looks like the answer is yes. At least for being outside; air conditioning could still work. It’d still be much more habitable, say, than Mars, where people are perhaps thinking of setting up settlements. But also that argument though, if you run it through, it really just suggests that the habitable part of the world would be smaller. So coastal areas are much less affected. High plateaus such as Tibet wouldn’t be moved to super hot temperatures. So there would still be many places one could be. It would be a smaller world. And given it wouldn’t be that much smaller, it seems hard for me to see why civilization would be impossible, or a flourishing future would be impossible, in such a world. That just doesn’t seem to have much to back it up at all.

Arden Koehler: So even if it was a third of the size, then one might think–

Toby Ord: I mean if we heard that someone had found a planet in the habitable zone around a nearby star, but it had a lot of ocean and only had a third of the land mass of the Earth, we wouldn’t think, “Oh, well I guess no need to worry about ever meeting anyone from that planet, because it’s impossible to create a civilization on such a planet”. Or, say, it was only the Americas and you didn’t have Africa or Eurasia or Australia that, oh obviously, you never could have had civilization there or you could never sustain it. That would seem kind of like a pretty crazy view. So I don’t really buy the idea that large enough parts of the Earth could be made uninhabitable either.

Arden Koehler: Well at degrees of warming like 10 or whatever, but if we get up really high, I mean, it seems like it’s not–

Toby Ord: Yeah, I looked at these models up to about 20 degrees of warming, and it still seems like there would be substantial habitable areas. But, it’s something where it’d be very bad, just to be clear to the audience.

Robert Wiblin: Most people are dying.

Toby Ord: Yeah, it’d be very bad. But it’s hard to see any particular mechanism that’s being floated as to how it would happen on model. But my concern is more than just that; it’s about the prior probability. Before you even got into these models or got into the science of it, if we make an unprecedented change to the Earth’s climate, perhaps at a truly unprecedented rate over the last 4 billion years, and also to a level which has only a couple of times been reached or something and never been reached with the current configuration of continents or with a species like us and so on, it does seem like there’s just some plausible chance that this is the end. If you imagine kind of appearing before Saint Peter at the pearly gates and he said, “Hey, yes it was climate change” and you’re like, “How could we have possibly known that making these radical changes to the Earth’s climate that haven’t been seen for millions of years could do us in?”, I think we’d be looking pretty foolish. It does seem like even if we said, “But our scientists looked at these different pathways and none of them could lead to it”. And you’d think, “Well, it could have been one that you hadn’t thought of, couldn’t it?” I mean in the case of nuclear war, for example, nuclear winter hadn’t been thought of until 1982 and ’83, and so that’s a case where we had nuclear weapons from 1945 and there was a lot of conversation about how they could cause the end of the world perhaps, but they hadn’t stumbled upon a mechanism that really could pose a threat. But I don’t think it was misguided to think that perhaps it could cause the end of humanity at those early times, even when they hadn’t stumbled across the correct mechanism yet.

Arden Koehler: Because it was just an unprecedented event?

Toby Ord: Yeah, and there hadn’t been that many people searching for such mechanisms and they ended up kind of getting there from thinking about other planets. Planetary exploration made people think about how very different atmospheres worked and to get some kind of data on what it’s like to have radically different atmospheres or dust storms throughout the whole of Martian atmosphere and things like that. And that made them think about this. But you could easily imagine them just never having noticed that mechanism actually since the Cold War ended shortly after that. And so I think that this is just the kind of thing that on priors, it’s such a big change, but I want to stress that my best guess number for the chance of existential risk due to climate change is about one in a thousand over the century. And that’s mainly coming from this kind of, “I don’t know the mechanism, but that our models aren’t sufficiently good”.

Robert Wiblin: Hey everyone, Rob here — we realised we use the word ‘prior’ dozens of times without explaining it. And it is indeed a bit of a jargony term.

Prior is short for ‘prior probability’, and it originates from Bayesian statistics.

For today’s discussion: you can basically think of it as the thing you believe before you see new evidence.

Let’s say you have a standard 6-sided dice, and we’re looking at the probability of rolling a 2 on any given roll. We roll the dice, I see it, you don’t. I ask, what’s the probability that we rolled a 2? Your answer would be 1/6. That’s your prior.

Then I give you a hint: the number we rolled was even. Now what’s the probability that it’s a 2? 1/3. That’s called your ‘posterior probability’, updating on the evidence that the number was even.

A ‘uniform prior’ is when all possible values are equally likely before you see new evidence, or you have no prior information and you can’t distinguish between possible values. So in the case of the dice, there are six different options – so a uniform prior would say that each one has a 1/6 chance. That’s where you might start from before you consider any empirical evidence you’ve observed at all.
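Rob’s dice example can be written out as a tiny Bayesian update. This is an editorial sketch using Python’s `fractions` module so the probabilities stay exact:

```python
from fractions import Fraction

# The six equally likely faces of the die: a uniform prior.
faces = range(1, 7)
prior = {f: Fraction(1, 6) for f in faces}

# Prior probability the roll was a 2, before any evidence.
p_two = prior[2]  # 1/6

# Evidence: the number rolled was even. Update by conditioning:
# keep only faces consistent with the evidence, then renormalise.
even_faces = [f for f in faces if f % 2 == 0]
total = sum(prior[f] for f in even_faces)
posterior = {f: prior[f] / total for f in even_faces}

# Posterior probability the roll was a 2, given that it was even.
p_two_given_even = posterior[2]  # 1/3

print(p_two, p_two_given_even)
```

Conditioning on the evidence, keeping consistent outcomes and renormalising, is all that the prior-to-posterior update amounts to here.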

Slightly confusingly, in casual speech many people, including me probably in this episode, use ‘on priors’ to mean what we’d expect given our general background understanding of how the world works. So, for example, on priors I’d find it surprising for Taylor Swift to be elected president of the US, because that just doesn’t fit with my general understanding of how US politics functions.

There’s a more in depth discussion of priors and Bayesian inference in episode 39 – Spencer Greenberg on the scientific approach to solving difficult everyday questions.

And if you find the more in-depth discussion about priors later in the episode confusing, that’s very understandable, and you should feel totally fine skipping to the next chapter.

Alright, with that little diversion out of the way, let’s get back to the show.

Robert Wiblin: Yeah. I guess on the thing of the population shrinking… So imagine that the habitable surface of the Earth shrinks by, let’s say, 80% because we just got some massive warming. I guess putting my economics hat on, my concern is that maybe the population that could be sustained from the food in those areas then isn’t enough to maintain the level of specialization and industrial capacity that we have today. And so you get kind of stuck at some level of economic development where there aren’t enough researchers, there aren’t enough factories to produce say the microprocessors that would be needed to reach the next level of economic development. You could imagine, I feel like if the maximum number of people who could ever be alive at one point in time was a billion, that then we’d just get stuck technologically.

Toby Ord: It’s possible, although if you run the clock back and look at when there were a billion people, that would seem to set a kind of limit that you’d at least get to there. And then if you just stopped at that number and stayed there forever, we’d probably imagine there’d be at least quite a bit of extra growth you could have beyond that, particularly as you’ve got a lot of time and also we’ve even now developed a whole lot of these technologies and we would still know how they work and so on, even if we couldn’t devote as many scientists to them and so forth. It could be possible. Like some kind of scale argument like this I think eventually works. If there could only be 10 people–

Robert Wiblin: We’re not going to space.

Toby Ord: Yeah, I don’t think it’s just the case that you need to run things for 700 million times as long before we achieve our current level of economic development. I think that you just can’t get there. So I agree that this argument works at some point.

Arden Koehler: Yeah. I mean you might think the Earth at a billion people, although it would probably get to a much greater level of development than it was when it was at a billion people, if the potential of humanity, and we’re gonna get to this later, is as grand and open and could involve as many huge jumps in technology and ability as maybe it seems like it can right now, it does seem pretty plausible that you’d at least reduce that potential a decent amount by decreasing the chance that we’d ever be able to, for instance, get off the planet Earth.

Toby Ord: That’s interesting. And I’m not so sure that it would decrease it by much. But if it did, suppose it kind of decreased it by half, then it would be half as bad as if it was just an outright existential catastrophe. So that actually would change things a bit in the analysis. So therefore perhaps it is a useful thing to think about. You still have to get a very extreme event I should say before you can get to the kind of point where you’ve reduced the Earth’s habitable surface area by anything greater than a half.

Risk factors [01:10:53]

Robert Wiblin: Yeah. So you introduced this concept of risk factors, which is kind of what we’re talking about here to think about what about things that can’t kill us directly but then can kill us through some indirect mechanism. They might be significant. Do you want to talk about risk factors and how they apply to climate change and other things?

Toby Ord: Thanks. I think this is an idea that there’s some intuitive version of that’s been floating around for a long time, but I wanted to kind of make it a bit more precise so we know what we’re talking about. The idea as I think of it, is that there are certain things which are existential risks. So there’s some kind of threat, such as an asteroid or a supervolcano or climate change where that thing itself could lead to the destruction of humanity or humanity’s long-term potential. Those are the existential risks. But that’s just kind of one way of cutting up the total amount of risk. You could divide it into these silos or something or, you know, vertical slicing of it into different risks. But you could also cut it up in other ways. So you can ask a question such as, “What about if we got rid of the chance of great powers going to war with each other”? The kind of war like the First and Second World Wars and the Cold War perhaps as a cold example of such a war. What if we could eliminate that risk? Like how different would the total existential risk be over the coming century if we, say instead of having the world as it is, we could just press a button and make it so that there was definitely no great power war.

Toby Ord: My view on that is that it would remove something like a tenth of that risk over the next century, because we would be able to deal with things in a situation of more cooperation on international levels, which is quite important, and less building of weapons. And if so, you know, on my one in six thing, then you get something like a one in 60 reduction, say something like a percentage point lowering of the total existential risk that could be attributed to the current levels of threat of great power war. So on that idea, I think it’s actually quite an instructive way of thinking about it because I think there’s a tendency among effective altruists who are interested in existential risk to… Say suppose they hear that someone’s working on asteroid risk, they’d think, “Oh wow, you’re really actually doing it”.
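The arithmetic Toby is doing here, made explicit. This is an illustrative sketch using the numbers he states in the conversation:

```python
from fractions import Fraction

# Toby's overall existential risk estimate for the coming century.
total_risk = Fraction(1, 6)

# His guess for the share of that risk attributable to great power war:
# eliminating such war would remove roughly a tenth of the total.
share_from_great_power_war = Fraction(1, 10)

# The "risk factor" contributed by great power war, in absolute terms.
risk_factor = total_risk * share_from_great_power_war  # 1/60

# As a percentage of total existential risk this century.
print(float(risk_factor) * 100)  # roughly 1.7 percentage points
```

So a tenth of a one-in-six risk comes out at one in 60, on the order of a percentage point or two of absolute existential risk, which is what makes it comparable, apples to apples, with direct risks like asteroids.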

Robert Wiblin: You’re nailing the real issue!

Toby Ord: Yeah, exactly. You’ve got this issue that I think is so important, centrally important to humanity’s future and just so important compared to everything else. Whereas if they hear that someone’s working on world peace or various forms of international relations to try to diffuse tensions between China and America, they might think, “Oh, that’s not one of my people” or something. But actually the asteroid risk is very low. It’s far lower than 1% over a century and it seems that we should take seriously these existential risk factors as other things that we could be working on. And one of the nice things about this formulation is that there’s an apples to apples comparison between the amount of risk posed by a risk factor or by a risk. And so you could actually compare them like that. It’s not an illegitimate thing to do. And then this means that you could perhaps focus on various forms of indirect risk which are created by things, even if those things themselves are not existential risks such as great power war. It’s a different category. For any kind of thing that you can imagine, you could just ask, “What if you didn’t have it”? And then you can understand this question of the risk factors, and they don’t have to add to one, which is another kind of issue about them.

Robert Wiblin: Because they overlap?

Toby Ord: Yeah, they effectively overlap, and so if you eliminate one, then you eliminate another one. It won’t do what it says on the tin for the second one. The first question was how much would it lower it compared to if you hadn’t pushed any of these other magical buttons that eliminate other risks or risk factors. So the maths does all work out, but they don’t have to add to one. And before I get to that on climate, I want to stress that this is a dangerous observation, to be thinking about it like this. I think that many things that we traditionally think of as important for society, such as even better education and things, could be the opposite of a risk factor: a security factor, such that if we ramped it up, we would actually lower risk through that. But there’s this risk from all of that, that a whole lot of people who really cared about this issue just go and do generic work on things that everyone regards as important anyway, and that they work on things that are much less neglected and don’t work on things that are actually much higher leverage because they’re about particular risks which are themselves neglected.

Robert Wiblin: Or I guess things that are risk factors, but are small risk factors and so they shouldn’t be getting most of the attention.

Toby Ord: Yeah, either that it’s a small risk factor or that it’s a very big one, but almost everything actually bears upon it and so you’ll be making a very small contribution or that you can’t find ways to work on it that are as targeted as you can.

Arden Koehler: Is the concern that because once you introduce this way of reasoning where it’s like, “Look, it doesn’t have to be an existential risk for working on it to contribute to reducing existential risk”, it’s sort of seductive to maybe have some motivated reasoning that the thing you were hoping to work on anyway, because you really liked working on it in fact, it’s going to be one of those things?

Toby Ord: That’s the kind of idea. You can imagine, actually, if this goes wrong, just having access to this concept. Whereas I’m worried that without it we’re too insular and too focused on particular silo-based approach to risks and so on. But with it, we grow much larger but it gets too diffuse and the particular kind of specific things that we’re mentioning was where a lot of the value was in really prioritizing and that we lose that. So I do think one needs to be very careful with the idea.

Robert Wiblin: Yeah. Do you want to apply this to climate change now?

Toby Ord: Yeah. So I think that it’s often suggested that climate change might not be so much an existential risk, but that it’s something that would increase other existential risks. So in this case, my terminology would be a risk factor. I think that this is probably right. I think that if we imagine a world, if we could just somehow have the next century but make it so that climate change wasn’t an issue. All of the dedicated altruists who are working on fighting climate change could then work on other things and global international tensions on this would go down and so nations could spend their “altruistic international cooperation” kind of budget on something else. So I do think that that could actually be quite helpful. As to how big it is as a risk factor, my guess would be somewhere between… these are very rough kind of guesses, between about 0.1% and 1%. So maybe a bit bigger as a risk factor, but not an order of magnitude. Probably not a whole order of magnitude bigger.

Robert Wiblin: So you think it’s quite a bit less important than war, or a great power war?

Toby Ord: My guess is that it is less important from the perspective of existential risk reduction.

Arden Koehler: Sounded like some of the main mechanisms you were thinking about by which this could be a risk factor is basically that it distracts people. So the budgets of these governments and of organizations and people’s personal careers will be spent on it instead of on other things that you think might be more important ultimately.

Toby Ord: Yeah. I think distracts is kind of right, but it has the wrong emphasis or something because you can think distraction can’t be that bad. Maybe a better way to think about it is this is a stressor on national and international relations and so forth.

Robert Wiblin: And our capacity to solve problems. So our capacity gets used up trying to solve this thing and then we don’t have headspace to think about something else.

Toby Ord: Yeah, that’s right.

Arden Koehler: What about if some moderately high level of warming comes about such that maybe this actually just ultimately falls into the bucket of reducing our capacity to solve problems, but it seems like if health systems and economic systems suffer a lot, it could leave us more vulnerable to things like pandemics, naturally occurring and engineered? Does that seem plausible?

Toby Ord: Yeah, I think it’s quite plausible that it could leave us more vulnerable to pandemics. Also the fact that effectively a larger part of the Earth would be in a tropical environment. So I think that this is something that is certainly recognized: that there could be more endemic disease and maybe more pandemics as well. But one thing here is that some people are particularly concerned with something that’s called “double catastrophes”, you know, “Well, maybe that not on its own, but what if you had that and something else”? It’s worth noting what has to happen there. If you’ve got these two small probabilities, say one in 100 and one in 100, and you think, “Well, each one on its own”, but having them both together ends up being a one in 10,000 event anyway. It’s what we call second order. And so it’s kind of a bit hard to get these arguments off the ground.
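Toby’s “second order” point in rough numbers. This is an editorial sketch; the probabilities are the ones he quotes, and independence of the two catastrophes is the assumption doing the work:

```python
# Two catastrophes, each a 1-in-100 event per century.
p_a = 1 / 100
p_b = 1 / 100

# If the events are independent, requiring BOTH is "second order":
# the joint probability is the product, far smaller than either alone.
p_both = p_a * p_b  # 1 in 10,000

# The best version of the argument: take a 1-in-10 severe variant of each,
# big enough to interact. Requiring both still multiplies out to 1 in 100.
p_severe_a = 1 / 10
p_severe_b = 1 / 10
p_both_severe = p_severe_a * p_severe_b  # 1 in 100 again

print(p_both, p_both_severe)
```

This is why, absent a strong correlation between the two catastrophes, the combined scenario tends to end up no more likely than the single-catastrophe scenarios it was meant to amplify.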

Toby Ord: Like the best version of this argument would say something like, “Well it’s a one in a hundred event, but there’s a one in 10 version of it which would be big enough to interact with other ones and another one in 10 version and together though, that just ends up at one in a hundred again, so it’s hard to actually get these things where they both have to happen to actually be likely enough”. You would kind of need some correlations between them to be really happening, but I’m not sure that this pandemics case induces enough of a correlation. Effectively, if the previous risk level was what I was saying for natural pandemic of about one in 10,000 per century–

Arden Koehler: Well that’s of us going extinct from an actual pandemic, not of a–

Toby Ord: Yeah, but then how much would a world with extreme climate change have to increase that chance by. You don’t have to really multiply it by a lot in order to be making a big difference there.

Robert Wiblin: For that to be the main way that’s having impact.

Toby Ord: Yeah, where you say, “Well, there’s a one in 20 chance that climate change is extreme beyond this level and then if you had that thing happening it would increase this other thing by a factor of 10”, but I think it’s hard to get these numbers to actually work out to be making large contributions. But I could be wrong about that. Maybe I’ve had trouble doing it, but other people haven’t really had that much of a go at it and I haven’t really been challenged on it. So I would be open to people putting together the scariest looking cases of how you could get these things interacting.

Arden Koehler: I mean one thing that what you’re saying suggests is that maybe some of the most serious ways in which climate change or something else could be a risk factor is by impacting the other bigger risks. So you know, even if you think there’s a plausible mechanism for increasing some other existential risk that we can think of, it really matters how big that other existential risk is for how much that translates into being a risk factor.

Toby Ord: Yeah, and so I think it may even be the case that, say, the median level of climate change, the stress that that creates on international institutions and governments and so forth, is large enough to change the biggest risks, such as AI or engineered pandemics: to increase them by a 10th or something like that, compared to if we definitely just didn’t have to worry about all of these challenges of climate change. That could be a mechanism that produces a significant amount of risk as a risk factor. But it’d be interesting to see some robust conversation about that rather than… This is just me kind of sketching out some combinations of numbers, where I find it a bit hard to see how it would really work. But the people at CSER, the Centre for the Study of Existential Risk in Cambridge in the UK, they are quite concerned about this and they think that climate change is a much bigger existential risk than I do. And they think this is largely through risk factors, and largely also through things to do with the collapse of civilization. So, you should talk to those guys.
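[The arithmetic Toby is sketching can be made concrete. A minimal illustration, using the per-century estimates from The Precipice for natural pandemics (~1/10,000) and unaligned AI (~1/10), and the hypothetical climate figures he floats in the conversation (a one in 20 chance of extreme climate change, a tenfold multiplier, a one-tenth increase):]

```python
# Illustrative numbers only: pandemic and AI risks are the per-century
# estimates from The Precipice; the climate figures are the hypothetical
# values Toby uses for the sake of argument, not real estimates.

natural_pandemic_risk = 1 / 10_000   # per-century extinction risk

# Interaction with a small risk: even a 1-in-20 chance of extreme climate
# change that multiplies natural pandemic risk tenfold adds very little
# in absolute terms.
p_extreme_climate = 1 / 20
multiplier = 10
added_via_pandemic = p_extreme_climate * (multiplier - 1) * natural_pandemic_risk

# Interaction with a large risk: median climate stress raising AI risk
# by just a tenth adds far more.
ai_risk = 1 / 10
added_via_ai = ai_risk * 0.1

print(f"Added risk via pandemics: {added_via_pandemic:.2e}")  # 4.50e-05
print(f"Added risk via AI:        {added_via_ai:.2e}")        # 1.00e-02
```

[On these numbers, acting through the biggest risks contributes over 200 times more risk than acting through a small one, which is the point both speakers are circling.]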

Robert Wiblin: Yeah, we will. Yeah, we’ve got some episodes on climate change coming up. My anecdotal impression is that people are really drawn to these stories where multiple different things go wrong. And I wonder whether it’s related to this phenomenon where, when you add more details to a story, even though it’s making it more specific and in a sense more unlikely, because it’s more vivid and people can picture it better, they think it’s more likely — even though it kind of strictly has to be less likely, the more things you add onto it.

Toby Ord: That could well be right. I don’t know. I mean, it has the problem though when you know a bit about the heuristics and biases–

Robert Wiblin: You can come up with something for everything!

Toby Ord: Exactly. If you’re in an argument you can kind of think, “Maybe you’re just biased, because I’ve read a paper which didn’t replicate which suggests that you are–”

Robert Wiblin: You could come up with a very specific term, like the ‘multiple risk bias’ that I just named, put capital letters on it, and now it’s a thing.

Toby Ord: So it could be, but it’s also somewhat dangerous.

Robert Wiblin: Yeah. A listener wrote in and was curious to know what you think of the argument that climate change is very in vogue this year, probably next year. It’s a very hot button political issue. So maybe that’s a reason to get on board and push it over the line in terms of getting major policies done. And so the fact that it’s not neglected right now in terms o