Transcript

Rob’s intro [00:00:00]

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.

Sometimes we take a more indirect route, but today’s episode tackles that question, of the world’s most pressing problems and how you can use your career to solve them, in the most direct way possible.

Last year we published a single page summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift.

All of us added something to it, but the single biggest contributor was today’s guest, our CEO Ben Todd, who founded 80,000 Hours with Will MacAskill back in 2012.

This key ideas page is the most read on the site, and by itself can teach you a large fraction of the most important things we’ve discovered since we started investigating high impact careers.

But when I say single-page summary, it’s perhaps more accurate to say it’s a little book, as it weighs in at over 20,000 words.

Fortunately, it’s designed to be highly modular and many people work through it over multiple sessions, browsing through the articles it links to on each topic.

Perhaps though, you’d rather absorb most of our key ideas in the form of a conversation on this podcast. In which case you’ve hit play on the right MP3, because that’s exactly what this is.

One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we’re least sure about.

If you want to have a big impact with your career and you're only going to read one article from us, we'd say read our key ideas page.

And likewise, if you’re only going to listen to one of our podcast episodes, it should be this one.

One note is that our advice is going to keep shifting, and we’re aiming to keep the key ideas page current as our thinking evolves over time.

Of course, that’s not going to be possible with this podcast, so this represents our views as of November 2019, when we launched the current version of the key ideas page and actually recorded this episode.

OK, without further ado, here’s me and Ben Todd discussing our new guide to solving the world’s most pressing problems with your career.

The interview begins [00:02:18]

Robert Wiblin: Thanks for coming on the podcast, Ben.

Ben Todd: Hey, it’s great to be here.

Robert Wiblin: So this is going to be a pretty different episode from the usual ones because, as you might guess listeners, I have a reasonable idea of what Ben is going to say in most cases, as we’ve been working on this together for the last couple of years. So instead, we’re going to assume that listeners have either been subscribed to the show for a while and have heard a few other episodes, or at least have skimmed over the key ideas page. And then we’re going to basically introduce all the issues that we raised in the key ideas page and then try to talk about some of the things that we perhaps didn’t have space for, some of the more interesting and controversial issues in that topic area. But before we get to all of that, maybe we can start with: Ben, could you explain how you ended up running 80,000 Hours and what the history is there?

Ben Todd: Yeah. So what got me involved was I saw Toby Ord give a talk at my college when I was an undergrad studying physics and philosophy and Toby was also at my college and he gave his ‘Taking Charity Seriously’ talk that you can see on YouTube and that basically convinced me pretty much right away… I’m not going to rehash the things that were in there, but it was basically the idea that some charities you could donate to are much more effective than others and the best ones are super effective. So it’s a really worthwhile thing to donate some of your money to those charities. And about a week later I signed the ‘Giving What We Can’ pledge and I think I was the first non-founding member of Giving What We Can. It was right after the launch and then I was just involved with this community in Oxford of people who were trying to figure out the most effective ways to have a positive impact.

Ben Todd: At that time, most of us were students. One of the big questions facing us was what should we do with our own careers? And I was wondering the same. And so I volunteered to give a talk on how to choose a career and how some of the ideas that Toby covered in Giving What We Can might also apply to career choice. Will MacAskill happened to be in the same room and was also coincidentally thinking of doing a talk on the exact same topic. So he suggested we team up, and we worked on the talk together, actually in my room in Balliol. And yeah, in early 2011 we gave it for the first time, and that actually turned out to be the most successful talk we’ve ever given. I think there were maybe low twenties of people in the audience, and in the end, over the years after that, about six of them I think made pretty big changes to their careers based on it.

Ben Todd: One of them, a year or two after the talk, joined 80,000 Hours and is still with us. Someone called Habiba decided to take the further pledge, which was a pledge to give all of her income above a threshold, I think around 25,000 pounds, to charity. She stayed involved and actually recently switched to working as a senior administrator at the Future of Humanity Institute. So yeah, these kinds of changes convinced us that we might be on to something. Some people in the audience, such as Richard Batty, actually approached Will and me and said maybe we should start an organization around this idea, in addition to Giving What We Can.

Robert Wiblin: Yeah, I wasn’t around there at the time. Did you guys spend very long debating whether you actually wanted to start an organization and kind of who should run it?

Ben Todd: Probably back then, we were a bit too cavalier with starting organizations. Probably early on we set up too many things, and the Oxford community should have just focused on making one project go really, really well, though obviously it turned out well in my case. So yeah, it didn’t seem like we debated it much, but of course we started it just as a volunteer side project while we were still students for a whole year. Then we ended up getting pretty major press coverage on the BBC about the idea of earning to give, we grew our blog quite a bit, and then we managed to raise a couple of hundred K of funding, which was enough to hire full-time staff. And with all the traction we were getting over that year, at that point it seemed obvious that we were onto something.

Robert Wiblin: Yes. At that early stage, 80,000 Hours, I suppose it was called High Impact Careers first and then 70,000 Hours and then 80,000 Hours.

Ben Todd: Yeah. Not many people know that we were briefly called 70,000 Hours. Yeah. In the summer of 2011 it was High Impact Careers and we started with a beautiful red and black website. Then we rebranded to have our color be magenta, which most people called pink, and then we switched, well, we switched to the blue actually more than a year or two after that I think.

Robert Wiblin: Important design issues there. We should dig up the website from 2011 on archive.org; I think that would be pretty funny to look at. Anyway, early on 80,000 Hours was doing all kinds of things. It had a blog. It was running all sorts of events. But I guess over time it gained a bit of focus, stopped using just interns, and actually hired some people.

Ben Todd: Yeah, I think like most startup nonprofits we made the mistake of trying to do too many things at the same time. We were kind of thinking of ourselves partly as an advocacy campaign around certain big ideas in effective altruism, partly as giving one-on-one careers advice, partly doing lectures, partly doing research into what to actually say about these topics, and all of these activities are pretty different. And for a small team it’s way too much. So in the first few years we just really tried to focus the concept down a lot, and eventually we settled on: what we’re about is giving people information and support to help them change their career. And now we just do two main things, which is online content and one-on-one support.

Robert Wiblin: You were at some point considering doing earning to give yourself, then decided to found a nonprofit instead, which I guess may have been kind of contradictory to our advice at the time. How did you end up making that decision?

Ben Todd: Yeah, I mean 80,000 Hours early on got very associated with earning to give, and the structure of our very first lecture was the arguments for earning to give and why that could be better than most typical social impact jobs. But then the second section of that talk was about how you might be able to do even more good than earning to give, and we covered research and advocacy careers, and government jobs, and threw out a whole bunch of other ideas that seemed like they might turn out to be better. And so even at the very start we never thought that earning to give was typically the best option all things considered. And yeah, that was very obvious in my choice, where I was choosing between earning to give and working at 80,000 Hours. And so yeah, I had a job offer to work in investment, in finance, which is something I’d been interested in actually since I was a teenager.

Ben Todd: So I think that would have been pretty interesting. I would have probably enjoyed it. But yeah, we really got convinced by that time by what we call the multiplier argument, which is just this idea that if you could just change one person’s career and they’d go and do something really high impact, then that’s kind of having as much impact as you could have in the whole rest of your career. And so if every year you could help one person switch into a high impact career, then you’re having way more impact than you could yourself directly. And we actually thought we could do much more than help one person change career per year.

Ben Todd: So we thought it through and in my case, just carrying on with 80,000 Hours would be the highest impact thing to do. Actually back then we hadn’t really thought of the concept of career capital and so I just totally ignored that in the decision. But actually I think, just by luck, I actually turned out to get better career capital by doing 80,000 Hours than I would have in finance just because I’ve met lots of other people who want to focus on social impact and I learned a lot about how to run a social impact organization. And so actually maybe if I’d known about career capital back then it would’ve led me in the wrong direction. But fortunately I was doubly wrong and it canceled out.

Robert Wiblin: Career capital for those who don’t know is this term of art we use to describe everything that puts you in a better position to have a bigger impact in the long-term, including skills, connections and credentials.

Common misunderstandings of our advice [00:10:16]

Robert Wiblin: I guess part of the reason that we’ve written this key ideas article is to try and make our ideas super, super clear and very upfront, because we found that sometimes people don’t quite understand what we’re saying, or they read some part of our advice and don’t understand the big picture, and as a result they end up doing something that we wouldn’t advise if we were talking to them one-on-one. What are examples of classic or common misunderstandings people have had of our advice, or cases where we’ve communicated our ideas in not such a good way and that’s resulted in people making those kinds of decisions?

Ben Todd: One thing we’ve realized in recent years is that at least the core of our audience really do just want to get the big ideas directly and explained with a lot of nuance. I think partly this podcast has shown there’s an appetite for that. And so we decided to recently replace our career guide online with what we’ve called the key ideas series. The career guide is still up, but we now make the key ideas series the most prominent one and it just tries to lead with our most distinctive ideas quickly, such as longtermism, existential risk, things that make us the most different from other people who might give you information on what to do to have a social impact. Another part of why we want to do that is because we thought some of our most important concepts are actually getting a bit lost in the existing career guide.

Ben Todd: And so there’s been a couple of misunderstandings we think we’ve been coming across recently, and the one that’s most on our mind at the minute is just how people think about career capital. In particular, a lot of people get the idea from our content that typically the best thing to do is to work in consulting early career, or some other prestigious corporate job like that, rather than trying to do something more directly relevant to a pressing problem. For instance, we’ve seen people who just have great direct impact options right away. They could do something like go and work at a top think tank on AI policy, but they’re thinking, “Maybe I should go and work at PwC first to get better career capital”. And it just seems better to push on in policy, because even just from a career capital point of view that’s giving you a lot of other options, in addition to being closer to having an impact.

Ben Todd: One example we came across recently was a bit of an unusual one, but it was someone who had the realistic possibility of becoming a magician, maybe landing a major TV show in India, and they were choosing between that or otherwise consulting. And it actually seemed to us like the magician case was more exciting because, well, firstly that’s kind of a more impressive level of achievement, so you might stand out more. But also, you’re learning all about media and building up an audience, and you’re doing this in the context of a really important country. So for those kinds of reasons, you might well get better career capital from actually becoming a magician rather than a consultant. Whereas I think most people reading our advice would think that, in general, consulting is the thing we’d want people to focus on.

Robert Wiblin: Yeah, it somehow kind of seems more serious and so they might think, “Oh yeah, that’s the thing that 80,000 Hours would recommend”. But I guess we’re very keen a lot of the time to have people explore interesting things that provide unique opportunities for them that other people don’t have. Of course it’s somewhat hard to communicate that as we’ve mentioned on the show before because of course we can’t have a priority path as ‘magician in India’ anywhere on the site because that’s such an idiosyncratic opportunity. But yeah, I guess that is a common misunderstanding that people think that we want to pigeonhole them into particular common career tracks rather than encouraging them to do things that are particularly promising opportunities for them.

Ben Todd: Well yeah, so you always have to weigh your personal fit and comparative advantage against how good the path is in general. And if you have sufficiently high personal fit in a path, it can outweigh that area generally being less promising.

Robert Wiblin: We use this term comparative advantage, which is a little bit jargony. I guess comparative advantage is about how good you are at something relative to the other people in your organisation, or whoever you’re trying to coordinate with. So you could end up doing something which you are in some sense absolutely less good at, but where you’re filling a gap relative to what those other people can do.

Ben Todd: Yeah. And we have a whole article about comparative advantage to define it a bit more clearly.

Robert Wiblin: I guess another mistake that I sometimes see is people giving too little weight to choosing the right problem to work on. If you ask them, they might think working on one problem, problem A, is like 10X more effective than working on problem B, kind of all else equal. But then they’ll end up working on problem B just because they managed to get a job in that sooner, or they feel that they have a somewhat better personal fit for it, or maybe they just haven’t spent that long thinking about it. And it feels like they’re potentially sacrificing a lot of the gain they could get if they actually really researched which problem they want to work on long term and made a conscious decision about that.

Ben Todd: Yeah, I mean it’s very hard to wrap our head around these massive differences in impact that might be possible and this is one of the big motivations for 80,000 Hours which is just it seems like some options might just have more than a hundred times more impact than another. And so that kind of means that, say, 10 years working on the more effective area could be equivalent to a thousand years working on the second. So really, really huge differences.

Ben Todd: Yeah. Sometimes it’s easy to maybe intellectually accept the case, but then when it comes down to specific options, kind of our intuition is more that things are roughly similar. We actually did a survey on this where we asked people how much they thought charities differed in effectiveness in terms of saving lives, and I think the typical answer was only about 1.5 times.

Robert Wiblin: Yeah, it was really small differences. I think people thought that the best charity, versus the median charity working in the developing world, was maybe twice as effective. Whereas I guess we would think it’s more like a hundred times, or at least tenfold.

Ben Todd: Exactly. Yeah. And this gets us to the whole idea of 80,000 Hours: if you can make your career just one percent better, it’s worth spending 800 hours doing that, because that’s one percent of the 80,000 hours in your career. And if you could find an option that’s a hundred times better, it would be worth spending almost your whole career searching for it and only doing it for the last six months.

Robert Wiblin: I’ve lost count of how many times I’ve said that over the years.
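To make that arithmetic concrete, here’s a minimal sketch. The figure of roughly 1,000 working hours in six months is an illustrative assumption, not something from the conversation:

```python
CAREER_HOURS = 80_000  # the roughly 80,000 working hours in a typical career

# A 1% improvement justifies spending up to 1% of those hours finding it.
print(0.01 * CAREER_HOURS)  # 800 hours

# If an option 100x better exists, even working on it for only the final
# six months (assume ~1,000 working hours) beats the whole default career:
print(100 * 1_000)   # 100,000 hour-equivalents of impact
print(CAREER_HOURS)  # versus 80,000 from the default option
```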

High level overview of what 80,000 Hours generally recommends [00:16:14]

Robert Wiblin: All right. So before we get into the body of the key ideas article, maybe let’s just start by mapping out the process that, at a very high level, we would recommend that people generally follow if they want to have a really high impact career because that will kind of map out the structure of the conversation to come.

Ben Todd: Here’s the very high level overview of the overall process we generally guide people through when we’re advising them on what their long-term plan should be and what the next step should be. And it’s also roughly the structure of the key ideas series itself, it moves through things in this order. So the first question we encourage people to think about is the question of which global problems are most pressing and then which ones you want to work on. Once you’ve got that, then it’s about coming up with career options that might help you especially effectively address those problems. So that’s like figuring out key bottlenecks facing those issues and how you might resolve them. And then from there you want to be trying to get a short list of promising long-term options together. So that’s the kind of idea generation phase. Then we have a strategy section, which is then more about how do you narrow those down?

Ben Todd: And so a really big question there is just which one’s the best personal fit for you? Another big question is whether it’s more effective to first gain career capital: will that accelerate your career more, or should you just enter directly into one of these long-term paths? And then finally there’s the section on decision making: how do you actually figure out, once you’ve got a short list, what your next step should be? And so the key thing is to try to identify your key uncertainties about these different options and investigate them, and then we recommend that generally people take what we call the upside option, which is the option that would be the highest impact if you performed towards the top end of your expectations.

Ben Todd: That said, doing that while having clear backup plans. And then finally we roughly recommend that people review their career about once a year or whenever they face a major decision point, and a little bit less often if you’re more confident and a little bit more often if you’re early career or you’re more uncertain about what to do. And then the rest of the time, just focus on doing as well as you can in the option that you’ve chosen.

Robert Wiblin: All right. So that’s a lot to digest, but we’ll gradually work through each of those phases and explain how we would imagine that someone could do a good job with them.

Key moral positions

Robert Wiblin: All right. So the key ideas article starts with a very foundational question of our moral views. So yeah, how would you summarize the key moral positions that 80,000 Hours takes that are relevant to people’s career decisions?

Ben Todd: So yeah, we start the key ideas series by just stating some of the key values behind our advice. We don’t really justify these things because that gets into a lot of complex debates in philosophy. But we just thought it’s important to try to clarify them and be transparent about our values. And the kind of question you can have in mind here is just: what does it actually mean to make a difference? What are we taking that project to mean, and how does that then carry through into the rest of our advice? So yeah, we highlight a couple of key principles on the key ideas page.

Impartial concern for welfare [00:18:17]

Ben Todd: So the first value we highlight is the idea of impartiality. And so we take 80,000 Hours to be the project of trying to do good from an impartial perspective.

Ben Todd: And yeah, exactly how to define impartiality is, again, a tricky and debated issue, but very roughly you could think of it as: we want to treat everyone’s interests equally. So that means we don’t want to privilege people based on their particular nation or social group or ethnicity, where they are in the world, even potentially what time they live in. There’s a tricky issue then of how nonhuman animals fit into that, and we’re unsure about exactly how their interests should be weighed. But we would say their interests should be counted to at least some degree.

Robert Wiblin: I guess one approach would be that their interests should count to the degree that they’re capable of having interests. So you might think different species potentially have different strengths of interests because they’re designed differently. That would be one way of reconciling impartiality without giving the exact same weight to a fly as a human.

Ben Todd: Yeah. Though, like I say, those are all controversial. So yeah, we’re trying to treat everyone’s interests as equal. Then there’s a question about what people’s interests are: in what way are we actually trying to help them from that perspective? And broadly we say that we should be roughly focusing on increasing welfare. By welfare we just mean something very broad about how good people’s lives are: how happy, long, fulfilling, and so on their lives are, and how much they can avoid needless suffering. Again, there’s lots of debate about exactly what welfare should consist of. Ultimately, does it mean satisfying people’s preferences? Is it just a matter of people having more positive mental states than negative mental states? There are other views as well. We’ve actually found that these questions don’t seem to make that much difference to our choice of problem areas, which is the more decision-relevant thing that we’ll come onto later.

Robert Wiblin: Of course people are potentially going to value things other than just welfare, impartially considered, but I guess you’re saying our career advice is going to be focused on that because it’s something that a very wide range of people care about to some extent and will have as one of their career goals.

Ben Todd: Exactly. Yeah. I think almost everyone cares about doing good in this impartial way to some degree. Very simple toy example where you might see this is if you could push a button and it would cause a stranger to suffer. Most people would think, well, I shouldn’t do that. Even though I’ve got no particular attachments to that person, I don’t know who they are. And that kind of shows that we do actually care a bit about the interests of strangers and just people in general. And I think many people also share an interest in welfare. Again, there could be many other things that matter, but all else equal, generally if people have better lives where they’re more fulfilled and more happy and they have less suffering, then many people can get behind that as one important goal to push towards.

Ben Todd: Yeah. So like we say, we don’t think that’s the only thing that matters, but it’s an important goal. And it’s also one where we think that, just given how the world is right now, due to technology, all the amazing wealth that we find ourselves with, and potentially the moment of history that we’re in, our actions today can have these really big effects on the long-term wellbeing of large numbers of people. And so it’s also a particularly important factor to focus on.

Longtermism [00:22:32]

Robert Wiblin: Okay, so that’s impartiality. The second part of our moral philosophy is longtermism, which is this emerging set of moral ideas. We’ve had several episodes where we’ve discussed longtermism in some detail, and even one episode that was over two hours, which was almost entirely focused on it. And even that didn’t manage to consider all of the different possible motivating reasons and possible objections and the back and forth about that. So longtermism is pretty hard to boil down, but one possible one-sentence description would be that the most important moral consequences of our actions are the impacts that they have more than a hundred years into the future, possibly more than a thousand years into the future. Now I’ll just try to lay out how you could potentially get to a conclusion like that.

Robert Wiblin: The first observation would be that, as many philosophical views imply, the welfare of people in the future is at least an important consideration. It’s not clear necessarily whether the welfare of a person in the future is as important as the welfare of a person today, but at least it should be given some weight. The second thing is to note that the future could be extremely long: the universe is going to be around for a very long time, and it’s also extremely large. There’s a lot of space, a lot of energy, a lot of matter, and it could be converted into things that are valuable and could support many people living for a very long time. And potentially the lives of people in the future could be a whole lot better than they are now, inasmuch as science and technology over the last 200 years have probably improved people’s welfare, and we might expect that to continue for at least some period of time.

Robert Wiblin: And then the third thing would be that there are things we could do today that could improve the welfare of people who will be alive in the future, with somewhat predictable effects that would be positive rather than just completely random. One way of demonstrating that might be to imagine that an asteroid were heading to earth: if it hit earth and everyone died, then there’d be no people around in the future and their lives would just be completely ended. If we could prevent that from happening, it’s clear that would have quite a persistent impact and could result in humanity surviving for hundreds or thousands of years more, maybe even longer than that.

Ben Todd: Maybe one of the things to emphasize is that each of these three premises could easily be wrong. Maybe there won’t be that many people in the future. Maybe we can’t affect them. But the key thing is just that there’s at least some possibility that there could be so many people in the future, and some possibility that our actions could affect them. Those two things mean that the argument kind of goes through, and in particular, just because the stakes are potentially so large, it could well be the dominating consideration.

Expected value and counterfactuals

Robert Wiblin: Yeah. So one kind of latent philosophical assumption that’s going on here is the idea of expected value. That we focus on maximizing expected value, which is the size of the reward multiplied by the probability of getting it. So basically we always think that if something is good and you’ve got like a one percent chance of getting it, it’s twice as good if you have a two percent chance of getting it. Twice as likely is twice as good or twice as bad.

Ben Todd: Yeah. And then that’s an important component of how we end up with longtermism.
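Written as a formula, this is a minimal way to state the idea Rob describes, with purely illustrative numbers in the comments:

```latex
% Expected value: the probability of getting an outcome times its value.
\mathbb{E}[V] = p \cdot V
% Doubling the probability doubles the expected value, e.g. a 2% chance
% of an outcome worth 500 equals a 1% chance of one worth 1,000:
% 0.02 \cdot 500 = 0.01 \cdot 1000 = 10.
```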

Robert Wiblin: Yeah. So here we’ve got a group that, in expectation, matters to some degree which is potentially extremely vast in number, like much more vast than the number of people who are alive today and we can potentially affect them through actions that we take now that will have kind of predictable consequences. And that could potentially get you to this conclusion of longtermism, that the most important effects of our actions are those that affect things in the world in more than a hundred years time. Of course there’s many objections that one could potentially field. So people might say, “Well actually, people in the long-term future don’t matter at all morally. They’re not moral patients as far as we’re concerned”. Which conceivably could be true. But I think we think that is fairly unlikely and not true in expectation.

Ben Todd: Probably the most important philosophical position behind that objection is what are called person-affecting views. We’re not going to cover that here, but you can read more about some of the reasons for and against those views in our article on longtermism. And you can also listen to our podcast with Toby Ord.

Robert Wiblin: Maybe an even better source is actually the podcast with Hilary Greaves from the Global Priorities Institute. She’s written this really neat summary paper about population ethics, which goes into different person-affecting views, which I guess march under the banner of, “We’re about making people happy rather than making lots more happy people”. And she discusses some of the pros and cons of them, and also why she, I suppose like us, ultimately doesn’t find them to be the most plausible theory. Although I suppose they are, as you said, among the most popular philosophical views that people take, which could be a big challenge for longtermism potentially.

Robert Wiblin: You might also think, well, it’s just so unlikely that we could survive into the future so then in fact, there aren’t going to be many people in the future. But that just seems kind of empirically wrong. It seems there is a decent chance that humanity could survive for just a very long time. There could be a whole lot of people. Maybe a third, more complicated objection that some people will raise is just that yes, we have very big effects on the long-term future, but they’re just so unpredictable that it’s very hard for us to improve how well the future goes.

Robert Wiblin: Because it’s just so chaotic and random that one can’t really make it better through any deliberate actions that we take. And I think there’s something to be said for that. But again, there’s this expectation argument that it’s possible that the effects of actions are just completely random. Like whether they’re positive or negative or neutral. But I think there’s a reasonable chance that in fact there are things that we can do that would make the future better. Like, reduce the risk of a nuclear war–

Ben Todd: And we’re actually going to consider specific examples. I think the way to assess that is just to look at specific proposals and whether they seem convincing or not.

Robert Wiblin: Yeah. And I guess, especially since we haven’t been looking for opportunities to do this for all that long, it already seems that we have some pretty plausible things that we could change about the world today that would potentially make the long-term future go a whole lot better. And then there’s even more philosophical and practical objections to all of this, which we don’t have time to go through today as I said, but that kind of gives a really quick synopsis of what longtermism is.

Ben Todd: Yeah. I mean one other big motivating idea, in terms of why to prioritize taking this perspective, is the neglectedness case. Our existing political and economic systems really seem to optimize a lot for the interests of people who are presently alive, because we’re the people who have the votes, we’re the people who have the money, and so we’re driving what happens in the world. Whereas the interests of future generations are often entirely neglected. But if we imagine that everyone who will ever live had a vote, future generations would massively outvote presently alive people and have a big say over what happens. And because they’re being hugely neglected, we should actually expect there to be pretty effective ways of helping people who will exist in the future.

Robert Wiblin: So yeah, I was making the argument about the big scale of the issue, because there are so many future people, and about tractability, that there might be things we could do now that could change the long term. And you’re completely right: it’s very neglected by our economic system, which mostly makes consumer goods for people who are around today. It’s neglected by the government because people who don’t exist yet can’t vote and can’t influence them. And I guess it’s even neglected by the nonprofit or civil society or charitable foundations. I think, in part, just because many of them haven’t been–

Ben Todd: The issue is it’s a new idea.

Robert Wiblin: Yeah. It’s kind of a new idea. So it hasn’t been brought to people’s attention. So there’s potentially a lot of low hanging fruit that’s been left there by other people.

Ben Todd: Yeah. This position only really got staked out in the last couple of decades by philosophers. Derek Parfit was a big figure in that. And then Nick Bostrom and now several other philosophers at Oxford and other places. So overall you might think of longtermism as the idea that when we consider the question of ‘how to make a difference?’, we should just simplify that down to the question of ‘how can we best help the long-term future?’. And we’re not entirely sure that perspective is correct, but we’re kind of convinced enough by the arguments and also because that perspective is so neglected that this is one of the most promising areas to explore and potentially work on for someone who wants to maximize their impact from an impartial perspective.

Robert Wiblin: Yeah, it’s hard to find really high leverage opportunities to improve the world that someone isn’t already taking. To some extent, sometimes you have to make bets that people are making a mistake about something or that a not universally accepted view might be right. And among the bets we could make on a view that isn’t universally accepted, ones that seem especially promising and especially action guiding, and that would give us concrete ideas for things that people could do with their career to have a really big impact, longtermism stood out as unusually likely to be much more widely accepted in the future than it is today. And it could be a framework that would allow us to find many opportunities for our readers to have a big impact.

Ben Todd: One other common objection people give to working on longtermism is just that it’s very hard to get motivated by it because it’s such an abstract idea. And I think that was true for many of us as well when we first got interested in this area. But I think over time, thinking more about the specific problems that we can address, it’s become more motivating. One big thing for me was recently reading Daniel Ellsberg’s book about nuclear war. The level of insanity that is exhibited there, where the present generation is just taking this massive gamble with the whole future: we’ve set up a system by which, with almost no notice, we could all just go up in flames due to this kind of crazy machine that we’ve built. And by doing that we would not only potentially cause this disaster in the present generation, but we might actually put all of history at risk.

Robert Wiblin: Yeah. All of these people who have no say, who are just going to be completely screwed over by us because of our own incompetence or selfishness.

Ben Todd: Another thing was a recent paper about the Fermi paradox that Anders Sandberg explained on the podcast, where he actually argues that there’s quite a good chance that our current civilization is the only sentient life in the universe. And again, just the idea that the universe might be entirely empty except for this one planet, and that there are all these ways in which we’re just playing roulette with that entire future, I found really motivating.

Robert Wiblin: Yeah. I’m somewhat motivated by this image of a totally unnecessary nuclear war, all of the just totally unnecessary destruction that that would entail. But I think I’m also motivated on a day to day basis by a more positive vision of the future. So I sometimes imagine that if you could transport someone in time from 1500 to today, I think they’d just be like, “This world is incredible”. There’s so many people, and you’ve got laptops and medicine and television and all these drugs that improve your health, and science that lets you understand so much about the universe that we didn’t understand.

Robert Wiblin: And then I just think that if we were able to transport ourselves 500 years into the future, and civilization manages to stay on track and continue advancing in the same way that it has in the past, we would similarly just be astonished by how incredibly amazing humanity’s accomplishments are and probably how amazingly good their lives are. Because they’d just have so much more capability to make their lives amazing and get rid of all of the problems that we have to worry about today. And it just seems so sad for us to, through our short-sightedness, snuff out this potentially amazing future when it’s completely unnecessary.

Ben Todd: Yeah. I mean one response to that is there are some ways in which the world has probably become worse in the last couple of hundred years, where factory farming is one of the examples we’ve talked about before. I just wonder what would you say to that?

Robert Wiblin: Yeah. I mean it is possible that on balance, the world actually has gotten worse in some ways. It could be that the suffering on factory farms is so severe as to outweigh the benefits to people. I guess if you took that view, then you might well just want to focus on making sure that those things don’t continue into the future. And I do think that work to eliminate factory farming, or to make sure that it definitely doesn’t persist forever, is really valuable. On the factory farming case specifically, I think it’s very unlikely that it will persist forever, because it’s a way of making food that, we can already see, might well become technologically obsolete within our lifetimes, let alone over 500 or a thousand years, when we’ll be able to make food in just so many better ways.

Robert Wiblin: And just more generally, it seems as if, as people have become more empowered to shape the world with more and more flexibility because they have more technology, by and large they get rid of these really negative side effects when they can. Yeah, there’s ways that might not happen, which is, for example, a reason to spread good moral values, so that people in the future would just find it intolerable to have this suffering created for small benefits to them. So that would be another possible longtermist approach: trying to shape those values so that the future goes better, and trying to eliminate the possibility of these really negative things existing for very long periods of time.

Ben Todd: Yeah. I think it’s just important to note that the case for longtermism doesn’t rely on the future necessarily being on this good trajectory where it’s getting better and better. The key thing is just that ultimately there could be either much better futures or much worse futures, and there might be something we could do about it. In that case you should be a longtermist. It might affect which longtermist projects you want to work on, rather than whether to take a longtermist perspective in the first place.

Robert Wiblin: Yeah, it’s interesting with the factory farming case. So someone who is specifically not a longtermist who’s worried about factory farming might think we have to shrink factory farming right now. Someone who is taking a longtermist perspective on it might be like, “We have to make sure that we don’t get locked into a situation where factory farming persists forever, or for thousands of years into the future”, which might lead to a slightly different edge on how you would tackle that problem. Although it could ultimately cash out in just trying to make Beyond Burgers really amazing.

Ben Todd: Just going back to motivation, I think one other quick point in terms of what actually motivates me day to day: there are a lot of things like writing this specific article, which is useful, or advising this person, which was motivating because I helped them figure out their career decision. And I think a lot of jobs are like that. Even if your long-term aims are quite abstract, often there are much more concrete things day to day that also provide a type of motivation, even if you’ll never see the beneficiaries of your actions.

Moral uncertainty and moderation [00:35:39]

Robert Wiblin: All right. That was a bit of a diversion there into what actually motivates us to take an interest in longtermism on a practical level, but let’s move on to the third aspect of our ethical views, which is moral uncertainty and moderation. How would you describe that one?

Ben Todd: Yeah, I mean, this is actually just a bunch of different things pushed together, but the general idea is we don’t think you should just be an extremist about one moral position and one set of values. And there’s a couple of different reasons for that. One is just the kind of pragmatic reason that when people are extremists about their values, it often seems to just cause a bunch of harm. It doesn’t actually lead to the best consequences in the long term.

Ben Todd: If you look more generally at the track record of people who’ve just really tried to push one value system at the expense of others, it doesn’t look like a great track record. So there’s just this, “Well, this isn’t actually the best way to go about things” argument. There’s also a coordination and trade argument, because we want to be able to work with other people who maybe don’t exactly share our values but have a bunch of overlap, or even people who just have different values. So it’s better to make some concessions to what is valued in general, rather than basically trying to screw other people at every turn just to get a slight edge on what you care about.

Ben Todd: Then there’s actual moral uncertainty itself, which was covered in the podcast with Will MacAskill. Just that we’re actually very uncertain about what ultimately matters. So we think that suggests we should consider a variety of different perspectives when it comes to what’s valuable, and then try to pursue actions that seem reasonable on a balance of perspectives, or at least ones that don’t seem terribly wrong from any one perspective.

Robert Wiblin: This is an interesting one, because there’s so many different considerations pointing in this direction, each of which seems very strong and potentially quite decisive, and then you throw them all together and it seems like a pretty robust general conclusion.

Ben Todd: Well yeah, I mean figuring out exactly what moral uncertainty implies is very challenging. But yeah, we kind of just boil it down to: it seems like in general there’s an argument for moderation. And I think one big thing is just not doing things that seem clearly wrong from a common sense point of view, even if your intellectual case for impact backs them up. Yeah, that “Don’t do crazy things” principle seems pretty reasonable.

Robert Wiblin: Yeah. So you’ve got this tendency that people probably don’t take enough account of philosophical uncertainty about these meta issues. Then we’ve also got that I think humans are not inclined to take full account of all the empirical uncertainty, all of the relevant things that they might not know.

Robert Wiblin: And we’re also probably not inclined to take disagreement from other people in the world seriously enough. We kind of tend to just go with our gut judgments. Even if other people who seem as informed and smart as us disagree, we don’t go, “Well, someone else disagrees with me. Basically, I should just put equal weight on their view as on my own.”

Ben Todd: Well that’s almost a fourth argument it seems, which is something along the lines of people aren’t as epistemically humble as they should be, so you need to correct in that direction a bit more by being a bit more humble than intuitively you might think.

Robert Wiblin: Yeah. Then there’s this issue of if you’re just completely dogmatic about your own views and not interested in compromising with the rest of the world, then well, the rest of the world will hate you and potentially try to stop you, try to thwart you because they’re going to view you as kind of a hostile entity.

Robert Wiblin: Then there’s even another one, which is we want to create a culture internally among 80,000 Hours and the people who listen to our advice, and I guess the effective altruism community which we’re somewhat a part of, where people coordinate well and get along with one another and are friendly rather than backstabbing at every opportunity when they think that it’s advantageous to them.

Ben Todd: Yeah, that comes under pragmatic reasons. It’s one mechanism by which you get those benefits.

Robert Wiblin: So yeah. Moral uncertainty, modesty and reasonableness seem pretty robust conclusions. Are there any philosophical schools that you think are at odds with these ideas, where it’s just clear that they wouldn’t be interested in our advice because they disagree too deeply?

Ben Todd: I think when people just debate morality in general you always focus on the interesting disagreements between positions and people don’t spend nearly as much time kind of talking about, “Well, we all agree that if you can save a life with a low cost to yourself, that’s a good thing to do. And if you can save two lives for similar costs, then that’s an even better thing to do.” But then that’s all you need to accept to think that you should take a lot of the kinds of things we’re going to cover super seriously. And yeah, it’s just because these big differences in impact aren’t intuitive, we tend to in practice not focus on them very much, but on an intellectual level, we can all agree they matter.

Robert Wiblin: I guess actually another topic that we haven’t taken a view on, and which doesn’t seem terribly decision relevant, is kind of demandingness, or whether it’s obligatory to help people or whether it’s just a good thing to do. And I guess there’s disagreement on the team and also it doesn’t seem to really affect what we should say or even really what people ought to do.

Ben Todd: No, I agree. But we’ll come onto that question later, and actually, exactly how demanding you think morality is turns out not to be quite as important a decision as it first looks.

What are the most pressing problems to work on? [00:40:33]

Robert Wiblin: Okay. So that kind of tackles the really broad philosophy. Let’s narrow down to the next section, which is what are actually the most pressing problems to work on, which as we’ve said we think is a very important issue and potentially something that people don’t place as much weight on as they ought to. What do you talk about in that section?

Ben Todd: Yeah, so again we’re coming at this from a longtermist perspective, and then the question is which issues are most crucial to address today in order to help the long-term future? And I mean this is really an emerging area of research, like I said, so we’re not clear what the answers are going to be. But the general view that people are coalescing on is basically that what you want to look for is these crucial moments in history where it might be possible to change how the long-term future turns out.

Ben Todd: Phil Trammell has recently been calling these ‘hinge moments’. Though you should actually think of them more as ‘hinge periods’, because it’s not like a specific moment. Other people hate that nomenclature. Some people have suggested it should be called ‘pivotality’.

Robert Wiblin: I don’t like that. I think ‘critical junctures’ is one I’ve heard in history.

Ben Todd: Okay. Yeah. But the idea is most things probably just kind of wash out in the long term. We’re not really sure what persistent effects they’re going to have. Then it seems there are certain categories of things where there’s a kind of element of lock-in or irreversibility, where there’s basically a point in history where something could really go one way or the other way and stay like that for a very long time. That’s what you’re looking for in a hinge.

Ben Todd: So from that, we basically end up with two camps of longtermists. The first camp, I’m going to call them the ‘targeted existential risk’ camp. And they’re people who think, “Well, there are these things which look like they might be hinges.” Basically there are potential existential risks, that is, things that could dramatically decrease the value of the future, indefinitely.

Ben Todd: A really clear example would be if an asteroid strike ended civilization: then obviously the value of the future is zero from then on, unless life evolves again on earth or something like that, but you know it’s at least a very long-term setback, probably. So yeah, if you think there might be some chance of an existential risk happening in our lifetime, that seems like it could easily be the crucial thing to work on from a longtermist perspective. So in how we prioritize causes, first we think some issues around AI might pose existential risks, so there’s AI safety. Then secondly, global catastrophic biorisks. Thirdly, nuclear war. Fourthly, climate tail risks. Those are the kinds of areas we’ve selected within this category, where it seems like maybe in the next couple of decades there’s important work to be done in reducing those risks.

Ben Todd: Then the second camp of people is quite a diverse camp, but basically in this camp, either you think, “Well, we don’t know what the hinge of history is yet. Maybe it’s those issues, but maybe there’s other issues that are even more important”. Or maybe you think you do know what the hinge of history is, but maybe it’s going to happen a very long time in the future. So you’ve got these two dimensions, you’re very uncertain, and the hinge might be a long way off. So there’s not really a name for this category yet. One name that’s being kind of used tongue in cheek is that that’s the boring longtermists and the boringness comes from the fact that you’re saying now is not an especially important moment in history. Now is just like any other moment, so we’ve no reason to think there’s a key hinge moment on our doorstep.

Ben Todd: Whereas the people who are really focused on AI, that’s more saying, “Well, the hinge might well be here. It’s AI.” So that’s where the name comes from. But you know, people also call it patient longtermism or maybe broad longtermism. Hopefully we’ll settle on a good name soon. So yeah, within that camp there are then three main categories of problems or issues that people work on. If you’re just really not sure what the hinge of history is, then one option obviously is to just do more research and try and find that hinge. So that leads to global priorities research being a key area within that. Another way to see why global priorities research is important is that we might also just be wrong about this whole longtermism thing. So for either reason, you might think just doing more research into these big picture questions is really important. So, that’s the first one.

Ben Todd: The second one is you could do some kind of capacity building, so you could basically help grow the resources of people who care about the long-term future such that when this hinge moment does come up in the future we’re in a better position to deal with that. And that’s why we kind of see the effective altruism community, and building that, as really important. If we were convinced that it was, say, all about AI, or all about biorisk, then we would probably just focus on building the biorisk community or the AI safety community. But the point is that we might be wrong about what the key problem is, so by building this community of people who are interested in working on a wide range of issues and flexibly switching, we’re able to get there to be more people interested in working on whatever the most important problem turns out to be in the future.

Ben Todd: There’s also other ways you could build capacity. One option is just to try to save money and grow that as much as possible. Interest rates are kind of set by everyone else, and everyone else is very impatient: they want to get money to benefit themselves fairly soon, so that makes interest rates quite high. But if you’re just happy to wait and play the long game, you can save your money and gradually compound it faster than the world economy grows, and thereby have more and more influence over time, and then in 1000 years your foundation will be able to have a big impact on the long-term future.

Robert Wiblin: Yeah, if it’s still around.

Ben Todd: Exactly. So it’s pretty unclear whether that’s a practical proposal then. There could be less extreme versions of that, that are sensible. You could also just kind of focus on building your own career capital or say building a network of people in policy who are interested in longtermism, so that maybe just in a 30 year time period or something like that, these ideas are much more represented among people who might be making the decisions that are relevant to these things.
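To make the compounding idea concrete, here’s a minimal sketch. The 5% return and 3% growth rates are purely illustrative assumptions, not market estimates:

```python
# Minimal sketch of the patient philanthropy idea: a fund that compounds
# faster than the world economy makes up a growing share of total wealth.

def relative_influence(years: int, fund_return: float = 0.05,
                       economy_growth: float = 0.03) -> float:
    """How much the fund grows relative to the world economy,
    if both start at 1 and compound annually at their own rates."""
    return ((1 + fund_return) / (1 + economy_growth)) ** years

for years in (50, 100, 500):
    print(years, round(relative_influence(years), 1))
# Roughly 2.6x after 50 years, 6.8x after 100, and ~15,000x after 500,
# which is why Rob's "if it's still around" caveat matters so much.
```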

Ben Todd: Yeah. Then the third category within boring longtermism is what gets called ‘broad interventions’. So these are just things to work on that will help society navigate lots of different future challenges, whether that’s future existential risks or other types of hinges. The problem we’ve written about online in this category before is improving institutional decision making, where the idea is that many of these challenges will be navigated by big institutions in society, and if you can help them be better at making complex decisions, then we’ve got a better chance of overcoming these challenges, whatever they turn out to be.

Ben Todd: But yeah, this is also our greatest area of uncertainty about which problems we might recommend. In particular, a lot of people interested in boring longtermism think there are going to be some other promising areas, vaguely around how to improve international coordination, how to improve international governance, and how to reduce the risk of great power conflict. The basic idea there is just that a lot of these big global challenges really seem to boil down to coordination problems between major powers, and if you can just get those countries somehow working together better on issues like climate change, but also all the other issues we talk about, then you might make the world generally more robust to these challenges.

Ben Todd: So, yeah, I’ve been talking about the two camps: boring longtermism and targeted existential risk focused longtermism. I just want to clarify that this is actually a spectrum of options and everyone kind of agrees that both types of activities are important. The interesting question is more about exactly where the balance should be within the community between these two different things.

Robert Wiblin: And I suppose actually also what portfolio we want. Because presumably we don’t want to go all in on either one of these strategies. There are good reasons to have a bit of a range of different approaches both to hedge our bets, and also so we can learn about how well these different options are playing out.

Ben Todd: Well yeah, I mean I wouldn’t actually say it’s about hedging bets because that makes it sound like it’s about risk aversion. But each approach has some diminishing returns. So you want to do a little bit of each. And then as you said, it’s also about the exploration point where we’re uncertain about how good each approach is. So trying them out lets us learn that and then we can maybe reevaluate which ones to focus on later.

Robert Wiblin: You’ve done a really good job there mapping out the spectrum from people who think that we live in a special moment where you might be able to have an outsized influence, and we kind of know what the nature of the hinge is and how we might be able to influence it, to the other side, people who think the current time is unremarkable; there’ll be important moments in history when we’re going to affect the direction that things go, but they’ll be at some point in the future and we don’t know what they’ll look like so instead we have to kind of prepare for them in some general way or some much more flexible way.

Robert Wiblin: My impression is that you’ve become quite a bit more excited about the second category, the more boring longtermism, lately. I’m a little bit more skeptical of that category and maybe think we don’t want to jump too quickly towards all those things. Because some of the challenges I think they face are, let’s say you’re trying to do a broad intervention where you improve decision making in the world as a whole. That’s difficult, even within one organization. There are a million organizations in the world, and if you want to improve decision making, it’s very hard to know whose, and where. You could end up having a lot of influence, but if it isn’t with the right people or in the right places for what turns out to be the pivotal moment or critical juncture in the future, then it can all be wasted, and you can end up dissipating your energy a lot.

Ben Todd: Yeah, I think I’d agree with that. I think within the second camp, the stuff I’m most excited about is more targeted community building work and global priorities research, so we actually recommend them on a slightly higher tier. And then things like improving institutional decision making in general, I’m a little bit less convinced that that’s the highest priority at the margin, though obviously, as always, if you had a particularly strong comparative advantage in that area, then I think it would be interesting.

Robert Wiblin: Yeah. I mean there are some exceptions to that, where there are areas where we don’t know exactly how they’ll end up being pivotal, but there’s a good chance that they will be, like relations between the US and China. It’s pretty obvious that that going badly could really send humanity off track, so even if we don’t know exactly how those countries might end up in conflict or how that could be avoided, building capacity to improve those relations in the future seems like a pretty decent bet.

Ben Todd: Yeah. Those broad interventions are our area of biggest uncertainty. We don’t really have strong, concrete suggestions right now, but it’s an area I’d most like to see more research done in to try to uncover maybe a more niche thing in there that turns out to be really good.

Robert Wiblin: It does seem like quite a few people are getting interested in that and starting to try to explore that space, so I’m hopeful that we’ll get a bit more clarity on it over the next couple of years. Especially, I guess people going into global priorities research might be able to bring some attention to it.

Ben Todd: I think partly we need more research into the area. I mean it’s also interesting to consider that it may be really helpful just to have people exploring these paths and trying to figure out good things to do within them. It’s quite a bold career move in a way to just try to explore a new problem area by yourself. But obviously if you succeed then you might uncover a thing that many other people in the community could go and do and have a big multiplier on your impact.

Robert Wiblin: A really difficult question here that I would love to know the answer to is how big are the differences in effectiveness of having an extra person or an extra million dollars going towards these different problem areas?

Ben Todd: Yeah. One problem rather than another.

Robert Wiblin: Yeah. As you were saying, I think people’s typical intuition is that working on the right problem might make you two or three times more effective than another one.

Ben Todd: Well, I would say the typical advice is just there’s no way to compare between different issues in the world. Just do what you’re most interested in, do what you’re most passionate about, and if you’re already passionate about recycling, work on that. If you’re really passionate about positively shaping the development of AI, then do that one, as many people are.

Robert Wiblin: Or a mix. Yeah. So maybe the typical advice is just don’t even consider it, but then I suppose there are other people who would think about it and might say, “Well, the differences are modest”. So I guess maybe they think these personal fit issues are more important.

Ben Todd: Exactly. And that’s why then following your interests, following your passion comes in. Because that’s, in theory, a proxy for personal fit.

Robert Wiblin: Yeah. Whereas I think we’re kind of trying to figure out if it is a 10X difference or a 100X difference, a 1000X difference, a 10,000X difference.

Ben Todd: I mean we’re pretty much totally the opposite to the common advice in a way, because we actually think probably your choice of issue is the most important question to get right.

Robert Wiblin: Yeah. And it’s a bit frustrating because it’s hard to prove this, because we don’t really get measurements of how much good groups do, especially in this longtermist stuff where we actually won’t see whether that… well, we’ll probably never know even if we could see the fullness of history, because we won’t know the counterfactual, or we won’t know whether these things helped. But especially from where we stand today, where they’re trying to have influence in the very long term, it’s impossible to really say with confidence the effect of any particular person or project right now. We kind of just have to have a model of the world and try to guess how much impact they’re having, or use broader theoretical considerations and try to go on that.

Robert Wiblin: But one reason that I think there probably are really massive differences in effectiveness between different problems is that we use this importance, tractability and neglectedness framework. We break down how effective it is to work on a problem into, first, how big is the scale of the problem, that is, how big would the benefit be if you could fix some fraction of it?

Ben Todd: Yeah. Though then for us, that actually boils down to just ‘how much does it help the long-term future’?

Robert Wiblin: Yeah, how much does it increase the expected value of the long-term future? Then, how many resources are going into working on this problem, where I think we have a pretty strong intuition that on average you get logarithmically declining returns to working on a problem, at least once you’re talking about serious money or serious numbers of people going into working on it. That means each doubling of resources is probably equally useful, because you kind of run out of the most useful ways to solve a problem over time.
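As a rough formalization of that intuition (our gloss, not a formula from the episode; U, k and r are just illustrative symbols): if the good done with r units of resources grows logarithmically, each doubling adds the same amount of value, and the marginal value of an extra unit falls in proportion to how much is already invested:

```latex
U(r) = k \ln r
\;\Rightarrow\;
U(2r) - U(r) = k \ln 2 \quad \text{(the same for every doubling)},
\qquad
U'(r) = \frac{k}{r} \quad \text{(marginal returns fall as } 1/r\text{)}.
```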

Ben Todd: And yeah, there’s quite a bit of empirical evidence that in many areas you get these logarithmic returns. For instance, I think with Moore’s law, they’ve actually roughly measured that the resources going into that type of research have had to double consistently in order to maintain the constant growth rate.

Robert Wiblin: Yeah. Actually across scientific research in general, it seems like we’ve been just piling more and more researchers onto scientific questions and getting less and less return, probably because the questions are getting progressively more difficult.

Robert Wiblin: So then we’ve got the kind of third factor, tractability or solvability, where I guess we probably think that the differences there are somewhat smaller, maybe than other people think, or at least somewhat smaller than the other two factors. But it’s very hard to kind of measure that. So we’re perhaps a little bit unsure about it. But anyway, if you just naively plug in numbers for these things across lots of different problems, it seems like you get ludicrous differences. Like 10,000X differences between the problems that we recommend and the problems that many people work on.

Ben Todd: Yeah, maybe it’s just worth working that through very quickly. Taking this longtermist perspective, with many issues we’re just not really sure what effects they have on the long-term future. Whereas some of the things we’ve identified, such as reducing an existential risk, seem to have a massive benefit for the long-term future. Although it’s hard to quantify, there seems to be some really big difference in scale there.

Ben Todd: And then when you look at neglectedness, again, it’s very hard to measure neglectedness precisely. One kind of guide is just looking at the amount of money flowing into different areas. So just to look at that, US education has around $1 trillion a year spent on it, to an order of magnitude. Climate change has a couple of hundred billion dollars spent on it internationally. So it seems that climate change is something like 10 times more neglected than US education, just measured by the total amount of resources going into it. But then issues like global catastrophic biorisks and AI safety research only have tens of millions of dollars per year spent on them. So I think that means you’re looking at about a factor of 10,000 difference in the resources, at least the kind of direct resources, going into them right now in the short term.
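Just to make that arithmetic explicit, here’s the back-of-the-envelope version using the order-of-magnitude spending figures Ben quotes (the exact dollar values are rough placeholders, not precise data):

```python
# Rough annual spending, to an order of magnitude, per the conversation.
us_education = 1e12   # ~$1 trillion/year
climate = 2e11        # ~a couple of hundred billion/year
ai_bio = 1e8          # tens of millions/year, taking the upper end

print(us_education / climate)  # -> 5.0 (Ben rounds this to "something like 10 times")
print(us_education / ai_bio)   # -> 10000.0, the factor-of-10,000 gap
# Combined with logarithmic returns, where marginal value scales as 1/r,
# a ~10,000x gap in resources r suggests ~10,000x higher marginal returns,
# all else (scale, tractability) being equal.
```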

Robert Wiblin: And then I guess some people would want to say, “Well, some of those things are so much harder to solve than others. It might be 10,000 times harder to solve”. But I suppose I don’t think that, and I think watching people try to go and pioneer these areas has vindicated the view that these things weren’t as hard to solve as people imagined, because they do just seem to be getting traction, as you would expect in solving any other practical problem.

Ben Todd: Yeah, exactly. It doesn’t seem like it’s 10,000 times harder to make progress on AI safety than on climate change. In fact, actually it seems if anything, it’s easier because the field is so much smaller, which is exactly what you would expect from neglectedness.

Robert Wiblin: So if you just plug in these numbers naively, you get these very large seeming multiples, but there’s a whole bunch of factors which kind of attenuate that conclusion, or should make us doubt it. One is just that it’s a very strong and surprising conclusion on its face. So there’s the idea that extraordinary claims require extraordinary evidence, and here we’ve only offered theoretical considerations and these really broad measurements, so it would make sense to be reluctant to draw such a strong conclusion from evidence that doesn’t seem super compelling.

Ben Todd: There is a debate about whether we should even find it surprising, which is the issue of how strong our priors should be about how big these differences can be. In one school of thought, if you think there’s a factor of a hundred thousand difference, that means you think, say, one person working on the most effective area has the impact of 100,000 people working on more mainstream problem areas. And you may just think it seems implausible that one person can easily have such an outsized impact on the world, so we should be skeptical of it.

Ben Todd: But then there’s another side of things where it’s just, well, very little effort is going into helping the long-term future, because our political system and people’s interests in general are all focused on very short timescales. So we might just think that society’s not doing a very good job at improving the long-term future, and that would mean there might be amazing opportunities lying around that just aren’t being taken. So someone who really does care about helping the long-term future is then able to have way more impact than normal from a longtermist point of view.

Ben Todd: This is called the issue of market efficiency for doing good; is it an efficient market, in which case it’s hard to have outsized impact, or is it just wildly inefficient in which case we shouldn’t be surprised that we find really big differences.

Robert Wiblin: Yeah, I guess if you just look around and see that nobody is trying to improve the long-term future in the way that we’re conceiving of it, then it does seem a lot less surprising that you get a 10,000 fold difference in the effectiveness of things that people are doing, because people aren’t crowding into the areas where they could have a large impact; no one’s even thinking about it. But I guess there are other reasons to be a little bit more skeptical. One is that you might get flow-through benefits. So a project that’s not the most important one in our view might have side effects that benefit the things that we think are most important.

Robert Wiblin: So for example, if a hundred people go and work on global poverty, they might kind of solve that problem to some extent, and this will cause someone else to decide not to work on that problem because it seems like it’s more handled and then go to work on a different issue.

Ben Todd: Once you get to this point where there are people who might work on either issue and they’re kind of unsure which one to do, then it seems like it starts to get quite hard to sustain massive differences, because each area is influenced by everyone else. Maybe this happens on quite a large scale. Like Bill Gates has said he thinks AI safety is an important issue, but he’s not working on it himself because he’s already chosen global health as his focus and he’s going to stick with that. But maybe if no one was doing anything on AI safety at all, Gates would switch into it.

Robert Wiblin: Yeah. Or alternatively you can imagine, because Gates is doing the global poverty stuff, that’s going to prompt some other billionaire to go and work on global catastrophic risks instead.

Ben Todd: Exactly. Yeah. And even if that just happens to a tiny, tiny degree, like 1 part in 1,000 of the resources funging like that, then it kind of caps the difference in effectiveness between the two things at a factor of a thousand.
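Spelling out the logic behind that cap (our reconstruction of the argument; v_A, v_B and f are just illustrative symbols): suppose a unit of resources sent to the less effective cause B indirectly moves a fraction f of a unit into the most effective cause A. Then a unit given to B is worth at least f times a unit given to A, which bounds how big the effectiveness ratio can be:

```latex
v_B \,\ge\, f \cdot v_A
\quad\Longrightarrow\quad
\frac{v_A}{v_B} \,\le\, \frac{1}{f},
\qquad
\text{e.g. } f = \tfrac{1}{1000} \;\Rightarrow\; \frac{v_A}{v_B} \le 1000.
```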

Robert Wiblin: Yeah. So funging is this slightly strange term of art that people use in effective altruism to refer to spillover or flow-through effects, when an action that you take changes the behavior of other people. A classic case would be: if I donate money to a charity and that uses up its room for funding, then another donor who would have supported it doesn’t give to that organization, but gives to a different one. And in a sense, what my donation has actually caused is the funding of this second organization that the other donor then gave to as a result of my donation. So the effect of your actions can sometimes be a little bit hard to pin down, because of how they change other people’s actions. And you get the same kind of phenomenon potentially with jobs: if you take a job, it’s possible that someone else would have taken it if you hadn’t, so in fact what you’re causing is the spillover of the job that they go and take instead.

Ben Todd: Yeah. And we would say that that second person has funged you and the job that you took. Or you can imagine the donation case. There are two donors supporting a charity. The charity has a budget of $100, and each donor gives $50. If you were to donate more, say $60 instead, the other donor might just reduce their donation to $40, so the charity overall is still at $100. Then that donor has funged your donation, because they’ve made it so you have less impact on the charity.

Robert Wiblin: Yeah, it comes from the term ‘fungibility’, which describes objects that are all completely replaceable with one another, just as a $1 bill is the same as any other dollar bill. But anyway, we’ll move on from that.

Ben Todd: Yeah, I guess there’s an Open Phil post where they talk about some of the challenges that GiveWell have faced with this.

Robert Wiblin: Another thing is that, even apart from resources funging, you can also have side effects where a project has a positive effect on the long-term future even if it wasn’t designed for that. So you might think, say, working on US education in general is not terribly effective, because lots of people are trying to do it and it seems relatively difficult to fix the problems that remain. But even if it’s relatively ineffective, improving general wisdom among students, making people smarter and more informed, probably has some diffuse positive effects on the long-term future. So again, if you think it’s 1/100th as effective for the long-term future by that measure, then that sets an upper bound of 100 on the ratio of the effectiveness of working on that versus something that’s more targeted.

Ben Todd: Yeah, I mean maybe this is a bit of a digression, but lots of things potentially help improve the long-term future, in a way that actually makes the scope of everyday actions much larger. So just having a kid maybe increases the population, in expectation, by a slight fraction for a really long time. So you’re actually having this big positive impact. Though that one would be more of a speed-up in character rather than a trajectory change, so it’s probably dominated to quite a large degree by reducing existential risk. Whereas something like education maybe does actually also reduce existential risk a bit.

Ben Todd: Though again, it wouldn’t be too surprising to me if it was a factor of 10,000 less levered because it’s just so undirected at the particular hinges that we think are most important. Yeah, most things probably do help the long-term future but you might still have very large differences between them, I think, on that argument.

Robert Wiblin: I mean, that’s a somewhat open question: “Do these actions generally help?” You can imagine some people thinking that having more children is bad for the long-term future because it creates more problems in the short term, before we can fix them. So even if having more people in the short run is positive, if it also has a very, very slight negative effect on the probability of extinction or catastrophe, then that effect would end up swamping the benefit.

Ben Todd: Yeah, exactly. I mean, I think you only need one kind of funging argument like the one we mentioned to already think that there’s a cap in the difference in potential effectiveness.

Robert Wiblin: We can also throw in empirical and moral uncertainty about these things, and it’s very hard to know exactly how to quantify that.

Ben Todd: Exactly, and in theory that should be taken into account in your assessment of scale. But then it’s very hard to do all these things in practice, so you probably want to correct for them later.

Robert Wiblin: Another thing is that it’s exceedingly hard to know how neglected problems actually are. So sometimes we say, “Oh, well, only $10 million has been spent on this so far”. But that can be very misleading, for two reasons. One is that in the past, lots of money has been spent solving problems in general, and very often some of that subtly, indirectly works on this other problem.

Robert Wiblin: So, for example, not very much money specifically goes towards preventing aging. There are relatively few labs that would call themselves anti-aging labs. But everyone who’s been working on biomedical research in general has been making improvements in chemistry and in our understanding of biology, which has, to some extent, indirectly been bringing us closer to the point where we can tackle that problem directly, and it’s very hard to know how to quantify all that.

Robert Wiblin: So that’s the past. Then looking to the future, for issues that are very pressing, really burning issues where the scale is really big and no one’s working on them, there seems to be some tendency, at least at the moment, for resources to get directed towards those. So if a problem is particularly neglected now, it may not be so neglected in the future. Then you have to ask: on net, across all of time, is it neglected?

Ben Todd: Well, yeah. Well I guess what you actually care about is how many resources get spent on a hinge before that hinge happens.

Robert Wiblin: Right yeah, before that hinge, and so we don’t know when the hinge exactly will be.

Ben Todd: So then you’re trying to predict what resources are going to be spent on that thing in the future as well. So yeah, this all means that those naive numbers I gave earlier are probably not the correct way of thinking about it. Probably the differences are less severe than the very direct numbers I mentioned. But the differences in neglectedness on the naive measure were so large, say a factor of 10,000, that I’d still expect there to be large differences even once we take into account these attenuating factors. I think there’s also just a very strong argument that we should expect helping future generations to be neglected, as I mentioned earlier.

Robert Wiblin: Because I guess the market doesn’t really reward it very well, voters don’t really reward the government for it, and we can just directly observe that not that many people are thinking about it.

Ben Todd: There’s an old Paul Christiano blog post on our website where he talks about, if most people only care about a thing 1% as much as you care about it, you should expect to have roughly a hundred times as much impact by focusing on it.
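One way to formalize that claim (our gloss, not necessarily Christiano’s exact model; w, c, r* and U' are illustrative symbols): suppose an efficient market for doing good funds each cause until its marginal value, weighted by how much the typical funder cares, equals some common rate c. If the typical funder cares about your cause with weight w relative to you, the marginal opportunity they leave behind looks 1/w times better by your lights:

```latex
w \cdot U'(r^{*}) = c
\quad\Longrightarrow\quad
U'(r^{*}) = \frac{c}{w},
\qquad
\text{e.g. } w = 0.01 \;\Rightarrow\; U'(r^{*}) = 100\,c.
```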

Robert Wiblin: Yeah, I guess another thing is that we’re often focused on tail risks and unlikely events, and it seems like humans are very bad at thinking about those, because they weren’t so important in the environment in which we evolved, and they’re also just very difficult for any mind to really wrap its head around.

Ben Todd: Yeah. Though I’ve never been quite convinced by the heuristics and biases arguments for why X-risks should be neglected, because we know in other cases people like–

Robert Wiblin: Like terrorism.

Ben Todd: … way overreact to small probabilities of bad things happening. So I think you have to make a more detailed argument about the nature of the risks.

Robert Wiblin: Yeah, I think that’s right. I think it’s just the case that with unlikely things, we’re just all over the place. It’s not obvious that we spend too much or too little, it’s just that we–

Ben Todd: Sometimes we spend way too much, sometimes way too little.

Robert Wiblin: Exactly. Yeah.

Ben Todd: So just an argument against efficiency.

Robert Wiblin: Yeah. And so it’s an argument that if you thought about it a lot, and really tried to estimate the likelihood of these things concretely, then maybe you could get a pretty big edge there.

Ben Todd: Yeah, that makes sense. So then the question is, “What are our overall thoughts on how big the differences in scale are?” We tried to survey different leaders of organizations in the effective altruism community last year, and we asked them: at the current margin, if you had $1 to donate, how would you trade off between donations to the EA Long Term Future Fund and the EA Global Health Fund? And the median response was that the long term fund was about a factor of 20X more effective, though with huge differences in opinion around that, an order of magnitude on each side. So that suggests they thought there was, on average, about a factor of 20 difference. I should say that that group of people was probably selected for being more interested in longtermism, so that maybe biases things in the longtermist direction.

Ben Todd: But then yeah, my guess is that many… We didn’t actually ask this question, but I would guess that, say, if you just picked a random median US focused charity and compared it to the EA Global Health Fund, people would think that the Global Health Fund was about a factor of a hundred times more effective, which is partly based on the beneficiaries being a hundred times poorer. Also, I think GiveWell now says that they think their top charities tend to have a five or 10x multiplier compared to just giving the cash, and that maybe gets you to the factor of a hundred difference. And so then if we put those two super speculative figures together, you’d roughly get to a factor of 2,000 difference between a median US charity and the EA Long Term Future Fund, and that would be a rough estimate of the total spread.
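The chained arithmetic there, made explicit; both multipliers are Ben’s explicitly speculative estimates, not measured figures:

```python
# Chaining Ben's two (very speculative) multipliers.
us_charity_to_global_health = 100  # median US charity -> EA Global Health Fund:
                                   # beneficiaries ~100x poorer, helped along by
                                   # GiveWell's estimated 5-10x multiplier over cash
global_health_to_longterm = 20     # Global Health Fund -> Long Term Future Fund:
                                   # median answer from the EA leaders survey

total_spread = us_charity_to_global_health * global_health_to_longterm
print(total_spread)  # -> 2000, the rough estimate of the total spread
```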

Ben Todd: Though, as I say, that’s very controversial. And that’s maybe one of the biggest uncertainties within our ranking: how does, say, nuclear security compare to global catastrophic biorisks? Is there actually only a slight difference between them, or is biorisk actually 10x or even 100x more effective?

Ben Todd: Either way. Yeah. Even if there was only a factor of 100 difference in effectiveness between issues, that would be an absolutely crucial consideration for choosing a career, because again, it would be like in one year, you could achieve as much as a hundred years of work in this other area, at least if you kind of treat all those years as equally effective. And so it’s really, really important to take that seriously.

Ben Todd: As we’ll come onto, your personal fit is also really important and can vary a lot between different areas, and so sometimes personal fit can outweigh the differences between areas, and it could be better to work on an area that you think is a bit less important in general, at the margin, but where you have a really unusual personal fit. But I think we also see people who intellectually agree that maybe there are these large differences, but then when it comes to actually making decisions, they treat all these areas as roughly similar, because that’s how our intuitions work. Our intuitions do not handle these massive differences in effectiveness very well at all. And so you really have to try to resist that tendency, and it can help to try to put specific numbers on things and think it through.

Robert Wiblin: All right. Is there anything that we haven’t covered already, before we move on from this issue of what are the world’s most pressing problems?

Ben Todd: Yeah, so we’ve covered our view of which problems are most pressing. And we’ve been really explaining how that’s driven by longtermism, and the other philosophical positions we mentioned earlier. But I would just say that I think actually many people with slightly different focus areas could get behind many of these issues, even if they’re not, say, as focused on longtermism as us.

Ben Todd: I think a wide variety of moral positions can agree that nuclear war would be very bad. And also, actually, nuclear security seems pretty neglected still compared to many more conventional issues. As a society, we talk about climate change way more than we talk about the risk of nuclear war, even though we could literally all go up in flames at any moment. It’s actually a very big problem. So you don’t need to be into this very full on longtermism to think maybe that could be one of the most pressing problems to work on.

Ben Todd: Likewise, the whole idea of global priorities research and building the effective altruism community is that we are very unsure about which problem is most pressing. And so there’s room for people with all kinds of different values and backgrounds to work together on that project of figuring out which problems are most pressing, building a community that just wants to work on whichever issues turn out to be most important, based on the arguments.

Robert Wiblin: Yeah. Every so often we do a back of the envelope calculation of how much it would cost to save a life by trying to prevent nuclear war, or doing other things to reduce catastrophic risks; that is, lives of people alive now. And it seems it’s kind of competitive with other opportunities to save lives through really cheap healthcare and things like that. But it’s very, very speculative, obviously.

Ben Todd: Yeah. Probably depends a lot on the specific existential risk as well.

Robert Wiblin: Yeah. Like whether you can get a lot of leverage there.

Which careers effectively contribute to solving these problems? [01:11:10]

Robert Wiblin: All right, let’s push on. So we’ve talked about what problems we might suggest that people work on solving. The next step might be to think about what are the methods that you can use to get a lot of leverage in the world, to have a lot of influence, in order to solve a large fraction of these problems. Yeah. What do we talk about in that section?

Ben Todd: Yeah, so the next section now gets into concretely which careers you could actually take to help with these issues. Supposing you roughly share our view of global priorities, you’d at least want to consider working on some of these issues. Then what might you actually do to help with them?

Ben Todd: And I should just say, by way of a caveat: people really focus a lot on our particular ranking of things, but these are really just ways to help you brainstorm things that might be high impact. You then need to run them through our full decision process, consider your individual circumstances, your personal fit, and other options that might be really promising but aren’t on our list, and actually come to your individual decision. We’re not providing a kind of all-considered ‘these are the best careers for everyone’ list. That’s unfortunately not possible.

Robert Wiblin: Yeah. You’re all too different. If only you were all identical widgets, then we could do that, but sadly not.

Ben Todd: So yeah, these are just ways to brainstorm ideas, and so we start with what we call our five categories. These are very broad categories that can help you generate options. I’m not going to go through them all in detail, but I’ll add a few thoughts on, I think, the mistakes people make when they’re thinking about these areas.

Ben Todd: So the first one is just trying to find a really good nonprofit that’s addressing one of these problems, and working at that. Ideally, a nonprofit that is a bit talent constrained as well, rather than just very heavily funding constrained. And yeah, we list a lot of… We’re trying to list more and more organizations on our job board.

Ben Todd: I think one thing here is that it’s very easy to focus on what gets called the ‘effective altruism organizations’, which are the ones that have an official EA stamp on them, but actually there’s many, many nonprofits addressing some of these issues, and they can all be potentially good to work at as well, so we’d encourage people to think a bit broader about options within that category. And we’re trying to make the list of jobs on the jobs board broader, and we’re now up to almost 300 listed positions, and we’re pushing to get that wider still in the coming months.

Ben Todd: The second one is doing some kind of relevant graduate study, and then going to work on a research question that’s relevant to one of the problems we’ve covered. Going down the grad school route is good if you have a good fit for it, though you’ll want to be quite careful about which area you choose to go into. Some of the areas that we think are the most generally exciting are machine learning and economics, because they’re both… Economics is very relevant to global priorities research, as well as other areas. Machine learning is obviously very relevant to AI safety.

Ben Todd: But then another nice thing about graduate studies in those areas is that they also give you a lot of general backup options. You’re not just punting your career on this one path. But then yeah, there are lots of other areas worth considering: bioengineering, synthetic biology, all the kind of stuff in that area, because of its relevance to biorisks. And then there are all kinds of areas that can be very useful for policy careers: security studies, international relations, even law; war studies, as it gets called at King’s in London; a master’s of public policy; and again economics, but more with that policy focus. And so you can do research that’s relevant either to the broad longtermist interventions, or to taking a policy approach to AI safety or to biorisk. So yeah, those are some of the areas that we’ve been most excited about recently within research.

Ben Todd: The third one then maybe is the one I want to highlight the most, which is government and policy careers in roles that are relevant to these different problems we’ve covered. And I think this is the area that maybe is still, even though we’ve been banging on about it for a while, the most neglected by our readers, and one which I think a lot more people could seriously consider applying to than currently do. It also seems like one of the biggest needs in the community of people addressing these issues, and where there are a lot of roles that might be very high impact.

Ben Todd: I think partly one problem that stops more people going into these areas is that it seems very vague what you’d actually do in this area. But in order to get started, there are actually a couple of very concrete routes. One is think tank internships and research positions, and we’ve got a profile about those.

Ben Todd: In the US, you’ve got joining a political campaign, and you can often do that pretty much straight from undergraduate. There’s trying to work as a congressional staffer, and then there’s doing graduate study in a relevant area. In the US as well, it’s pretty common to do a master’s of public policy and then go into policy from that, or to do a law degree. I call these the entry policy positions: they’re things that generally establish you and get you a network, and then you can go from them into many different positions. Oh, and the other one is you can just try to get directly into the executive branch, or the White House, and get positions in those straight away. Sometimes you can leap ahead more quickly through these accelerator programs. So those are kind of how you’d establish yourself.

Ben Todd: Another big misconception we come across is people think you have to be a real people person in order to work in policy. But actually, there’s just a really wide range of roles in this area, and there’s definitely a lot of roles for more research oriented people, who aren’t just people with really good social skills.

Robert Wiblin: It’s not all kissing babies. Some of them are just shuffling paper.

Ben Todd: There are obviously lots of roles where it is really important to have good social skills, but you don’t necessarily need them.

Ben Todd: Another big thing is people don’t realize that you can enter a lot of these roles directly from undergraduate. You can make a round of applications initially, and if that doesn’t work, you can go to graduate school, or gain some other type of career capital, and try again later. But yeah, you don’t need to have a policy degree. We sometimes find people who have already got a PhD, but think they need to do a master’s of public policy in order to work in policy. But actually, if you’ve got a PhD, you can just go straight into policy positions. In fact, you don’t even need the PhD.

Ben Todd: And so yeah, one thing we were speculating might be going on here is that after you’ve spent six or seven years doing this very narrow PhD thing, it’s very hard to imagine that someone would just let you loose to set policy and figure that out, but that is actually how the world works sometimes.

Robert Wiblin: It is odd, isn’t it?

Ben Todd: One other caveat about policy is it is probably easier than average to accidentally make things worse in policy. And we’re going to come onto doing accidental harm later in the episode.

Ben Todd: Okay, so fourth path?

Robert Wiblin: Yeah. So then we’ve got direct work, research, policy and government.

Ben Todd: Yeah, yeah.

Robert Wiblin: Fourth one.

Ben Todd: The fourth one is just if you already have a strong skill, then obviously consider how you might be able to apply that to one of the most pressing problems. And that’s just a bit of a catchall. Our categories are most relevant to people who are early in their career, and they have a lot of flexibility over what they end up doing. If you already have some very well developed skill, it’s harder for us to give general advice, but there may well be a great way to apply that to one of these issues. And so if you’re in that category, unfortunately you’re going to have to do a bit more work to go and meet people in the problem area that you’re focused on, and try and ask them what you could do with the skills that you have.

Robert Wiblin:: I think during the Ebola crisis it turned out that anthropologists were really important because they understood burial practices and how you might be able to shift burial practices to reduce the transmission of Ebola. It’s just pretty random, and you wouldn’t necessarily think anthropology is going to be the top thing to study.

Ben Todd: To help with biorisk, yeah.

Robert Wiblin: Yeah. To help with biorisk. But as it turned out, it was really valuable just having an expert in it. Maybe you want to have a few experts around in all kinds of different areas, because you don’t know what the future holds.

Ben Todd: Yeah. That’s a nice example of how a wide variety of skills can be applied to these pressing problems. And it’s also an illustration of how, within longtermism, and effective altruism more broadly, we really need people who share that kind of approach but also really understand all the different academic disciplines that might ultimately be applicable. I mean, maybe medieval poetry or something, it’s a bit harder to see how that’s going to turn out to be the crucial thing, but–

Robert Wiblin: Yeah, the future’s unpredictable, but not quite that unpredictable.

Ben Todd: Yeah, but then there’s still a very wide range of things that are needed. And one thing that has, I think, been on some people’s minds at the Global Priorities Institute is that there are very few people with a strong history background who are interested in effective altruism. But actually, when we try to study longtermism, often you really want to be looking at historical analogs. There are questions like, “Could the Industrial Revolution have gone differently?” There are questions about, “Has welfare gone up or down over time?” These are basically questions in history that are really interesting and useful, but we haven’t found anyone to study them.

Ben Todd: And so that’s just one example of an area that’s not normally considered a classic thing that we want lots of people to do. But it’d be really great if there were some people doing that path.

Robert Wiblin: Yeah. So what about earning to give?

Ben Todd: Yes. So then we have that as the fifth category. I like to think of earning to give as deliberately earning more than you otherwise would have, and then using that to donate more. So you don’t necessarily have to be in a super high earning career; anyone can earn to give to some degree by choosing a higher earning option and donating the extra. And the key thing here is just that money is always useful for helping with the most pressing problems.

Ben Todd: Even if you’re not sure how to use money right now, you can always save money and grow it and have more later. And then when opportunities do arise, you can use it. But actually, I think there are just great places for donors to give right now. If you’re not sure about doing your own research, you can donate to the EA Long Term Future Fund, or one of the other funds. And so yeah, I think earning to give is still an interesting option, because it’s a way for anyone to convert their skills into resources that are working on the world’s most pressing problems. If you’re going to follow that option, then I’d say also try to do it in a path that gets you good career capital, so that you also have the option of switching later.

Ben Todd: Yeah. One of the reasons why we highlight it a bit less than we did in the past is that over the last, say, five years, there has been a lot more money going into some of these longtermist focus areas, especially from the Open Philanthropy Project. And that has created a bit of an overhang, where it seems like a really key bottleneck is people able to do research, entrepreneurship and policy within these pressing areas, and slightly less so money. Though again, having said that, there are still uses for additional money. For instance, the Open Philanthropy Project often only wants to fund about half of the budget of an organization, because they don’t want that org to have to depend only on them. And so that means that you at least