Updated Sept. 12, 2014 to change “GiveWell Labs” to “Open Philanthropy Project,” in line with our August 2014 announcement. Throughout the post, “we” refers to GiveWell and Good Ventures, who work as partners on the Open Philanthropy Project.

This post draws substantially on our recent updates on our investigation of policy-oriented philanthropy, including using much of the same language.

As part of our work on the Open Philanthropy Project, we’ve been exploring the possibility of getting involved in efforts to ameliorate potential global catastrophic risks (GCRs), by which we mean risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (ranging, e.g., from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction). Examples of such risks could include a large asteroid striking Earth, worse-than-expected consequences of climate change, or a threat from a novel technology, such as an engineered pathogen.

In our annual plan for 2014, we set a stretch goal of making substantial commitments to causes within global catastrophic risks by the end of this calendar year. We are still hoping to decide whether to make commitments in this area, and if so which causes to commit to, on that schedule. At this point, we’ve done at least some investigation of most of what we perceive as the best candidates for more philanthropic involvement in this category, and we think it is a good time to start laying out how we’re likely to choose between them (though we have a fair amount of investigative work still to do). This post lays out our current thinking on the GCRs we find most worth working on for the Open Philanthropy Project.

Why global catastrophic risks?

We believe that there are a couple features of global catastrophic risks that make them a conceptually good fit for a global humanitarian philanthropist to focus on. These map reasonably well to two of our criteria for choosing causes, though GCRs generally seem to perform relatively poorly on the third:

Importance. By definition, if a global catastrophe were to occur, the impact would be devastating. However, most natural GCRs appear to be quite unlikely, making the annual expected mortality from natural GCRs low (e.g., perhaps in the hundreds or thousands; more on the distinction between natural and anthropogenic GCRs below). The potential importance of GCRs comes both from novel technological threats, which could be much more likely to cause devastating impacts, and from considering the very long-term impacts of a low-probability catastrophe: depending on the moral weight one assigns to potential future generations, the expected harm of (even very unlikely) GCRs may be quite high relative to other problems.

Crowdedness. Because GCRs are generally perceived to have a very low probability, many other social agents that are normally devoted to protecting against risks (e.g. insurance companies, governments in wealthy countries) appear not to pay them much attention. This should not necessarily be surprising, since much of the benefits of averting GCRs seem to accrue to future generations, which cannot hold contemporary institutions accountable, and to the extent they accrue to present generations, they are distributed very widely, with no clear concentrated constituency that has an incentive to prioritize them. The possibility that a long time horizon may be required to justify investment in averting GCRs also seems to make them a good conceptual fit for philanthropy, which, as GiveWell board member Rob Reich has argued, is unusually institutionally suited to long time horizons. This makes it all the more notable that, with the key exception of climate change, most potential global catastrophic risks seem to receive little or no philanthropic attention (though some receive very significant government support). The overall lack of social attention to GCRs is not dispositive, but it suggests that if GCRs are genuinely worthy of concern, a new philanthropist aiming to address them may encounter some low-hanging fruit.

Tractability. The very low frequencies of GCRs suggest that tractability is likely to be a challenge. Humanity has little experience dealing with such threats, and it may be important to get them right the first time, which seems likely to be difficult. A philanthropist would likely struggle to know whether they were making a difference in reducing risks.
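The expected-mortality point in the importance criterion can be made concrete with a small calculation. The numbers below are purely illustrative (not figures from our investigations): they simply show how a very rare catastrophe can still carry non-trivial expected annual deaths.

```python
# Illustrative expected-value arithmetic; all inputs are hypothetical.
def expected_annual_deaths(annual_probability: float, deaths_if_it_happens: float) -> float:
    """Expected deaths per year from a rare catastrophe."""
    return annual_probability * deaths_if_it_happens

# A hypothetical natural catastrophe with a 1-in-1,000,000 annual chance that
# would kill 1 billion people contributes an expected ~1,000 deaths per year,
# in line with the "hundreds or thousands" range mentioned above.
natural = expected_annual_deaths(1e-6, 1e9)
print(natural)  # roughly 1,000 deaths/year in expectation
```

Weighting potential future generations more heavily amounts to raising the harm term, which is why even very unlikely risks can dominate such comparisons.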

Our tentative conclusion on GCRs as a whole is that the balance of strong performance on the importance and crowdedness criteria outweighs low expected tractability, but we are open to revising that view on the basis of deeper explorations of particularly promising-seeming GCRs.

What we’ve done to investigate GCRs

We have published shallow investigations on both GCRs in general and a variety of specific (potential) GCRs.

We also have an investigation forthcoming on potential risks from artificial intelligence, and we commissioned former GiveWell employee Nick Beckstead to do a shallow investigation of efforts to improve disaster shelters to increase the likelihood of recovery from a global catastrophe. We are still hoping to conduct shallow investigations of nanotechnology, synthetic biology governance (aimed more at ecological threats than biosecurity), and the field of emerging technology governance, though we may not do so before prioritizing causes within GCRs.

Beyond the shallow level, we have done a deeper investigation on geoengineering research and continued our investigation of biosecurity through a number of additional conversations.

Our investigations have been far from comprehensive; we’ve prioritized causes we’ve had some reason to think were particularly promising, often because we suspected a lack of interest from other philanthropists relative to the causes’ humanitarian importance, or because we encountered a specific idea from someone in our network.

We have also made attempts to have conversations with people who think broadly and comparatively about global catastrophic risks. As far as we can tell, most such people tend to be connected to the effective altruist community (to which we have strong ties and which tends to take a strong interest in GCRs). Many of our conversations with such people have been informal, but public notes are available from our conversations with Carl Shulman, a research associate at the Future of Humanity Institute, and Seth Baum, executive director of the Global Catastrophic Risk Institute.

General patterns in what we find promising

The following two general observations are major inputs into our thinking:

“Natural” GCRs appear to be less harmful in expectation.

After a number of shallow investigations, we’ve tentatively concluded that “natural” (i.e. not human-caused) GCRs seem to present smaller threats than “anthropogenic” (i.e. human-caused) GCRs. The specific examples we’ve examined and a general argument point in the same direction.

The general argument for being more worried about anthropogenic GCRs is as follows. The human species is fairly old (Homo sapiens sapiens is believed to have evolved several hundred thousand years ago), giving us a priori reason to believe that we do not face high background extinction risk: if we had a random 10% chance of going extinct every 10,000 years, we would have been unlikely to have survived this long (0.9^30 = ~4%). Note that anthropic bias can make this kind of reasoning suspect, but this reasoning also seems to map well to available data about different potential GCRs, as discussed below (i.e., we do not observe natural risks that appear likely to cause human extinction). By contrast with “natural” risks, anthropogenic risks present us with potentially unprecedented situations, for which history cannot serve as much of a guide. Atomic weapons and biotechnology are only decades old, and some of the most dangerous technologies may be those that don’t yet exist. With that said, some “natural” risks could present us with somewhat unprecedented situations, due to the modern world’s historically high level of interconnectedness and reliance on particular infrastructure.
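The survival-probability arithmetic above can be checked directly. A minimal sketch, using the stylized 10%-per-10,000-years hazard from the text:

```python
# If humanity faced an independent 10% extinction risk every 10,000 years,
# the chance of surviving ~300,000 years (30 such periods) would be 0.9^30,
# i.e. roughly 4%, as stated in the text.
periods = 300_000 // 10_000           # 30 independent 10,000-year periods
survival_probability = 0.9 ** periods
print(f"{survival_probability:.1%}")  # roughly 4%
```

The fact that we did survive is thus (modulo the anthropic-bias caveat) evidence against a background hazard that high.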

On the specifics of various “natural” GCRs:

The only GCRs that receive large amounts of philanthropic attention are nuclear security and climate change.

We do not have precise figures aggregated across causes, but our impression is that climate change is an area in which hundreds of millions of dollars a year are spent by U.S. philanthropic funders, while philanthropic funding addressing nuclear security appears to be in the tens of millions.

We don’t know of philanthropic funding for any of the other GCRs exceeding the single digit millions of dollars per year.

Leading focus area contenders

The leading contenders described below are among the most apparently dangerous and potentially unprecedented GCRs (seemingly – to us – more worrisome than the “natural” GCRs listed above, though such a comparison is necessarily a judgment call). At the same time, all appear to have limited “crowdedness,” at least in terms of philanthropic attention, unlike nuclear security (and unlike most of the climate change space, though one of the contenders described below relates to climate change). They are discussed in the order I would pick between them if I had to pick today, though we have not decided how many we expect to commit to by the end of the year, and other GiveWell staff may disagree. We don’t have high confidence that these represent the correct set, and there are a number of questions (discussed below) that we hope to address before reaching a conclusion at the end of the year.

Biosecurity

By biosecurity, we mean the constellation of issues around pandemics, bioterrorism, biological weapons, and biotechnology research that could be used to inflict great harm (“dual use research”). Our understanding is that natural pandemics (especially flu pandemics) likely present the greatest current threat, but that the development of novel biotechnology could lead to greater risks over the medium or long term. We see this GCR as having a strong case for “importance” because it seems to combine relatively credible, likely, current threats with more speculative potential longer-term threats in a fairly coherent program area. The space receives significant attention from the U.S. government (with ~$5 billion in funding in 2012) but little from foundations: the Skoll Global Threats Fund is the only U.S. foundation we know to be engaging in this area currently, at a relatively low level, though the Sloan Foundation also used to have a program in this area. (We believe the distinction between government and philanthropic funding is at least potentially meaningful, as the two types of actors have different incentives and constraints; in particular, philanthropic funding could potentially influence a much larger amount of government funding.) Although we are not sure of the activities that would be best for a philanthropist to support, many people we spoke with argued that current preparedness is subpar and that there is significant room for a new philanthropic funder.

Although we have had a number of additional conversations since the completion of our shallow investigation, we continue to regard the question of what a philanthropist should fund within this broad issue as an open one. We expect to address it with a deeper investigation and a declared interest in funding.

Geoengineering research and governance

We see a twofold case for the importance of work on geoengineering research and governance:

Although solar geoengineering is in the news periodically, research on the science or governance appears to receive relatively little dedicated funding: our rough survey found about $10 million/year in identifiable support from around the world (mostly from government sources), and we are not aware of any institutional philanthropic commitment in the area (though Bill Gates personally supports some research in the area).

Our conversations have led us to believe that there is significant scientific interest in conducting geoengineering research and that funding is an obstacle, but, as with biosecurity, we do not have a very detailed sense of what we might fund. We take seriously the concern that further geoengineering research could undermine support for emissions reductions, but we regard that outcome as relatively unlikely, and we also find it plausible that further research could contribute significantly to governance efforts.

We expect to address the question of what a philanthropist could support in this area with a deeper investigation and a declared interest in funding. Note that we don’t envision ourselves as trying to encourage geoengineering, but rather as trying to gain better information and governance structures for it, which could make the actual use more or less likely (and given the high potential risks of both climate change and geoengineering, we could imagine that shifting the probabilities in either direction – depending on what comes of more exploratory work – could do great good).

Potential risks from artificial intelligence

We are earlier in this investigation than in investigations of the above two causes, and have not yet produced a writeup. There is internal disagreement about how likely this cause is to end up as a priority; I don’t feel highly confident that it should be above some of the other contenders not discussed in depth here.

In brief, it appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines come to outperform humans in many or nearly all intellectual domains, though it is difficult or impossible to make confident forecasts in this area. Such a scenario could carry great potential benefits, but could carry significant dangers (e.g. technological disemployment, accidents, crime, extremely powerful autonomous agents) as well. The majority of academic artificial intelligence researchers seem not to see the rapid development of powerful autonomous agents as a substantial risk, but to believe that there are some potential risks worth preparing for now (such as accidents in crucial systems or AI-enabled crime; see slides 20-22). However, some people, including the Machine Intelligence Research Institute and computer scientist Stuart Russell, feel that there are important things that should be done today to substantially improve the social outcomes associated with the rapid development of powerful artificial intelligence.

In general, my inclination would be to defer to the preponderance of expert opinion, but I think this area could potentially be promising for philanthropy partly because I have not seen a rigorous public assessment by credible AI researchers to support the (seemingly predominant) lack of concern over risks from the rapid development of powerful autonomous agents. Since this topic seems to be drawing increasing attention from some highly credentialed people, supporting such a public assessment seems like it could be valuable, even if the conclusion is that most researchers are right to not be concerned. The fact that a substantial portion of mainstream AI researchers also seem to think that more traditional risks from AI progress (e.g. accidents, crime) are worth addressing in the near term does increase my interest in the area, though not by much, since I don’t see those issues as GCRs, whereas the rapid development of powerful autonomous agents could conceivably be one. Should we decide to pursue this area further, I would guess that it would be at a lower level of funding than the other potential priority areas described above.

Note from Holden: I currently see this cause as more promising than Alexander does, to a fairly substantial degree. I agree that there are reasons, including the preponderance of expert opinion, to think that there is little preparatory work worth doing today; however, I see the stakes as large enough to justify work in this area even at a relatively low probability of having impact. I would like to see reasonably well-resourced, full-time efforts – with substantial input from mainstream computer scientists – to think about what preparations could be done for major developments in artificial intelligence, and my perception is that efforts fitting this description do not exist currently. We are currently working on trying to understand whether the seeming lack of activity comes from a place of “justified confidence that action is not needed now” or of “lack of action despite a reasonable possibility that action would be helpful now.” My current guess is that the latter is the case, and if so I hope to make this cause a priority.

We will be writing more on this topic in the future.

Why these three risks stand out

Generally speaking, the causes highlighted above (biosecurity, geoengineering, and, pending more investigation, potentially artificial intelligence) seem to us to have:

Greater potential for the most extreme direct harms (extreme enough to make a substantial change to the long-term trajectory of civilization likely) relative to other risks we’ve looked at, with the exception of nuclear weapons (an area that we perceive as more “crowded” than these three).

A risk of such extreme harm in the next 50-100 years that is very difficult to quantify but potentially reasonably high (1%+).

Very little philanthropic attention.
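For intuition about the “1%+ over 50-100 years” figure: assuming (purely for illustration, not a claim from our investigations) a constant, independent annual hazard, a 1% cumulative risk over a century corresponds to roughly a 1-in-10,000 annual probability.

```python
# Convert a cumulative risk over some horizon to a constant annual probability,
# under the simplifying assumption of independent, identically distributed
# annual hazard.
def annual_probability(cumulative_risk: float, years: int) -> float:
    return 1 - (1 - cumulative_risk) ** (1 / years)

p = annual_probability(0.01, 100)
print(p)  # roughly 1e-4, i.e. about 1-in-10,000 per year
```

Such small annual probabilities help explain why these risks attract so little attention from actors with short planning horizons.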

Our guess is that most other candidate risks would, upon sufficient investigation, appear less worth working on than at least one of our top candidates – due to presenting less potential for harm, less tractability, or more crowdedness, while being roughly comparable on other dimensions. That said, (a) the specific assessment of artificial intelligence is still in progress and we don’t have internal agreement on it, as discussed above; (b) we have low confidence in our working assessment, and plan both to do more investigation and to seek out more critical viewpoints on our current priorities.

Topics for further investigation

While I currently see the three potential GCRs discussed above as the leading contenders for GCR focus areas, there are a number of questions we would like to answer before committing.

Our shallow investigations have generated a number of follow-up questions that we would like to resolve before committing to causes:

Our current understanding is that major volcanic eruptions are currently neither predictable nor preventable, making this cause apparently rather intractable. To what extent could further research help remedy these shortcomings, and are there other ways a philanthropist could help address the risk from a large volcanic eruption?

How do risks from comets compare to the remaining risks from untracked near-Earth asteroids? Our understanding is that these risks are likely to be an order of magnitude or two lower than volcanic eruption risks that would cause similar harm, but we aren’t sure how they compare in tractability. What could be done about potential risks from comets?

How credible are existing estimates of the potential harm of geomagnetic storms? In particular, how do experts assess the risks to the power grid from a rare geomagnetic event? How prepared are power companies for geomagnetic storms?

Are there any important gaps in current funding for efforts to improve nuclear security?

In addition, we are still hoping to conduct shallow investigations of nanotechnology, synthetic biology governance (aimed more at ecological threats than biosecurity), and the field of emerging technology governance as a whole, which we think could potentially be competitive with some of the risks described as potential focus areas.