The impact of openness on both the control problem and the political problem must be analysed. Here we identify three main pathways by which openness in AI development may have such impact or otherwise intersect with long‐term strategic considerations: (1) openness may speed AI development; (2) openness may make the race to develop AI more closely competitive; (3) openness may promote wider engagement.

Openness speeding AI development

In brief – as the following paragraphs elaborate – the fact that openness may speed up AI development seems positive for goals that strongly prioritize currently existing people over potential future generations, and of uncertain sign for impersonal time-neutral goals. On either evaluation, the effect appears relatively weak compared to other strategy-relevant impacts of openness in AI development, because we would not expect marginal increases in openness to have more than a modest influence on the speed of AI development.

Accelerated AI development would increase the chance that superintelligent AI will preempt existential risks stemming from non-AI sources, such as risks that may arise from synthetic biology, nuclear war, molecular nanotechnology, or other risks as-yet unforeseen. This preempting effect depends on the arrival of superintelligent AI actually eliminating or reducing other major anthropogenic existential risks.12 (Whether it does so may depend partly on whether the post-AI-transition world is multipolar or unipolar, a topic to which we shall return below.)

Expedited AI development would give the world less time to prepare for advanced AI, which may reduce the likelihood that the control problem will be solved. One reason is that safety work is likely to be relatively open in any case, and so would not gain as much as non-safety AI work from additional increments of openness in AI research generally. Safety work would thus be slowed relative to non-safety work, making it less likely that a sufficient amount of safety work will have been completed by the time advanced AI becomes possible.10 There are also some processes other than direct work on AI safety that may improve preparedness over time – and which would be given less time to play out if AI happens sooner – such as cognitive enhancement and improvements in various methodologies, institutions, and coordination mechanisms (Bostrom, 2014a).11 (The impact of earlier AI development on the political problem is harder to gauge, since it depends on difficult-to-predict changes in the broader social and geopolitical landscape over the coming decades.)

Earlier AI development would also mean an earlier arrival of AI-enabled benefits. This is important if currently existing people have a strongly privileged status over future generations in one's decision criteria. Since the human population is dying off at a rate of almost 1% per year, even modest effects on the arrival date of superintelligence could have important decision-relevance for such a ‘person-affecting’ objective function (assuming superintelligence would, with substantial probability, dramatically reduce the death rate or improve wellbeing levels) (Bostrom, 2003). Earlier onset of benefits would also be important if one uses a significant time discount factor. (However, making the benefits start earlier is not clearly significant on an impersonal time-neutral view, where instead it looks like the focus should be on reducing existential risk (Bostrom, 2013).)
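The force of this point can be illustrated with rough figures (the numbers below are approximate assumptions for illustration, not taken from the text). With a world population of about $7.5 \times 10^{9}$ and a crude death rate of roughly $0.8\%$ per year,
\[
7.5 \times 10^{9} \;\times\; 0.008 \;\approx\; 6 \times 10^{7} \text{ deaths per year,}
\]
so on a person-affecting view (and granting the assumption that superintelligence would dramatically reduce the death rate), each year by which its arrival is advanced or delayed corresponds to tens of millions of currently existing lives.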

Openness making the AI development race more closely competitive

One weighty consideration is that the final stages of the race to create the first superintelligent AI are likely to be more closely competitive in open development scenarios. The reason for this is that openness would equalize some of the variables that otherwise would cause dispersion in the levels of capability or progress‐rates among different AI developers. If everybody has access to the same algorithms, or even the same source code, then the principal remaining factors that could produce performance differences are unequal access to computation and data. One would therefore expect there to be a larger number of actors with the ability to wield near state‐of‐the‐art AI in open development scenarios (Armstrong et al., 2016). This tightening of the competitive situation could have the following important effects on the control problem and the political problem.

Removes the option of pausing

In a tight competitive situation, it could be impossible for a leading AI developer to slow down or pause without abandoning its lead to a competitor. This is particularly problematic if it turns out that an adequate solution to the control problem depends on the specifics of the AI system to which it is to be applied. If there is some necessary part of the control mechanism that can only be invented or installed after the rest of the AI system is highly developed, then it may be crucial that the developer has the ability to pause progress on making the system smarter until the control work can be completed. Suppose, for example, that designing, implementing, and testing a control solution requires six months of additional work after the rest of the AI is fully functional. Then, in a tight competitive situation, any team that chooses to undertake that control work might simply abandon the lead – and with it, possibly, the ability to influence future events – to some other less careful developer. If the pool of potential competitors with near state-of-the-art capabilities is large enough, then one would expect it to contain at least one team that would be willing to proceed with the development of superintelligent AI even without adequate safeguards. The larger the pool of competitors, the harder it would be for them all to coordinate to avoid a risk race to the bottom.

Removes the option of performance-handicapping safety

A tight competitive situation is also problematic if the mechanisms needed to make an AI safe reduce the AI's effectiveness. For example, if a safe AI runs a hundred times slower than an unsafe AI, or if safety requires an AI's capabilities to be curtailed, then the implementation of safety mechanisms would handicap performance. In a close competitive situation, unilaterally accepting such a handicap could mean forfeiting the lead. By contrast, in a less competitive situation (such as one in which a large coalition has a sizeable lead in technology or computing power), there might be enough slack that the frontrunner could implement some efficiency-reducing safety measures without abandoning its lead. The sacrifice of performance for safety may need to be only temporary – a stopgap until more sophisticated control methods are developed that eliminate the efficiency disadvantage of safe AI. Even if there were inescapable tradeoffs between efficiency and safety (or ethical constraints preventing certain kinds of instrumentally useful computation), the situation would still be salvageable if the frontrunner has enough of a lead to get by with less than maximally efficient AI for a period of time, since during that time it might be possible for the frontrunner to achieve a sufficient degree of global coordination (for instance, by forming a ‘singleton’, discussed more below) to permanently prevent the launch of more efficient but less desirable forms of AI (or to prevent such AI, if launched, from outcompeting more desirable forms of AI) (Bostrom, 2006).

Lowers probability of a small group capturing the future

There are some other consequences of tighter competition in the runup to superintelligent AI that are of more uncertain valence and magnitude, but potentially significant. One such consequence is for the political problem. A tighter competitive situation would make it less likely that one AI developer becomes sufficiently powerful to monopolize the benefits of advanced AI. This is one of the stated motivations for the OpenAI project, expressed, for example, by Elon Musk, one of its founders:

I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower. (Levy, 2015)

Openness may thus make it more likely that many people's preferences influence the future. Depending on one's values and expectations (e.g. one's expectations about which preferences would rule if the future were instead captured by a small group), this could be an important consideration.

Affects influence of status quo powers?

Another consequence for the political problem: openness in AI development may also influence what kind of actor is most likely to achieve monopolization (if such there be) or to achieve a relatively larger influence over the outcome. Access to computing power (and possibly data) becomes relatively more important if access to algorithms or source code is equalized. In expectation, this would align influence over the post-AI world more closely with wealth and power in the pre-AI world, since computing power is fairly widely distributed (including internationally), quite fungible with wealth, and somewhat possible for governments to control – in comparison with access to algorithmic breakthroughs in a closed development scenario, which might be more lumpy, stochastic, and local. The likelihood that a single corporation or a small group of individuals could make a critical algorithmic breakthrough needed to make AI dramatically more general and efficient seems greater than the likelihood that such a group would obtain a similarly large advantage by controlling the lion's share of the world's computing power.13 Thus, if one thinks it preferable in expectation that advanced AI be controlled by existing governments, elites, and ordinary people – in proportion to their existing wealth and political power – rather than by some particular group that happens to be successful in the AI field (such as a corporation or an AI lab), then one might favour a scenario in which hardware becomes the principal factor of AI power. Openness in AI development would make such a scenario more likely.

However, openness would also reduce the economies of scale in AI research labs, and this would favour smaller players, who may be less representative of status quo power. Consider the opposite case: development is perfectly closed, and any would-be AI developer must make all the relevant discoveries and build all the needed components in-house. Unless the successful AI architecture turns out to be extremely simple, this regime would strongly favour larger development groups – the odds of a given group winning the race would scale superlinearly with group size. By contrast, if development is open and the winning group is the one that adds a single final insight to a shared corpus of ideas, then the probability of a given group being the winner might instead scale roughly linearly with size.14 So in scenarios where there is a hardware overhang and an intelligence explosion is triggered by a final algorithmic invention, openness would increase the probability of a small group capturing the future.

Consequently, if larger development groups (such as large corporations or national projects) are typically more representative of, or controlled by, status quo powers than a randomly selected small development group (such as a ‘guy in a garage’), then openness may either increase or decrease the degree of influence status quo powers would have over the outcome, depending on whether hardware or software is the bottleneck. Since it is currently unclear what the bottleneck will be, the impact of openness on the expected degree of control of status quo powers is ambiguous.
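The contrast between linear and superlinear scaling can be made precise with a toy race model (the Poisson-arrival setup and all notation below are illustrative assumptions, not part of the original argument). Suppose each researcher independently generates insights at rate $\lambda$, so that a group of $n$ researchers generates them at rate $n\lambda$, and the race is won by the first group to accumulate the $k$ insights it still needs. In the open regime only the final insight remains ($k = 1$), and a group of size $n$ out of $N$ researchers in total wins with probability
\[
P(\text{win}) \;=\; \frac{n}{N},
\]
which is linear in group size. In the closed regime each group must produce all $k$ insights in-house, so its completion time is $T_n \sim \mathrm{Gamma}(k, n\lambda)$, and for two groups of sizes $n$ and $m$,
\[
P(T_n < T_m) \;=\; I_{\,n/(n+m)}(k, k),
\]
where $I$ is the regularized incomplete beta function. For $k = 1$ this reduces to the linear $n/(n+m)$; as $k$ grows the curve steepens around $n = m$, so a modest size advantage yields a disproportionate probability of winning – the superlinear scaling described above.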

Reduces probability of a singleton

A singleton is a world order in which there is, at the highest level of organization, one coordinated decision-making agency – in other words, a regime in which major global coordination or bargaining problems are solved. The emergence of a singleton is thus consistent both with scenarios in which many human wills together shape the future and with scenarios in which the future is captured by narrow interests. The point that openness in AI development seems to lower the probability of a singleton is therefore distinct from the earlier point that openness seems to lower the probability of a small group capturing the future. One could be against a small group capturing the future and yet for the formation of a singleton.

There are a number of serious problems that can arise in a multipolar outcome that would be avoided in a singleton outcome. One such problem is that it could turn out that, at some level of technological development (and perhaps at technological maturity), offence has an advantage over defence. For example, suppose that as biotechnology matures, it becomes inexpensive to engineer a microorganism that can wreak havoc on the natural environment while it remains prohibitively costly to protect against the release and proliferation of such an organism. Then, in a multipolar world with many independent centres of initiative, one would expect the organism eventually to be released (perhaps by accident, perhaps as part of a blackmail operation, perhaps by an agent with apocalyptic values, or maybe in warfare). The chance of avoiding such an outcome would seem to decrease with the number of independent actors that have access to the relevant biotechnology. This example can be generalized: even if offence will not have such an advantage in biotechnology, perhaps it will in cyberwarfare, in molecular nanotechnology, in advanced drone weaponry, or in some other as-yet unanticipated technology that would be developed by superintelligent AIs. A world in which global coordination problems remain unsolved even as the power of technology increases towards its physical limits is a world that is hostage to the possibility that – at some level of technological development – nature too strongly favours destruction over creation. From the perspective of existential risk reduction, it may therefore be preferable that some institutional arrangement emerges that enables robust global coordination. This may be more tractable if there are fewer actors initially in possession of advanced AI capabilities and needing to coordinate.

The possibility that offence might have an inherent advantage over defence is not the only concern with a multipolar outcome. Another concern is that, in the absence of global coordination, it may be impossible to forestall a population explosion of digital minds and a resulting Malthusian era in which the welfare of those digital minds may suffer (Bostrom, 2004, 2014a; Hanson, 1994). Independent actors would have strong incentives to multiply the number of digital workers under their control to the point where the marginal cost of producing another one (including electricity and hardware rental) equals the revenue it can bring in by working maximally hard. Local or national legislation aimed at protecting the welfare of digital minds could shift production to jurisdictions that offer more favourable conditions to investors.
This process could unfold rapidly, since software faces fewer barriers to migration than biological labour, and the information services it provides are largely independent of geography (though subject to latency effects from long-distance signal transmission, which could be significant for digital minds operating at high speeds). The long-run equilibrium of such a process is difficult to predict, and might be primarily determined by choices made after the development of advanced AI; but creating a state of affairs in which the world is too fractured and multipolar to influence where this process leads should be a cause for concern – unless one is confident (and it is hard to see what could warrant such confidence) that the programs with the highest fitness in a mature algorithmic hyper-economy are essentially coextensive with the programs that have the highest level of subjective well-being or moral value.
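The Malthusian condition mentioned above can be written out as a simple competitive-equilibrium sketch (the notation is illustrative and not from the text). Let $w(N)$ be the revenue a digital worker can earn per unit time when $N$ such workers compete for work, with $w$ decreasing in $N$, and let $c$ be the marginal cost of running one more worker (hardware rental plus electricity). Investors keep adding workers whenever $w(N) > c$, so the population expands to the level $N^{*}$ at which
\[
w(N^{*}) \;=\; c .
\]
At that point each worker's earnings just cover its running costs, leaving no surplus that could be devoted to the workers' own welfare – the Malthusian era the text warns of.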