This page features a rough depiction of different views on the question of whether artificial general intelligence (AGI) will take off in a "hard" way (fast, no time for response or competition) or a "slow" way (more gradual, more time to integrate with society, possibility of competing projects). I plot these views against a very crude estimate of how long each forecaster has worked on commercial software (not counting academic computer science) as of 2014, when this graph was first created.

Caveats

My sample is not at all random. It features mainly those people whose views on the hard/soft-takeoff question I know best.

This comparison doesn't prove which side is right. It could be that people with more in-the-trenches experience are less inclined toward big-picture thinking. Roman Yampolskiy wrote regarding my graph: "Sometimes being too close to something prevents you from actually seeing the big picture. Every time I ask an Uber driver if they are worried about self-driving cars I either get a 'no' or they have no idea what I am talking about. Every time I ask a room full of accountants if they see blockchain and smart contracts as a threat I get blank stares."

It's not clear that software experience beyond a few years provides much additional insight, so maybe the right skew in the graph isn't actually very relevant.

It's not clear that commercial rather than academic work is the best kind of experience. The main distinction I wanted to capture was between people who build concrete, real-world systems (whether in academia or industry) vs. those who analyze AI scenarios mathematically or philosophically. By focusing on only commercial software experience, I chose a variable that would be more objective at the expense of being less relevant. Another weak reason to focus on commercial software is that academic software systems are not always built cleanly, robustly, scalably, and with extensive real-world use in mind, though the variance from project to project is high.

Finally, and most importantly, this graph should not be construed as suggesting that thinking about AGI risk is unimportant. To the contrary, rogue AIs can take off slowly, and in general, I think shaping AGI trajectories is arguably the most important place where altruists can make a difference regardless of takeoff speed. My goal in sparking discussions about hard/soft takeoff is to encourage further thinking about the probable nature of AGI trajectories so that we can more effectively shape them. And it may be that if AGI takeoff is likely to be soft, then talking about this more openly would help avoid "boy who cried wolf" or "the sky is falling" problems down the road.

Further research

It would be interesting to slice these predictions along many other dimensions as well. For instance:

Try a version that includes academic computer science along with industry work.

Include any engineering experience. Mark_Friedenbach argues that "Elon Musk is a more reliable source about the timelines of engineering projects in general" than Ben Goertzel. I may agree, given that Goertzel's timeline predicts AGI development far too soon, though I also think Goertzel has a deeper understanding of cognition than Musk does.

imuli created a plot of hard/soft expectations versus birth year. It omits a few of the more recent additions to my own chart.

It would furthermore be helpful to gather statistically valid data from surveys of AI experts. My graph here is just something I put together in a few hours based on what I already knew offhand. It's thus not particularly trustworthy.

Comparison with overall expert predictions

A more comprehensive collection of AI predictions is reported in "Future Progress in Artificial Intelligence: A Survey of Expert Opinion". One question asked for participants' beliefs regarding a hard takeoff:

Assume for the purpose of this question that [human-level machine intelligence] will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?

The median probability for "2 years" was 10% and for "30 years" was 75%. The mean probability for "2 years" was 19%, with a standard deviation of 24%. If the data were distributed normally (which they probably weren't), this would imply that about 15.9% of participants assigned a probability above 43% (= 19% + 24%, i.e., one standard deviation above the mean). This breakdown largely aligns with what we see in my graph; if anything, the survey respondents may lean even more toward soft takeoffs than the experts in my graph do.
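As a quick sanity check on this arithmetic, here's a minimal Python sketch (the 19% mean and 24% standard deviation are the survey figures above; the normality assumption, as noted, probably doesn't hold) computing the fraction of a normal distribution lying more than one standard deviation above the mean:

```python
import math

mean, sd = 0.19, 0.24  # reported mean and SD for the "2 years" question

# One standard deviation above the mean: 0.19 + 0.24 = 0.43
threshold = mean + sd

# Fraction of a normal distribution above mean + 1*SD is 1 - Phi(1),
# where Phi is the standard normal CDF, computed here via the error function.
frac_above = 0.5 * (1 - math.erf(1 / math.sqrt(2)))

print(f"threshold = {threshold:.2f}")   # 0.43
print(f"fraction above = {frac_above:.3f}")  # ~0.159, i.e., about 15.9%
```

This is just the standard "68-95-99.7" rule: roughly 68.3% of a normal distribution falls within one standard deviation of the mean, leaving about 15.9% in each tail.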

Data used in the graph

Sources for views on takeoff speed

The views of several of the people in the above chart are described in "Hard vs. soft takeoff" on Wikipedia and in my "Thoughts on Robots, AI, and Intelligence Explosion". For further discussion of takeoff speeds in general, see also Michael Anissimov's "Hard Takeoff Sources".

Following are links to sources that describe the views of each person, possibly with quotes from those sources. Note that the most relevant question where takeoff speed is concerned is how quickly human-level AGIs would advance to a super-human level, rather than how quickly humanity reaches human-level AGI. (Nick Bostrom emphasizes the importance of this distinction in Superintelligence, since he believes that human-level AGI may require up to a century but that super-human AGI would probably follow shortly thereafter.) That said, in the absence of further information, I assumed in my chart that because Elon Musk predicts human-level AGI within a few years and is very worried about what will happen thereafter, he also expects a hard takeoff following human-level AGI.

Sources for years worked in software

Following are some rough estimates of how much time the forecasters have spent working on commercial software, although these numbers are bound to ignore relevant distinctions and may omit important information that I couldn't find from a cursory investigation. Let me know if you have corrections or suggested additions. I didn't include Vernor Vinge because I wasn't sure whether or how long he worked on commercial software, but I suspect he might be an outlier relative to the trend in the graph, especially if his academic computer-science work is counted.

Adding more data points

Feel free to write and suggest additional data points to add to the graph, possibly including yourself. This chart is not designed for statistical validity but instead for transparent presentation of many anecdotes. One nice feature of labeling people in a graph is that readers can judge for themselves which data points they trust most and which ones they find dubious. If I were computing statistics on the data (which I'm not), I'd need to be more selective about who was included.

Two conflicting camps

All generalizations are false, but there seem to be roughly two opposed camps on AGI among futurists. The following table summarizes properties that tend to cluster together, although items in a given column are certainly not equivalent:

MIRI and FHI                                         | Humanity+ and other transhumanist organizations
-----------------------------------------------------|-------------------------------------------------------
Relatively high probability on hard takeoff          | Relatively high probability on soft takeoff
AGI has high probability of harming existing humans  | AGI has lower probability of harming existing humans
Most impressed by the power of elegant math          | Most impressed by the power of messy, complex systems
Generally younger (mostly in 20s and 30s)            | Somewhat older (relatively more people in 40s and 50s)

The groups share one thing in common: They both often believe that the other side is very misguided about the nature of AGI and therefore isn't producing useful contributions to the field. Sometimes one side thinks the other is causing active harm (MIRI fears speeding up AGI, and regular transhumanists fear demonizing and alienating AGI researchers).

I personally think both sides get some things right but that each side has its blind spots. There are very smart people in each camp, and they would be better served by breaking down barriers and updating somewhat in each other's directions.