(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)

The good news is that there were no Singularity Wars.

The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You'd expect to see something like the People's Front of Judea and the Judean People's Front, burning each other's grain supplies as the Romans moved in.

The Singularity Institute for Artificial Intelligence was founded first, in 2000, by Eliezer Yudkowsky.



Singularity University was founded in 2008. Ray Kurzweil, the driving force behind SU, was also active in SIAI, serving on its board in varying capacities in the years up to 2010.



SIAI's multi-part name was clunky, and their domain, singinst.org, unmemorable. I kept accidentally visiting siai.org for months, but it belonged to the Self Insurance Association of Illinois. (The cool new domain name singularity.org, acquired after hosting a rather uninspired site for several years, arrived shortly before it was no longer relevant.) All the better to confuse you with, SIAI has been going for the last few years by the shortened name Singularity Institute, abbreviated SI.



The annual Singularity Summit was launched by SI, together with Kurzweil, in 2006. SS was SI's premier PR mechanism, mustering geek heroes to give their tacit endorsement of SI's seriousness, if not its views, by agreeing to appear on-stage.

The Singularity Summit was always off-topic for SI: more SU-like than SI-like. Speakers talked about whatever technologically advanced ideas interested them. Occasional SI representatives spoke about the Intelligence Explosion, but they too would often stray into other areas like rationality and the scientific process. Yet SS remained firmly in SI's hands.

It became clear over the years that SU and SI have almost nothing to do with each other except for the word "Singularity." The word has three major meanings, and of these, Yudkowsky favored the Intelligence Explosion while Kurzweil pushed Accelerating Change.



But actually, SU's activities have little to do with the Singularity, even under Kurzweil's definition. Kurzweil writes of a future, around the 2040s, in which the human condition is altered beyond recognition. But SU mostly deals with whizzy next-gen technology. They are doing something important, encouraging technological advancement with a focus on helping humanity, but they spend little time working on optimizing the end of our human existence as we know it. Yudkowsky calls what they do "technoyay." And maybe that's what the Singularity means, nowadays. Time to stop using the word.



(I've also heard SU graduates saying "I was at Singularity last week," on the pattern of "I was at Harvard last week," eliding "University." I think that that counts as the end of Singularity as we know it.)



You might expect SU and SI to get into a stupid squabble about the name. People love fighting over words. But to everyone's credit, I heard no squabbling, just confusion from those who were not in the know. Or you might expect SI to give up, change its name, and close down the Singularity Summit. But lo and behold, SU and SI settled the matter sensibly, amicably, in fact ... rationally. SU bought the Summit and the entire "Singularity" brand from SI -- for money! Yes! Coase rules!



SI chose the new name Machine Intelligence Research Institute. I like it.



The term "Artificial Intelligence" got burned out in the AI Winter of the early 1990s. The term has been firmly taboo since then, even in the software industry, even at the leading edge of the software industry. I did technical evangelism for Unicorn, a leading industrial ontology software startup, and the phrase "Artificial Intelligence" was most definitely out of bounds. The term was not used even inside the company. This was despite a founder with a CompSci PhD and a co-founder with a master's in AI.

The rarely used term "Machine Intelligence" throws off that baggage, and so SI managed to ditch two taboo words at once.



The MIRI name is perhaps too broad. It could serve for any AI research group. The Machine Intelligence Research Institute focuses on decreasing the chances of a negative Intelligence Explosion and increasing the chances of a positive one, not on rushing to develop machine intelligence ASAP. But the name is accurate.



In 2005, the Future of Humanity Institute at Oxford University was founded, followed by the Centre for the Study of Existential Risk at Cambridge University in early 2013. FHI is doing good work, rivaling MIRI's and in some ways surpassing it. CSER's announced research area, and the reputations of its founders, suggest that we can expect good things. Competition for the sake of humanity! The more the merrier!



In late 2012, SI spun off the Center for Applied Rationality. Since 2008, much of SI's energies, and particularly those of Yudkowsky, had gone to LessWrong.com and the field of rationality. As a tactic to bring in smart, committed new researchers and organizers, this was highly successful, and who can argue with the importance of being more rational? But as a strategy for saving humanity from existential AI risk, this second focus was a distraction. SI got the point, and split off CFAR.



Way to go, MIRI! So many of the criticisms I had of SI's strategic direction and its administration when I first encountered it in 2005 have been resolved in recent years.

Next step: A much much better human future.

The TL;DR, conveniently at the bottom of the article to encourage you to actually read it, is: