The following is an essay I wrote for a course titled AI: Law, Ethics, and Policy.

In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” The notion of superintelligent AI (“SAI”) makes us anxious; we fear that AI will run out of our control and threaten humanity. This event, deemed the “Singularity,” has sparked existential debates and conspiracy theories, and raised ethical concerns. But is the Singularity achievable? If so, what should we do about it?

Will we achieve the Singularity?

Roy Amara, the founder of the Institute for the Future, coined an adage now known as “Amara’s Law”: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” We have often read headlines like “Self-driving cars will be here in the next 10 years”; essentially the same headline has been appearing since the 1970s. The same holds for runaway AI. Having seen rapid progress in so many other dimensions of our lives, we have come to assume it must hold for AI as well.

The pace of AI progress is not uniform. AI may be good at some tasks, while humans reign superior at many others. We mistake progress in one subfield for progress in general. People who point to the overnight success of deep learning forget that it took some 50 years of research to reach the point it is at today. They also ignore the theory of punctuated equilibrium, borrowed from evolutionary biology, which holds that stable systems tend to stay in stasis, with change arriving in rare bursts. A technological example is the airplane: there have been no major developments since the jet engine, which appeared some 60 years ago.

Moreover, there is no clear path to general intelligence, nor any proposed method to culminate all of known AI progress in a single SAI. Rodney Brooks, arguably the world’s most accomplished roboticist, says we are irrationally optimistic about the speed at which AI will progress, let alone take over the world. In “The Seven Deadly Sins of AI Predictions,” Brooks discusses the pitfalls of predicting AI growth and assures us that even if machines were to take over, we are safe for at least a few hundred years.

What’s the big deal?

Bostrom worries about the “alignment problem”: that an SAI’s goals will not align with ours. He hypothesises an indifferent “paper-clip maximiser” which, in its quest to create paper clips, will harness all the energy of the universe, eradicating humanity in the process. Some fear more vindictive outcomes, as outlined in a 2010 thought experiment on Less Wrong known as “Roko’s basilisk.” According to Roko, an SAI might rationalise and then retroactively punish all humans who did not help bring about its genesis. Max Tegmark, a renowned MIT cosmologist, imagines 12 AI aftermath scenarios in his book Life 3.0, ranging from a Libertarian Utopia, where humans and SAI beings coexist, to Zookeeper, where humans are kept in cages like zoo animals, much as in Kurt Vonnegut’s Slaughterhouse-Five. However, as succinctly put in an April 9, 2018 piece in Document Journal, “Privileged people who fear an AI rebellion always imagine it in exploitative terms that mirror their own ideologies.” It is noteworthy that most of these futurists and doomsday pundits are speculators rather than AI researchers: Bostrom himself, Elon Musk, and Stephen Hawking, for example.

What can we do if AI becomes superintelligent?

Nothing!

Since AI will have already surpassed the cognitive ability of humans, surely we will not be able to beat it at survival chess. Bostrom argues that we would have to tactfully negotiate the terms of our existence. Some are even willing to worship the AI overlords and have already established the Church of Artificial Intelligence in Silicon Valley. They believe that this AI will be the interventionist God that actually “listens” (the one Nick Cave sings about). “Better pet than livestock,” they quip. Creating checks and installing fail-safes and kill switches might spare us a few sleepless nights of existential crisis, but at the dawn of SAI, we will be at the mercy of indifferent gods.

What can we do now?

AI is still prepubescent. We still hold the plug in our hands. Why not give in to Bostrom’s fears and pull it, for the greater good? Sure, Elon Musk donates tens of millions of dollars to mitigate the risks of malicious AI, but he also spends billions on making cars smarter. In the words of Yuval Noah Harari, historian and author of Sapiens, “we will probably make the most profound decisions on the basis of myopic short-term considerations. The future of life on Earth will be decided by small-time politicians spreading fears about terrorist threats, by shareholders worried about quarterly revenues and by marketing experts trying to maximise customer experience.” AI is such a profitable tool that Homo economicus will not let it go. Research will continue, for both good and nefarious purposes. If we impose regulations now, a pro-AI lobby will surely fight the Bostroms of the world with every legal weapon available. If the regulations are extreme, the research might go underground and even be romanticised as an antiauthoritarian movement. We would also forfeit potential benefits such as improved health care.

AI has already benefited our lives in many ways and has the potential to do much more. It is so ingrained in the societal fabric that extreme regulations would be difficult to implement. The logical thing to do is what Grady Booch, a prolific software engineer and one of the original developers of UML, suggests: to inoculate AI with human values so that we can learn to coexist. Transhumanists see a future in which AI and humans are not separate beings but, with the advancement of biotechnology, become one. We do not know what the future of AI will look like, and we cannot guess its intent, so it is better not to descend into scaremongering and assume malevolence on the SAI’s part. By the time AI becomes superintelligent, the world will already have changed radically, for better or for worse.