Artificial intelligence, broadly defined, is an effort to replicate the cognitive capacities of humans and other biologically evolved animals via computer algorithms. On the path from simple calculators to human-level artificial intelligence (AI) and beyond, many ethical and societal issues will arise and likely hinder the development of AI. The most concerning of these might be the existential threat that AI and its potentially superior successors will pose to humankind. Our relationship with future AIs could be akin to our relationship with apes and horses: we look down on the former as inferiors and use the latter for our benefit and entertainment. Our fate as a species might one day depend on AI overlords who could have evolved, or been designed, to be benevolent or malevolent towards us. Still, as many proponents and opponents of AI might agree, there is a long road ahead of us before we have AIs superior to humans. Until then our concern should be twofold: first, how to ensure that the resulting superintelligent agents or entities will do us no harm, and second, how to prevent a one-sided, winner-take-all scenario in which all the gains from AI go to the few who got it right first. In this essay, the path to superintelligence will be explored, but the intermediate stages between the current state of AI and superintelligence will also get their fair share of scrutiny.

Intro

Before efforts to create human-level general artificial intelligence come to fruition, a whole host of narrow and extremely specialized AIs will likely disrupt our way of living and could change the fabric of societies in irremediable and unforeseen ways. Specifically, businesses leveraging narrow AIs could reap wildly disproportionate gains while in effect eliminating the role of human labor in the current economic system (Hughes, 2004). Similarly, governments’ activities around AI, and its physical embodiments such as robots or autonomous vehicles, should also be under watch. In their case, the threat is to global security and peace. Imagine that United States government agencies manage to create the ultimate war machine with an unclear degree of autonomy. This AI-powered machine could be directed toward the goal of eliminating enemy troops or facilities, but left on its own to come up with the means of accomplishing such a horrific task. Such hypothetical scenarios should make one threat clear: unethical human organizations armed with the power of sub-human-level AIs could bring an end to our world much earlier than a superintelligent AI born in a mad scientist’s lab!

Singularity

Computer hardware and software are advancing at rates far beyond those of any other man-made technology or tool. This might be due to their extensible nature: improvements accumulate over generations and lead to better, faster, and more efficient computing machines. However, the exponential growth in the information-processing capacity of computers has not resulted in exponential growth in the productivity of the human workers who use them. Simply put, we are not capable of utilizing the full potential of computers. We have our biological brains, evolved over millions of years, which to date are more intelligent and far more versatile than any AI. Machines, on the other hand, seem to improve at a much faster rate. They do not yet possess their own goals and intentions; they need humans to design their circuitry and software. But the question to ask here is: can we keep up with the rate of improvement of machines? Or must we one day create machines that in turn build better machines? If the current trend continues, there will be an intelligence explosion once artificially intelligent machines gain the capability to design and build better machines. Some call this scenario the Singularity (Chalmers, 2010).

The last invention

Good was the first person to formally assess the possibility of machines far surpassing humans in all intellectual capacities, and the resulting consequences (Good, 1965). The idea is that if we do one day create such an ultraintelligent machine, one of its capabilities would be creating ever more intelligent machines itself. Such a machine could create AIs that explore dimensions and varieties of intelligence that humans have not even thought of. This would lead to a so-called intelligence explosion. Homo sapiens would be left far behind and, needless to say, most likely useless thereafter. Hence, inventing this ultraintelligent, AI-sprouting machine would be the last invention that we ever need to make.

Humans Need Not Apply

Will Singularity happen?

Perhaps the most crucial aspect of this question lies in our motivation, collectively as a human society, to create ever more capable machines. We created computers to help us accomplish tasks faster and more efficiently. Likewise, we create intelligent machines so that they can accomplish tasks on their own far better than we could ever do by ourselves or with the help of tools like computers. Economic pressure alone will drive the pace of AI development toward ever more efficient and cost-effective processes, factories, operations, transactions, and so on. Across many industries, work that traditionally required hundreds of workers for weeks can now be delegated to a single industrial robot for a fraction of the cost and in a matter of hours. Robots, intelligent software programs, virtual assistants: no matter what we call them, or what shape and form they take, they are essential to the industrial and digitalized world in which we live. So far, the benefits of intelligent machines have consistently outweighed their risks and costs. If this upward trend continues, nothing can stop us from creating ever better machines, short of a global catastrophic event capable of destroying the whole human race.

Is Superintelligence technically feasible?

Human-level intelligence was obviously attainable, though not through design. Indeed, direct design is proving to be an unlikely method of building very complex systems like our brains or future AIs. Given the right circumstances, we could create a suitable environment in a virtual, simulated world and allow AI to evolve on its own, but at a much faster rate than biological evolution and with some high-level supervision to avoid wasting resources and time. If we believe that the human brain is nothing but organized matter, then there is no apparent reason why we could not replicate such a structure or system in silicon or some other artificial host for intelligence. It is also quite unlikely that we are anywhere close to the peak of intelligence. As seen in exceptional individuals throughout history, certain aspects of our cognitive capacities can develop in ways far superior to those of the average human. Intelligent man-made or self-evolved machines will be limited only by the amount of resources available to them. With the right architecture and conditions, superintelligent machines could reach such a high level of intelligence that we would look like ants to them.

Recipe of Obliteration

In the second part of this essay we will look at some of the essential ingredients of a catastrophic AI-takeover scenario. The assumption here is that the development of human-level artificial intelligence is within humanity’s reach. Thereafter, the emergence of subsequent generations of AI, which we will call AI+ and AI++[1], is primarily a matter of time and external conditions. In a sense, any rate of improvement upon the existing state of computer hardware and software is enough to eventually get us to AI. Consequently, for AI to surpass the limits of what humans can collectively achieve, it may be enough to deploy many instances of the same AI program on supercomputers and watch them make more advances in a matter of days or weeks than the entire human race has accomplished in millennia. Through automation, AI will penetrate every aspect of our everyday lives, not just cognitively demanding tasks. One potent example is autonomous or driverless vehicles replacing millions of drivers. Once a company has got it right and designed the perfect autonomous car (or at least one better than human drivers), it will be able to deploy millions or billions of its horseless (to be literal, human-less) carriages and reap the rewards indefinitely. The disproportionately large benefits of AI and the massive scalability that comes with automation make it simply too attractive not to pursue. In other words, the coming of AI is inevitable.

The Host

The powerful force of AI should be handled with extreme care; its motives and goals had better be aligned with ours, or we will be flies to AI’s tornado. The intelligence and intellectual capacity of AI alone cannot pose any serious threat to earthlings until coupled with a permissive operational framework and the physical infrastructure to influence the real world. Obviously, an encapsulated AI in isolation is of no use to us; we must be able to interact with it to reap its benefits. For any entity to take part in economic and legal transactions, it needs to be a natural or legal person. While it is not completely out of the question to eventually assign personhood to AI agents, it is more likely that organizations leveraging artificial intelligence technology will become so dependent on it that they are in effect controlled by the AI(s). These organizations could eventually turn into puppets of their own slave. Full-stack or end-to-end companies are especially ripe for an AI takeover. These organizations achieve higher efficiency by controlling the entire value chain, from manufacturing and product development to customer experience. They are opaque to regulators’ eyes and often have powerful lobbying and marketing arms that reshape regulations and social perception for their own benefit. The inertia of such mega-companies makes them very difficult for governments to control, thus creating the perfect conditions for an eventual hijacking by a superintelligent machine.

Internet of Zombies (IoZ)

The other major ingredient for doomsday is the physical reach and connectivity of AI to the real world. The Internet of Things (IoT) movement is the ultimate manifestation of this concern. In a world where everything from traffic lights and autonomous cars to nuclear warheads is connected to the internet or some form of network, any evil entity powerful enough to bypass the security safeguards could do irrecoverable damage. In a way, we are creating the perfect conditions for our own obliteration.

Socio-economic Singularity

Let us look at where our world is heading: a society heavily dependent on automation and AI to meet ever-growing demand for cheaper goods and more cost-effective services; a population with diminished morale due to widespread unemployment; a massive divide between the super-wealthy minority and the poor, struggling majority; corrupt political institutions at the service of those with riches and power. All of these conditions culminate in a dystopian future in which planet Earth has become like a weakened host body ripe for takeover by a parasite, waiting for a malicious AI to bring an end to life on Earth as we know it.

Future of AI and Universal Basic Income (UBI)

Transition to a Post-Singularity Society

Artificial intelligence could be the ultimate solution to all our problems. Indeed, no human being might ever have to struggle with pain and misery after the advent of AI. Humans could continue life in a symbiotic relationship with AI and eventually even unite with it through mind uploading or intelligence-amplifying brain implants. None of this is going to happen overnight. The important task at hand is to prepare society for a graceful transition to the next stage in the evolution of intelligent life forms. To survive the singularity, we need to thoughtfully weigh all the options against their risks and benefits. The path to a prosperous post-singularity society does not rely on technological advancements alone. All the stakeholders should cooperate and stay united against the threat of an AI Armageddon. This could be achieved by forming industry-wide AI ethics alliances [2], increasing public awareness of the benefits and hazards of AI and automation in general, enacting careful and forward-thinking regulations, and, most importantly, establishing a fair system of wealth distribution and value creation. Whether we want to face it or not, a huge number of jobs will be completely obliterated through automation and applications of AI. Millions, and gradually billions, of members of the global workforce will not have the opportunity or ability to adapt to the requirements of newly created jobs. They are not temporarily unemployed but in fact unemployable labor in the age of automation and AI. It is society’s collective burden to help these vulnerable members, who are in a sense sacrificed for the continued progress and collective prosperity of the world. A universal basic income (UBI) could address the unemployment crisis ahead of us in the near future.
A fixed sum would be paid to all the citizens of a country, adjusted to afford the standard of living in each region, to help those in need cope with their lack of sustainable income and a rapidly changing job market. There are many issues surrounding the topic of UBI, ranging from its impact on individuals’ sense of dignity after long periods of unemployment to the massive additional cost to taxpayers and the burden on governments of providing all citizens with a livable income. Discussing these issues here would be beyond the scope of this essay. Artificial intelligence could destroy jobs, but at the same time it could save taxpayers and governments alike huge sums of money. For instance, autonomous cars are expected to decrease the number of accidents significantly and reduce the cost of transport by about three trillion dollars every year in the United States alone (Frazzoli, 2014). Such cost savings could be redirected to fund UBI, or, if not managed fairly, make a few billionaires the first trillionaires of our era!
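To get a rough sense of whether the cited savings are even in the right ballpark, consider a back-of-envelope calculation. The ~$3 trillion annual savings figure comes from the essay (Frazzoli, 2014); the UBI amount of $1,000 per month and the US adult population of roughly 250 million are illustrative assumptions of my own, not claims from the source.

```python
# Back-of-envelope check: could ~$3 trillion/year in transport savings
# (the figure cited in the essay) plausibly fund a nationwide UBI?
# The per-person stipend and population below are illustrative assumptions.

annual_savings = 3_000_000_000_000    # ~$3 trillion/year (cited figure)
ubi_per_person = 12_000               # hypothetical $1,000/month stipend
adult_population = 250_000_000        # rough US adult population (assumed)

total_ubi_cost = ubi_per_person * adult_population
coverage = annual_savings / total_ubi_cost

print(f"Total UBI cost: ${total_ubi_cost:,}/year")   # $3,000,000,000,000/year
print(f"Savings cover {coverage:.0%} of that cost")  # 100%
```

Under these assumed numbers the savings would just about cover the stipend; a more generous UBI or a different savings estimate changes the picture quickly, which is precisely why the distribution question in the essay matters.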