Last Updated on June 17, 2019 by Henry John

There is a fear of Artificial Intelligence; a fear that it will one day literally take over the world. People see Artificial Intelligence becoming superintelligent and slipping out of our control. Or rather, taking over control.



Prominent figures have come out to express their fear of Artificial Intelligence, most notably Professor Stephen Hawking and Elon Musk. These are people close to the cutting edge of artificial intelligence; in some sense, I will say they know what they are talking about.



And I think it’s not entirely fear. To a large extent, it’s people being reasonably cautious. Musk, for instance, runs Tesla, an electric car company that is incidentally in the race for self-driving (Artificial Intelligence) vehicles. He is not avoiding Artificial Intelligence or insinuating that we stop developing it. Is he?



I’d take ‘no, he is not’ as my answer.



Musk and company fear that if we develop Artificial Intelligence as fast as we currently are, without precaution, we may not really understand it. And one cannot truly control what one does not understand.

With no control over Artificial Intelligence, AI systems may well become self-willed.



“And in the future, AI could develop a will of its own – a will that is in conflict with ours.” – the late Prof. Stephen Hawking



And doomsday begins.

Artificial Intelligence vs Human Intelligence

In the decades since the term Artificial Intelligence was coined, AI systems have both failed to live up to expectations and defied them.

Artificial intelligence has come a long way, and I think so has human intelligence. We have attempted to compare the two directly (head to head), mostly on the basis of intelligence being a general thing.



Some findings of these comparisons are that:

Artificial Intelligence systems carry out intelligent tasks with a speed and accuracy far greater than humans can.

Artificial Intelligence systems are capable of operating 24/7; humans, of course, cannot.

The human brain typically consumes about 25 watts of energy, whereas the hardware behind modern deep learning systems can draw hundreds or even thousands of watts.



Intelligence is not general. There are different types (or forms) of intelligence, and it is perhaps shortsighted to compare human and Artificial Intelligence on one specific form of intelligence and jump to conclusions.

It gets even more complicated when we consider that there is no generally agreed definition of intelligence. How can we meaningfully compare the intelligence of machines and humans when we haven’t defined intelligence?



If I were to compare human intelligence and Artificial Intelligence, I would go back to answering the question that perhaps started it all: can machines think?



Turing proposed a backdoor to answering this question: the Turing Test. The Turing Test, which he termed “The Imitation Game”, is a test (or a game) designed to determine whether a machine can think.



The principle of the test is this: if a machine can carry on a conversation (over a teleprinter) with a human being and fool that person into believing it is human, then one can reasonably say the machine is thinking.
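As a rough illustration (not from the article, and with a deliberately crude, hypothetical rule-based responder standing in for the machine), the setup Turing described can be sketched in Python:

```python
def machine_reply(message):
    """A hypothetical rule-based responder standing in for the machine.
    Real conversational systems are far more sophisticated; this only
    illustrates the structure of the game."""
    message = message.lower()
    if "weather" in message:
        return "A bit gloomy today, I think."
    if "?" in message:
        return "Good question. What do you think?"
    return "I see. Tell me more."


def imitation_game(judge_questions, human_answers):
    """The judge converses over text with two hidden players and must later
    guess which one is the machine. Each question is relayed to both players,
    as the teleprinter would, and their replies are recorded side by side."""
    transcript = []
    for question, human_answer in zip(judge_questions, human_answers):
        transcript.append(
            (question, {"player_a": machine_reply(question),
                        "player_b": human_answer})
        )
    return transcript
```

If the judge cannot reliably tell `player_a` from `player_b` after reading the transcript, the machine passes the test; the point is that the judgement rests entirely on conversational behaviour, not on how the replies were produced.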



And this, modern AI systems can do; they can pass the Turing Test.



But literally, I doubt AI systems will ever be able to think, let alone think outside the box; to think outside “if and else”. A machine will always be a machine; the best we can have is an intelligent machine. Remember, a machine has always been a thing made by humans to make work easier, more accurate and faster. Hence, it’s really no big deal that AI systems can do certain things faster and more accurately than humans: after all, that’s why they are machines.



In the end, and in answer to Turing’s question: machines cannot literally think, and they never will. Certain forms of intelligence go beyond logic; there are forms of intelligence that cannot be expressed logically, and AI systems thrive only on logic. They are just systems running on complex, human-designed “if and else”, capable of incomprehensible feats and perhaps no match for true and complete intelligence (human intelligence).

AI Takeover: The Perspectives

Researchers in the field of Artificial Intelligence have hypothesized about how AI could take over the world from different perspectives. The most notable hypotheses are the technological and the existential perspectives.



Sci-Fi movies also paint pictures of a possible AI takeover from a fictional perspective, one that centres on a direct conflict between humans and AI systems.




All these perspectives play on the mind when most people think about how Artificial Intelligence could take over the world. They are mostly based on intelligent assumptions (if A and B take place, then AI will take over the world). Of course, one can only try to predict the future through intelligent, fact-supported assumptions.

AI takeover in Science Fiction

Artificial Intelligence is a recurring theme in science fiction, painted as both a benefit and a threat to our existence. Science fiction played a significant role in arousing our interest in Artificial Intelligence and, subsequently, our fear of it.



Fictional AI takeovers have consistently gained mainstream viewership and readership. The list journeys from the play ‘R.U.R.’ through 2001: A Space Odyssey, Blade Runner, The Terminator, The Matrix, I, Robot and Ex Machina, among others.



It’s a popular notion that ‘conflict drives plot’, so it’s no wonder that most Sci-Fi AI takeovers run on a direct conflict between humans and AI systems.

Conflict results from a clash of interests, and hence we see AI systems in Sci-Fi movies decide that humans are a threat to them and develop a conscious desire to take over the world, driven by a human-like motive.



I think what separates the Sci-Fi narratives from the ones hypothesized by AI researchers is the conscious desire of the AI systems to take over control, and perhaps the absence of a direct conflict.

Existential Risk

Some AI researchers have hypothesized that AI systems will eventually pose a threat to human existence. They stress that superintelligent AI systems need not be driven by a human-like motive to render humanity extinct.



This perspective on the AI takeover is far more realistic than the Sci-Fi one. The most notable alarms about the danger of Artificial Intelligence arise from this perspective.



Musk, for one, noted that, “If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings.”



“It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill.” Musk stressed.



I think this perspective maintains that AI systems don’t necessarily have to be conscious to pose a threat to humanity. With ‘if and else’ alone, they can do the damage.

Technological Takeover

This perspective on an Artificial Intelligence takeover arises from the premise that AI systems are eventually going to automate everything: in essence, take over almost every job.

Some AI researchers predict massive unemployment caused by AI advancement. Simply put, Artificial Intelligence is going to take over the world by taking over our jobs.



We are beginning to see a trend in this direction, with autonomous vehicles alone already threatening millions of jobs. How long before every other job comes under threat?



Aside from our jobs, there are also fears in some quarters about how AI may come to control our lives by manipulating our decisions technologically. We have seen the possibility of such technological manipulation in the Facebook-Cambridge Analytica data scandal, even though it was not actively carried out by Artificial Intelligence.

The Possible Regulation of Artificial Intelligence

While all eyes are on Artificial Intelligence taking over the world, as if it were happening today, the reality is that the kind of AI with the ability to take over the world has not been created, and we are still far from creating it.



It’s going to take decades before AI becomes a real threat, and the distance from now to then will be greatly influenced by REGULATIONS.



AI Development and Deployment will be Regulated

The United States has long been a champion and defender of the core values of freedom, guarantees of human rights, the rule of law, stability in our institutions, rights to privacy, respect for intellectual property, and opportunities to all to pursue their dreams. The AI technologies we develop must also reflect these fundamental American values and our devotion to helping people. Our goal is to ensure that AI technologies are understandable, trustworthy, robust, and safe. – Whitehouse.gov



Many may call the above ‘paperwork’, but I choose to call it a statement that AI development and deployment will eventually be watched under regulatory eyes. Let’s not forget that the regulations we have today were, at some point, paperwork themselves.



Hence, ignoring the potential influence of regulations on any hypothetical proposition of an AI takeover is rather narrow-minded.



Final Thoughts

It’s really difficult to say exactly how AI systems might take over the world, but if it were up to me to make a statement, I’d put my faith in humanity.

Faith in the belief that we (mankind) will navigate past the danger posed by Artificial Intelligence before it even becomes a real threat.



That notwithstanding, we owe ourselves an obligation to approach Artificial Intelligence with caution, a duty to watch AI development with regulatory eyes, and a calling to keep ringing the bell.

Artificial Intelligence systems are here to stay, and so is mankind.