For something that does not yet exist, artificial intelligence (AI) has managed to cultivate more terror among human beings than Spielberg did about going for a dip in the ocean. The cult of fear that surrounds AI is not limited to those of the tin-foil hat club either. Some of the world’s brightest minds have offered their two cents on the potential dangers of bringing AI into existence, something which has helped foster an irrational level of anxiety among the public over a technology that has the capacity to usher in a new age of discovery for mankind.

Those waving the red flag claim this technology, once created, will have the power to dramatically reduce the size of the world’s labour force, stealing jobs from people across the career spectrum. Those out of work will be left wondering what it all means in a world without purpose, as artificial beings replace human ones in every area of life. Until, after permeating all facets of society, the machines rise up and destroy mankind once and for all. Scary stuff when you put it like that.

But the thing about fear, as the Indian philanthropist Jaggi Vasudev explains, “is [it is] always about that which does not yet exist”. That is not to say there aren’t potential pitfalls that should be considered and taken seriously, as should be the case when developing any new technology. But whether someone chooses to look upon AI with trepidation or not depends far more on the philosophical beliefs held by the individual than on any inherent evil that could potentially exist inside machines.

No matter how insane it may be to fear the non-existent, that has not stopped humanity from seeing AI as something to be afraid of. In fact, a report by the Global Challenges Foundation has put the technology alongside nuclear war, climate change, super-volcanic eruptions and other potential “risks that threaten human civilisation”. However, the report cannot seem to make up its mind, suggesting on the one hand that machines and software in possession of “human-level intelligence” may one day wreak havoc against humanity, but on the other that the technology could open up a whole new world – one that could help offset most other catastrophic risks and where access to scientific breakthroughs once thought out of reach could be within man’s grasp.

“Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations”, propose the report’s authors, Dennis Pamlin and Stuart Armstrong. “And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans.”

50% of jobs will potentially be automatable within two decades

God’s image

The idea that a super intelligent AI would wish to obliterate an entire species is based on the assumption that an advanced intelligence would care about us enough to hit the kill switch. “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant”, wrote theorist Benjamin H Bratton in an op-ed piece for The New York Times. “Worse than being seen as an enemy is not being seen at all.”

Is it not more plausible to imagine this scenario, rather than one where an AI sees man as some sort of threat worthy of being disposed of? If anything, the notion that a highly intelligent life form, artificial or otherwise, would perceive us as a challenge worth vanquishing says more about man’s own arrogance than it does about the possible motivations of a sentient supercomputer.

“That we would wish to define the very existence of AI in relation to its ability to mimic how humans think that humans think will be looked back upon as a weird sort of speciesism”, wrote Bratton. “The legacy of that conceit helped to steer some older AI research down disappointingly fruitless paths, hoping to recreate human minds from available parts. It just doesn’t work that way.”

Technological obsolescence

A much more likely scenario than being murdered by machines in our sleep is the prospect of being rendered obsolete by our new robot overlords. As the report by the Global Challenges Foundation warns, the risk that “economic collapse may follow from mass unemployment as humans are replaced by copyable human capital” is completely within the realm of possibility. Indeed, it is already happening.

Many assembly line workers have had to come to terms with the fact there are machines out there that can do their jobs quicker, more safely and more efficiently – not to mention the fact robots don’t tend to take sick leave.

But factory workers are not the only people who are going to feel the squeeze of mechanisation. Any worker whose role entails a routine, task-driven activity will one day have to contend with technology – and they are going to lose. With the advent of Google’s driverless car, not to mention the accessibility of drones, which will soon be capable of self-piloting, it won’t be long before huge swathes of truck drivers, delivery boys and bike couriers are out of a job too.

But don’t think this is just an issue for low-skilled workers, and that those cushy middle-income jobs are far too complicated for machines to master. Computers will be capable of outperforming us in a wider variety of far more complex tasks. Realistically, the only jobs in which we will be able to trump machines will be those that require social intelligence (e.g. persuasion and negotiation) or creative intelligence (which is simply the ability to imagine new ideas). Nearly 50 percent of jobs are potentially automatable within a decade or two.

Combine mechanisation with a super-intelligent AI and you are looking at something with the potential to make human beings redundant across the board. Even one of the most influential economists in the world is concerned about the threat of technological obsolescence. “People are scared – and they are right to be scared”, said Robert Shiller at the World Economic Forum in Davos. “Artificial intelligence is coming, and it will replace your job.”

No work, no sweat

So the machines are coming to steal everyone’s jobs. Sounds reminiscent of the rhetoric employed by nationalist political parties when trying to stir up fear of immigrants. The thing is, the vast majority of the jobs taken by immigrants are low-level roles most nationals don’t want to do anyway. The same can be said of automation.

Take Google’s driverless cars and Amazon’s plans for delivery drones, for example. Yes, they will render many people jobless, forced to retrain or find another career path, but, in the grand scheme of things, autonomous machines doing the work of human beings is a net positive for society.

Remember that cities are responsible for 80 percent of all greenhouse gas emissions, and that one of the biggest challenges facing the planet is how climate change can be combated so mankind can avoid a catastrophic event. The innovative application of AI could help reduce the size of cities’ carbon footprints, especially if the machinery used is fossil fuel free.

The technology would have the added benefit of dramatically reducing the congestion seen on city streets and removing human error from the equation, helping to cut the 3,287 road-related deaths that occur each day. Not only would that save lives, it would also reduce the strain on A&E departments around the world.

Yet there are those who worry what the planet, and more importantly the economy, would look like if machines were to take over the workplace. Much of this concern comes from people worrying about their role in a meaningless universe, deprived of purpose. A world without traditional jobs is one without full employment or money – and that scares the hell out of people. But it needn’t. The anxiety about such a future is the result of individuals, politicians and businesses looking at the problem through the current economic prism.

Why not adapt the economic model to suit the evolving landscape as advanced technologies begin to emerge, become viable and start remoulding the world around them? The concept of unconditional basic income (UBI) has gained a small amount of traction in recent years, but has failed to take hold because there is not yet the imperative to accept such a radical idea. People still have jobs; unemployment, although high in many countries, has not reached crisis point; and there is still a real need for human workers. However, in a world of machines, concepts such as UBI could help prevent an economic collapse and provide security to people as mankind makes the transition into a new world.

The ancient Egyptians were able to develop an advanced culture because they were one of the first peoples to practise large-scale agriculture. It is impossible to develop language, plan great feats of architecture or do much thinking about anything at all when you are constantly worrying about where your next meal will come from. By using AI and automation to free ourselves from the shackles of traditional work, the acquisition of wealth and the consumption race, we would be able to unlock the full creative potential of mankind on a scale previously impossible even to imagine.

People are fearful of AI because they cannot envisage their place in a world where it exists. What mankind can do is try to picture the type of world that we want to live in, so AI can help us get there.

Exercising caution

It is important we develop AI to be friend, not foe. One man working on that problem is Luke Muehlhauser, Executive Director of the Singularity Institute in Silicon Valley. “Anything intelligent is dangerous if it has different goals than you do, and any constraint we could devise for the AI merely pits human intelligence against superhuman intelligence, and we should expect the latter to prevail”, he told Wired. “That’s why we need advanced AIs to want the same things we want. So friendly AI is an AI that has a positive rather than negative effect on human beings. To be a friendly AI, we think an AI must want what humans want. Once a super intelligent AI wants something different than what we want, we’ve already lost.”

That poses the question of what it is that man wants. Whatever it is, it is important to think long and hard about it before unleashing AI on the world. However, unsurprisingly, there is very little discussion of the subject – and that is what bothers people such as Stephen Hawking.

In a piece for The Telegraph, Hawking wrote: “Facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades’, would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”

Unfortunately, history has shown that forethought about the wider ramifications of developing new technology is rare. Even when it is exercised, we tend to throw caution to the wind, preferring to take a chance at moving forwards, instead of standing still. AI should not be looked at with fear, but with a sense of caution, as the only thing more frightening than taking a chance is not taking it at all.