UC Berkeley professor Stuart Russell on the dramatic changes he believes AI will bring about and the thorny problem of making sure smart machines have our interests at heart.


As the man who co-wrote the definitive textbook on artificial intelligence, Stuart Russell is well qualified to speculate about the future of AI.

The UC Berkeley computer science professor is confident that the field will continue to advance at a breakneck pace. With the prospect that computers and robots will become as smart as humans, he says it's time to begin working out how to get these intelligent machines to share our values.

Progress in artificial intelligence is accelerating rapidly, said Russell, as evidenced by the Google DeepMind machine AlphaGo teaching itself to play the notoriously complex game of Go, to a standard where it was recently able to best the world champion Lee Sedol.

"A year ago the leading expert on computer Go programming predicted it would take another decade to beat the world champion," he said, speaking at the Strata + Hadoop World conference in London.

"There was a really rapid progression, from programs that couldn't even challenge a professional Go player about two years ago to where they've now beaten a world champion."

In future, AI will increasingly help us live our lives, he said, driving our cars and acting as smart virtual assistants that know our likes and dislikes and that will manage our day.

Perhaps the biggest change will be to search engines, said Russell, which will move from offering up millions of pages that might contain the answer we want to directly answering questions on a wide range of topics, using knowledge gleaned from reading and understanding those pages.

AI technology is already being used to build smart systems that transcend the capabilities of earlier software. Russell gave the example of researchers at UC Berkeley using probabilistic programming to construct an AI system that helps spot clandestine nuclear explosions. The software will serve as the official monitoring system for enforcing the United Nations' global Nuclear Test Ban Treaty.

"It's so powerful that the core of the system is half a page of code and it produces a monitoring system that's two to three times more accurate than the previous system, developed using a century of research in seismology."
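Probabilistic programming, in this spirit, means writing down a short generative model of how the data arise and letting inference do the rest. The sketch below is purely illustrative and is not the Berkeley system: a toy Bayesian detector, with invented parameters, that asks whether a seismic reading is more plausibly a real event or background noise.

```python
# Toy probabilistic model of seismic detections (illustrative only).
# Generative story: a detection is a rare real event or common noise;
# each produces amplitudes from a different distribution. Bayes' rule
# then inverts the model to score an observed amplitude.
import math

P_EVENT = 0.01  # prior: most detections are background noise


def gaussian(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std * std)) / (std * math.sqrt(2 * math.pi))


def posterior_event(amplitude):
    # Toy likelihoods: real events produce larger amplitudes than noise.
    like_event = gaussian(amplitude, mean=5.0, std=1.0)
    like_noise = gaussian(amplitude, mean=1.0, std=1.0)
    num = like_event * P_EVENT
    return num / (num + like_noise * (1 - P_EVENT))


p_weak = posterior_event(1.0)   # weak signal: almost certainly noise
p_strong = posterior_event(6.0)  # strong signal: very likely a real event
print(p_weak, p_strong)
```

The appeal Russell points to is that the whole model fits in a few lines: the domain knowledge lives in the generative story, and the inference machinery is generic.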

The path to utopia and dystopia

Russell was bullish about AI's eventual capabilities, predicting that machines would reach the level of general intelligence and go on to surpass humans.

"Looking further ahead, it seems there are no serious obstacles to AI making progress until it reaches a point where it is better than human beings across a wide range of tasks."

Once systems can learn to do "pretty much anything", said Russell, they will have an advantage over humans, in that "they can read everything the human race has ever written" and "project further into the future, as AlphaGo does when it's playing Go", so will eventually be "able to make better decisions than us".

"This is a really important prospect and the upside is enormous," he said.

"Everything we have, our civilization, results from us being intelligent. If we have access to something that amplifies our intelligence dramatically it can only help us achieve much more.

"We could solve these problems that have so far been resistant to the best minds in the human race, like war, disease, poverty and the destruction of the environment.

"We could reach a point, perhaps this century, where we're no longer constrained by our difficulties in feeding ourselves and stopping each other from killing people, and instead decide how we want the human race to be."

But that scenario is based on the best possible application of the technology. There's also a hefty potential downside to AI, said Russell, if people abuse the huge increase in capabilities it will afford civilization.

Fully autonomous weapons, "killer robots" as Russell described them, are an "imminent threat" to human security, he said, "because they can be used as weapons of mass destruction".

"Five guys with enough money can launch 10 million weapons against a city," he said.

Other pernicious uses include automated surveillance on a scale not possible in the past, as well as what he called "automated persuasion", which he described as:

"The ability of AI systems, through very carefully targeted interactions with individual human beings, to persuade them to adopt a particular political viewpoint or buy particular products. This could have a serious negative effect on our society."

Increased adoption of smart systems and robotics could also lead to the destruction of many of the jobs people rely on today, he said, adding that societies don't yet have a "serious plan" for how to handle this outcome.

How to stop robots from cooking your cat

Building machines that can think for themselves but that still have our best interests at heart will be challenging, warned Russell.

"A system that's superintelligent is going to find ways to achieve objectives that you didn't think of. So it's very hard to anticipate the potential problems that can arise."

In particular, Russell cautioned against building a smart system to pursue a set of goals without giving it the capacity to understand what actions are acceptable to achieve those goals.

"Imagine you have a domestic robot. It's at home looking after the kids and the kids have had their dinner and are still hungry. It looks in the fridge and there's not much left to eat. The robot is wondering what to do, then it sees the kitty. You can imagine what might happen next.

"It's a misunderstanding of human values; it's not understanding that the sentimental value of a cat is much greater than the nutritional value."

To counter these dangerous misunderstandings, Russell says there will be a need to equip AI with a common-sense understanding of human values. To this end, he suggests the only absolute objective of autonomous robots should be maximising the values of humans as a species. In this way the machines would be mindful of societal values while performing tasks, so they will fetch a coffee when asked but not mow down anyone who gets in their way.

Intelligent systems and robots could accrue understanding of human values over time, through their shared observation of human behaviour in the present day and that recorded throughout history, he said. Russell suggested that one method that robots could use to gain such an appreciation of human values could be via inverse reinforcement learning.

Such an approach would also require the robot to resolve which of the conflicting values of different groups and individuals should take precedence, he said. Each intelligent robot should also be designed to accept a level of uncertainty about what human values are, allowing it to ask for guidance, to have its actions corrected, and even to allow itself to be switched off.
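The inverse reinforcement learning idea Russell mentions can be shown in miniature. The sketch below is a toy, not his proposal: a learner watches an "expert" repeatedly choose among actions and fits reward weights that explain those choices, here by maximum likelihood under a Boltzmann choice model. The actions, features, and weights are all invented for the example; the point is that the cat's sentimental value is inferred from behaviour, never stated.

```python
# Toy inverse reinforcement learning in a one-step setting (illustrative).
# We observe an expert's choices and recover reward weights that explain them.
import math

# Each action has a feature vector: (nutritional value, sentimental value).
# The hidden true weights heavily penalise sentimental harm.
ACTIONS = {
    "order_pizza":  (0.6, 0.0),
    "cook_the_cat": (0.9, -1.0),
    "serve_snacks": (0.4, 0.0),
}
TRUE_W = (1.0, 5.0)  # hidden reward weights the learner must recover


def reward(w, f):
    return w[0] * f[0] + w[1] * f[1]


def expert_choice():
    # The expert always picks the action with the highest true reward.
    return max(ACTIONS, key=lambda a: reward(TRUE_W, ACTIONS[a]))


def softmax_probs(w):
    scores = {a: math.exp(reward(w, f)) for a, f in ACTIONS.items()}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}


def irl(demos, steps=2000, lr=0.1):
    # Ascend the log-likelihood of the demonstrations under a Boltzmann
    # policy: gradient = empirical feature counts - expected feature counts.
    w = [0.0, 0.0]
    for _ in range(steps):
        probs = softmax_probs(w)
        grad = [0.0, 0.0]
        for a in demos:
            for i in range(2):
                grad[i] += ACTIONS[a][i]
        for a, f in ACTIONS.items():
            for i in range(2):
                grad[i] -= len(demos) * probs[a] * f[i]
        for i in range(2):
            w[i] += lr * grad[i] / len(demos)
    return w


demos = [expert_choice() for _ in range(50)]
w_hat = irl(demos)
best = max(ACTIONS, key=lambda a: reward(w_hat, ACTIONS[a]))
print(best, w_hat)
```

Because the expert never cooks the cat, the fitted weights assign a large positive value to the sentimental feature, and the learner's best action matches the expert's without the penalty ever being written down.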

Russell was the first signatory of an open letter calling for AI researchers to work on developing systems that are "beneficial" and "do what we want them to do". The letter went on to attract signatures from leading AI researchers at Facebook, Google and Microsoft.

One of those signatories was the billionaire PayPal co-founder and entrepreneur Elon Musk, who is funding research into "keeping AI beneficial". Russell is now among those embarking on a 20-year research project to study how to build intelligent machines whose value systems are aligned with those of the human race.

"The first thing we have to do is to get AI as a field to understand that this is an important question. In the same way that civil engineering understands that when you build a bridge, that bridge is not supposed to fall down. It's intrinsic to what a civil engineer does, it should be intrinsic to what an AI person does."
