By Michael Brooks

Ava, the questionably conscious AI from Ex Machina: is she? Isn’t she? Does it really matter?!

It’s now 14 years since I walked through a coffin-maker’s workshop in North London to interview Michael Reiss for New Scientist magazine. At the time, he was the world’s leading expert on programming computers to play the Asian board game Go using artificial intelligence (AI). Reiss’s program was just about to be incorporated into the PlayStation 2 (remember that?!) and he was pretty excited.

How times change. My memory of Reiss resurfaced when I read last month’s news about Google’s Go program, AlphaGo, annihilating Fan Hui, the European champion.

What makes it so interesting to me is the importance of intuition in the game. The computer Go experts I spoke to back in 2002 said their big problem was that professional players can’t always explain everything they do.

Go is played on a 19-by-19 board with around 10^170 legal positions; for comparison, there are roughly 10^80 atoms in the observable universe, so the game is in a different league of complexity from chess. No computer can brute-force a space that size, so in the end professionals rely on intuition about the state of play, the patterns on the board and so on.
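For a feel for where that number comes from, here’s a quick back-of-the-envelope check (my own sketch, nothing the Go researchers supplied): each of the 361 intersections can be empty, black or white, which gives 3^361, or roughly 10^172, raw layouts; weeding out the illegal positions leaves around 2 × 10^170, which still dwarfs the universe’s atom count.

```python
# Back-of-the-envelope count of Go board layouts (illustrative only).
# Each of the 19 x 19 = 361 intersections is empty, black or white,
# giving 3^361 raw layouts. The number of *legal* positions is smaller
# (roughly 2 x 10^170) but still dwarfs the ~10^80 atoms in the universe.
from math import log10

intersections = 19 * 19               # 361 points on the board
raw_layouts = 3 ** intersections      # 3 states per intersection

print(f"3^{intersections} is about 10^{intersections * log10(3):.0f}")  # ~10^172
print(f"digit count: {len(str(raw_layouts))}")                          # 173 digits
```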

That’s incredibly hard to program. So if Google’s AI can beat the best human players, does it have something like intuition? And if it uses intuition to achieve its goals, could it be described as conscious?

It’s a shame the Google announcement came the day after I went to Imperial College to meet Professor Murray Shanahan, a cognitive roboticist and one of the world’s foremost experts on AI. I’d love to know what he made of Google’s AI achievements. From our conversation, I suspect he’d have said something like, “Well, it’s complicated.”

And he’d be right.

I was meeting Shanahan because he wrote the book that inspired Ex Machina. This Oscar-nominated film is the subject of the next episode of the Science(ish) Podcast that Rick Edwards and I host on Radio Wolfgang.

In case you haven’t seen it, Ex Machina focuses on a hi-tech entrepreneur who builds intelligent robots. The big question raised by the film is whether his best creation, Ava, is actually conscious.

It became clear during our interview that Shanahan and I disagree on the answer. But more of that in a moment.

There’s a lot of fuss in the media about AI. Everyone has an opinion, it seems, whether or not they actually know anything about the subject. And so you get Stephen Hawking saying things like:

“The development of full artificial intelligence could spell the end of the human race.”

Or Elon Musk, who thinks the prospect of artificial intelligence is:

“our biggest existential threat.”

Whatever the truth of these statements, we certainly need to think a bit harder about AI. In little over a decade, we have gone from computers that couldn’t beat any professional at Go to a machine that comfortably beats a professional champion (Google’s AI is widely expected to beat the human world champion next month).

Things certainly aren’t going to stop there, because it’s never been about playing games. Games are a proxy for all kinds of desirable applications. Even back in 2002, researchers were calling for investment in Go-playing AI because it would have spin-offs in automating the analysis of genetic information, satellite images and CCTV footage.

But we didn’t actually need a silicon-based Go champion to achieve much of what was envisaged. AI is already better than doctors at reading medical scans; Pedro Domingos, a computer scientist at the University of Washington, Seattle, has stated that he would rather have a computer than a human handle his diagnosis.

AI has a diverse range of uses beyond medicine. There’s Google’s self-driving car, of course, and AI is also used for fraud detection and decision-making in banking, for speech recognition, robot navigation and computer vision.

But there are some less enticing applications too, which is why Hawking and Musk are right: we have to be careful.

Shanahan is among those who have signed a letter calling for AI to be divorced from weapons technology, for instance. We really don’t want drones making their own decisions about who gets to taste their missile fire.

And there are less dramatic concerns too: AI shouldn’t just be about rich corporations making money; we should ensure that our increasing capabilities create projects that are beneficial to humanity. Murray’s take seems a sensible one:

“As our AI gets more powerful, it becomes more pressing to think about articulating a set of rules and regulations that AI developers would adhere to.”

The trouble is, as Murray pointed out in our interview, we have yet to figure out exactly what these rules and regulations should be.

More pressingly, we don’t even know whether it is possible to write moral, value-driven code in a world where people simply don’t all share the same moral values. Worse, philosophers have spent centuries trying, and failing, to agree on a definition of what a ‘value’ or a ‘moral’ even is.

If we can’t define them, how can we instil them in our machines?

For some, this is an airy-fairy issue when there are more pressing concerns. As Jeremy Howard, founder of Enlitic, which uses AI for medical diagnosis, told us for the podcast:

“We don’t know if these deep-learning-inspired AIs will produce true intelligence or consciousness, but my answer to that is: ‘Who gives a shit?! They’re going to take our jobs anyway.’”

I understand Howard’s point, but I can’t help giving a shit.

I don’t see any reason to doubt that AI will one day reach human-level intelligence. Shanahan told me in our interview that he’s “pretty confident” we’ll have human-level AI by the end of the century, adding, “I wouldn’t be surprised if it happened sooner.”

And that has deep implications — implications that are raised in Ex Machina.

How should we feel about these creations: do we consider them organisms or machines? Ava is clearly highly intelligent, but is she conscious? Should she be treated in the same way we treat humans?

One of the central issues AI and consciousness researchers face is what is needed for something to be declared “conscious”. Ava operates with high intelligence, and with purpose, but does she have, say, empathy? Does she understand what is in other people’s minds, and does she care about that beyond its relevance to her own schemes and goals? Is that what we mean by a conscious being?

Discussing this, Shanahan and I quickly fell into problems of generalisation. I know a few people who don’t (or don’t seem to) care about other people’s feelings. They aren’t troubled if they inflict emotional pain on fellow humans, but I would never claim they are not conscious, or don’t deserve human rights. From amoebas to psychopaths, to me, to you, to Pope Francis — we are all somewhere on a complex, multifaceted spectrum of intelligence, consciousness and empathy.

And that is why we need to have a serious conversation about AI research: I think it will be on that spectrum with us. We need to accept that — and work with it at the forefront of our minds.

We spoke to MIT’s Professor Max Tegmark for the podcast, and he said something that I think is particularly wise:

“If we’re sloppy and don’t pay attention then bad things can happen. But on the other hand, if we really do put our best efforts into preparing for this, AI can create an amazing human future on the Earth and perhaps throughout the cosmos as well.”

We will probably never agree on whether Ava and her like will be conscious when they finally arrive. But if we get that journey right, they could still be an unprecedented force for good in the world. It’s a big “if”, but one worth talking about.

Michael Brooks is the co-host of the Science(ish) Podcast with Rick Edwards, which unpicks the science within popular pieces of fiction.

Follow Radio Wolfgang on Twitter