Artificial intelligence is no longer a product of science fiction – it already exists, and it’s developing fast. Some of the world’s greatest minds, like Stephen Hawking, Elon Musk, and Bill Gates, are warning about the dangers that the AI revolution may bring. So what do we have to fear as the age of intelligent computers dawns? With cyber-tools evolving to merge with the human body, can AI lead to the next step in the evolution of humankind? We ask a leading scientist in the field of artificial intelligence, robotics engineer Ben Goertzel.


Sophie Shevardnadze: Ben Goertzel, robotics and artificial intelligence researcher, welcome to the show, great to have you with us. Ray Kurzweil from Google is claiming AI will enhance us as humans. Tesla CEO Elon Musk says that people must merge with machines or face becoming irrelevant. So is this the next step in evolution - first we go from apes to humans, and then we become androids?

Ben Goertzel: Well, I think there's going to be a lot of possibilities open to us as advanced technology unfolds. One of these possibilities will be for us simply to upload our minds into a digital or quantum computing substrate and just become transhuman minds, leaving the human body behind altogether. Another opportunity will be to hybridize with machines – I mean, this cell phone that we're carrying in our pockets can go in our heads and we can effectively become cyborgs. This can have all kinds of amazing consequences for our minds and bodies. Another possibility will be to basically remain in the same human form but to do away with annoying features of our current life like death and disease and mental illness and so forth. There's going to be a spectrum of possibilities, and it's going to be much more exciting than human life has been heretofore.

SS: So if humans will be able to upgrade their intelligence, as you say, what’s it going to look like? I imagine plugging in a chip and being able to speak Chinese all of a sudden?

BG: Being able to learn new languages by simply plugging a chip into your head, having at your mind's immediate grasp the full power of the Internet, of a calculator - these will be some of the consequences, but it will go way beyond that. We will be able to network our brains into each other's brains and into the brains of AIs, which in essence will let us go beyond our current existence as individual, isolated humans and become part of a sort of fused group mind.

SS: Ben, do you really think a human mind can handle all that? I mean, mental disorders are the fastest-growing illnesses of the 21st century - all over the world. And that’s because there’s so much information. Do you really think that we can handle that? I think we’re going to go berserk if that happens.

BG: I think the young generation will be able to handle these changes especially well and some of us who are not so young anymore but with open minds ready to embrace these possibilities should have an easy time. I do think there will be some significant adjustment problems for older people who don't want to give up the traditional way of living. And I hope that all of these possibilities will be optional so that if humans want to remain in their same old bodies, if they don't want to plug into the digital matrix, if they want to get sick and die, rather than availing themselves of the new possibilities, I think they should be allowed to. But by the same token I think the people who want to expand their minds and bodies and grow in an unbounded way to embrace all sorts of new possibilities should also be allowed to do so.

SS: Ben, you’ve said before that a groundbreaking technological leap is coming in the next decade. But our current tech - like processors - appears to be reaching its limit, it’s no longer developing as fast as it did. So where will this new breakthrough come from?

BG: I think that the biggest breakthrough that lies ahead is the transition from narrow AI to AGI - artificial general intelligence. Today's AIs are good at solving highly particular problems like driving a car or playing chess or predicting the stock market. Once we create a general-purpose AI that can confront new problems that it wasn't explicitly programmed or trained for, this general-purpose AI is going to accomplish one after another of the tasks that only humans can now do. And the general-purpose AI will set itself the task of improving itself. I.J. Good, the mathematician, foresaw this in 1965 when he said the first intelligent machine will be the last invention that humanity needs to make.

SS: During the path of AGI development, you said that at first, things will get bad. How bad - and what kind of bad are we talking about here? Will that stall any progress, make humans scared of robotics?

BG: Some people will always be afraid of progress, but that doesn’t seem to stop progress from happening. When cell phones first came out, people complained a lot that they would destroy normal social interaction, that people would always be staring at their phones - and the reality is more complex than that. People are staring at their phones, but they’re mostly messaging with each other and carrying out new forms of social interaction. In the same way, people will be afraid of these new technologies, but once they’re available they will embrace them and then use them in their own way, for their own purposes. Fear always accompanies progress, but human nature is to embrace progress and move forward anyway. That’s why we’re not living in caves anymore.

SS: You’re saying that smart robots will eventually run world governments - are humans really going to be comfortable with that? Who’s going to obey robots, where are they going to get legitimacy?

BG: I'm not sure that most people are comfortable with their current governments; they put up with them because they don't have much choice. I suspect AI governments are going to be much more rational and much more compassionate than the current governments that we have. I suspect people will be happier with that new situation than they are with the current one.

SS: But the thing with computers, despite all the advancements, they can make mistakes. They freeze - my computer freezes on me all the time, they malfunction, or do something stupid. Where’s the guarantee that robots who decide things for humans will be flawless? What if a robocop shoots me by mistake?

BG: I don’t think we’re going to achieve flawless machines. Perfection is probably a pipe dream. On the other hand, just as humans are not the fastest-running or the highest-jumping creatures on earth, there’s no reason to believe we’re necessarily the smartest things that can be created. We’re not the upper limit of intelligence, and we’re certainly not the upper limit of kindness, compassion and ethics. AIs don’t need to be perfect to exceed humans in an awful lot of dimensions. Certainly there are going to be problems as intelligent machines are rolled out and progress further and further and take on more and more roles in society. There is always going to be some kind of problem. I would bet the actual problems that happen are not going to be the ones we foresee now, just like before computers and the Internet were created no one foresaw botnets, computer viruses and the actual issues that we have now. And I’m personally an optimist, and I believe that the good is going to outweigh the bad, as I believe it generally has with the development of civilisation.

SS: Ben, but the biggest problem for me in all of this is accountability. Who are you going to hold accountable if something goes wrong with a robot? I mean, when a human makes a mistake you can put him on trial, put him in prison, I don’t know, execute him. What are you going to do with a robot? How are you going to hold him accountable for a mistake he’s made?

BG: The legal system that we have now is predicated on the idea that humans are the only intelligent beings with moral accountability. Obviously this is going to have to change. The legal system will have to embrace AIs and robots as legal persons which will come along with things like, perhaps, a right to vote for robots as well as the ability to hold a robot accountable if it does something wrong...

SS: So are you going to put a machine in prison?

BG: Machines will be able to reprogram their own minds, and if necessary other machines will be able to fix the mind of a machine that's gone awry. Right now we can't go into the mind of a human sociopath and adjust it to make them not be dangerous, so we have to lock them up or kill them. These methods are very crude. With an AI we have source-code access to their minds, so we can adjust their minds, and they can adjust their own. Because of this, incidents and problems are going to be far fewer. It's really a very crude situation that we're in now, where we're running on these machines that we don't understand and that we can't reprogram.

SS: Even a toaster these days can be connected to the internet of things, and it immediately becomes hackable. So is relying on robots with brains even more dangerous, since they can also be hacked?

BG: Security is going to be a serious problem as more and more things are connected to the Internet and more and more are controlled by intelligent machines. Ultimately it may be that radical transparency is the best solution - everyone has the ability to watch everyone else, what the futurist Steve Mann called 'sousveillance' - everyone watching everything. If you have that situation, then it's much harder for people to get away with stealing things and wreaking havoc, since someone will see them. And I think that we're already seeing that trend now, as the traditional notion of privacy is on the decline.

SS: Big names in science and tech, such as Stephen Hawking, Elon Musk, and Apple co-founder Steve Wozniak, signed an open letter warning against artificial intelligence used in the military. But battle robots are already used in war - so if AI can get as powerful as you claim it can be, will wars become immensely more destructive than they are now?

BG: It seems that over time in human history wars have overall decreased in incidence, and the percentage of the population killed in wars keeps going down. And furthermore, democratic nations quite rarely go to war with each other, so I don't actually think we're going to see an explosion of more and more wars; I think we’re going to see a more democratic and a more peaceful world. But I agree that’s a risk, and this is why I think it's important that the most powerful AIs are developed as free and open-source software for the good of everyone in the world, rather than developed as the exclusive province of some government’s military or of some large corporation.

SS: Now the only thing we can model artificial intelligence on is our own human brain, but with all its flaws and issues and limitations - is it a good model to follow? Princeton scientists are already saying AI can quickly become biased - racist, sexist…

BG: There are many different approaches to the creation of advanced AI systems. Some people are trying to model the brain, but not many - that’s not a common approach in the AI field. Even the neural networks that are widely used now for things like image processing are very, very loose models of parts of the human brain. My own work with OpenCog artificial general intelligence systems isn’t really based on the human brain; it’s based more on mathematical principles of intelligence. Just as cars and aeroplanes and submarines and so forth - they’re not based all that closely on how biological animals do things. They’re based on principles of engineering and mathematics and only loosely inspired by biological systems. So I think AIs don’t have to emulate the human brain, and that’s a good thing, because they can be smarter than us and kinder than us, and they can be open-minded and communicate in different ways than we can.

SS: What kind of industry will be the driving factor behind robotics evolution? The military? Facebook & Google? Porn?

BG: Robotics in particular, I predict, will be driven by the toy industry. The toy industry, which is centred largely here in South China where I’m sitting right now, is much better at creating complex consumer electronics at low cost and large scale than anyone else. And furthermore, toys can afford to be a little silly and learn as they go, whereas military robots must have very high reliability. So actually what we’re aiming to do with our robots at Hanson Robotics is to roll out toy robots that will be like young AGI children that learn from the kids who are playing with them, grow up with the children who own them. This can build an emotional and human bond between humans and robots and give a gentle way for robots to grow up, learn common-sense knowledge and learn human values.

SS: So talking about toys, there are people already more attached to sex dolls than to real people. If sex robots become a real thing, will this ruin romance? It’d be so much easier to feel loved when it’s programmed and you don’t have to do anything for it…

BG: Romance has changed greatly throughout human history. Until the last century or two most marriages were not based on romance, and in large parts of the world arranged marriage is still the norm. So I think love and romance are going to change over and over again as humanity evolves. And exactly what impact sex robots have is hard for me to say. I have to say it’s not incredibly appealing to me personally, because I value the emotional bond I get with another human. Because I know my wife as a human and she knows I’m a human. And there’s a certain bond you get from that commonality. But that’s me. If some people would prefer a sex robot, that doesn’t really bother me in particular. I’m all for freedom and people having the right to use or not use each new technology as it comes about.

SS: So tell me something, can the church fight AI the same way it fights cloning?

BG: I think religion has a lot of power to help the human mind grow and to help create social cohesion in groups, but you know, religion has proved to be very adaptive over time. And if you look in the Bible or the Koran, we don’t take literally a lot of things said there now. I mean, the Christian church adapted itself to birth control and even abortion reasonably well. And I think religion will adapt itself to new technologies as they emerge, just as it’s been doing.

SS: Your company Aidyia is a hedge fund that’s run by artificial intelligence - from what I understand, it makes predictions about market trends and invests accordingly - and it's doing pretty well. But if you’re already earning money thanks to AI, why isn’t everybody lining up for the tech? The more people have it, the less advantage it’s going to bring you, so are you trying to keep it secret?

BG: Yes, financial prediction using AI is definitely the wave of the future, and I think within ten years a tremendous amount of the money on all major markets is going to be traded by AIs in one form or another. At Aidyia we created some trading strategies that are reasonably successful at the scale at which they’re traded now. But with the specific strategies that we’re now using, there’s a limit to how much we can trade - we can’t trade trillions of dollars with them. So we’re working on newer and better strategies that will be more and more scalable. And a lot of other people are too, but there’s a diversity of different AI approaches that can be used on the markets, just like there’s a lot of different approaches to fundamental trading and traditional quantitative trading.

SS: Can we create an artificially intelligent robot that’s based on a real person - like cloning, but for the mind? Would that in essence be digital immortality?

BG: It should be possible, via scanning the human brain in great detail, to map out the neural circuits that make a person be themselves and then replicate that neural structure in some sort of computer - a digital computer, a quantum computer, some sort of synthetic-biology nano computer. Once we’ve done that, we will have a synthetic simulation of a specific human being. And then we have an interesting philosophy problem, or a few. Is the simulation of me really me, or is it just something that’s acting like me? And more interestingly - does that simulation of me feel like me, does it have the same type of experiences and sensations as I do, or is it just a sort of philosophical zombie that impersonates me and has no true experience? These are questions that are going to be intriguing to explore. Right now these are philosophy questions, but as we develop new technologies they are going to become science questions and part of the new science of the mind. So there are amazing new avenues of discovery that we’re going to be exploring.

SS: Exactly, if the human brain is copied onto some kind of hard drive, what about emotions and feelings? Isn’t immortality a kind of torture without being able to feel a single thing?

BG: If you couldn’t feel anything you couldn’t be tortured or happy. You wouldn’t feel anything. I think that immortality whether in a human body free of disease and suffering or in a robot body to which the mind had been uploaded, immortality will be a great joy to many people, including myself. But if someone gets sick of living and finds it to be a torture, I’m not in favour of compulsory immortality. As I keep saying, I think people should have the freedom to leverage these advanced technologies or not as befits their own taste. But there’s a lot of things to explore in this universe. And as far as I’m concerned 80 or 90 years of life is not nearly enough to learn everything there is to be learned, travel everywhere there is to be travelled and grow in every way that’s possible.

SS: It is great if big scientific minds can live longer and create, develop, invent more. But what about the other side of this issue - tyrants and dictators have always dreamed of defeating aging and living forever. What might happen if it becomes possible for them?

BG: I think dictators are on their way out and democracy is on the rise. And the advance of AI that reduces material scarcity and improves communication between people and other minds, this will help the rise of democracy and the increase of a peaceful and positive world for everyone.

SS: Alright, thank you Ben very much for this interesting insight into the crazy world of artificial intelligence.

BG: Ha-ha, who you calling crazy?

SS: I still can’t quite catch up with it, but I hope with time I’ll embrace it like you do - I just need more time. Anyway, thanks a lot for this interview. We were talking to Ben Goertzel, artificial intelligence and robotics pioneer and entrepreneur, discussing the rise of thinking robots and what that means for humanity in the near future. That’s it for this edition of SophieCo, I’ll see you next time.