And there's a good framework for doing that. Forty years ago, there were visionaries who saw both the promise and the peril of biotechnology, which is basically reprogramming biology away from disease and aging. So they held a conference, the Asilomar Conference, at the conference center in Asilomar, and came up with ethical guidelines and strategies for keeping these technologies safe. Now it's 40 years later. We're seeing clinical impact from biotechnology. It's a trickle today; it'll be a flood over the next decade. The number of people who have been harmed, either accidentally or through intentional abuse of biotechnology, has so far been zero. It's a good model for how to proceed.

It's only continued progress, particularly in AI, that's going to enable us to keep overcoming poverty, disease, and environmental degradation while we attend to the peril.

We just had our first Asilomar conference on AI ethics. A lot of these ethical guidelines, particularly in the case of biotechnology, have been fashioned into law. So I think that's the goal. That's the first thing to understand. The extreme positions are "Let's ban the technology" or "Let's slow it down." That's really not the right approach. Let's guide it in a constructive manner. There are strategies to do that, though that's another complicated discussion.

NT: You can imagine some rules that Congress could set: that everyone working on a certain kind of technology has to make their data open, for example, or has to be willing to share their data sets, at least to allow competitive markets over these incredibly powerful tools. You can imagine the government saying, "Actually, there's going to be a big government-funded option, kind of like OpenAI, but run by the government." You can imagine a huge national infrastructure movement to build out this technology so that at least people with the public interest at heart have control over some of it. What would you recommend?

RK: I think open-source data and algorithms in general are a good idea. Google put all of its AI algorithms in the public domain with TensorFlow, which is open source. I think it's really the combination of open source and the ongoing law of accelerating returns that will bring us closer and closer to the ideals. There are lots of issues, such as privacy, that are critical to maintain, and I think people in this field are generally concerned about these issues. It's not clear what the right answers are. I think we want to continue the progress, but when you have so much power, even with good intentions there can be abuses.

NT: What worries you? Your view of the future is very optimistic. But what worries you?

RK: I've been accused of being an optimist, and you have to be an optimist to be an entrepreneur, because if you knew all the problems you'd encounter you'd probably never start any project. But I have, as I say, been concerned about and written about the downsides, which are existential. These technologies are very powerful, and so I do worry about that, even though I'm an optimist. And I am optimistic that we'll make it through. I'm not as optimistic that there won't be difficult episodes. In World War II, 50 million people died, and that was certainly exacerbated by the power of technology at that time. I think it's important, though, for people to recognize that we are making progress. There was a poll taken recently of 24,000 people in 26 countries. It asked, "Has poverty worldwide gotten better or worse?" Ninety percent said, incorrectly, that it's gotten worse. Only one percent said, correctly, that it's fallen by 50 percent or more.

NT: What should the people in the audience do about their careers? They're about to make career choices in a world with completely different technology. So in your view, what advice would you give the people in this room?

RK: Well, it really is an old piece of advice, which is to follow your passion, because there's really no area that's not going to be affected or that isn't a part of this story. We're going to merge with simulated neocortex in the cloud. So again, we'll be smarter. My view is not that AI is going to displace us. It's going to enhance us. It does already. Who can do their work without these brain extenders that we have today? And that's going to continue to be the case. People say, "Well, only the wealthy are going to have these tools," and I say, "Yeah, like smartphones, of which there are three billion." I was saying two billion, but I just read the news, and it's about three billion. It'll be six billion in a couple of years. That's because of the fantastic price-performance explosion. So find where you have a passion. Some people have complex passions that are not easily categorized, so find a way of contributing to the world where you think you can make a difference. Use the tools that are available. The reason I came up with the law of accelerating returns was literally to time my own technology projects, so I could start them a few years before they were feasible---to try to anticipate where technology is going. People forget where we've come from. Just a few years ago, we had little devices that looked like your smartphone, but they didn't work very well. That revolution, and mobile apps along with it, hardly existed five years ago. The world will be comparably different in five years, so try to time your projects to meet the train at the station.

Audience Question: So much of the emphasis has been on the lovely side of human nature, on science and exploration, and I'm curious, as we move more toward our robot partners: What about the dark side? What about war and war machines and violence?