Neil Jacobstein, Singularity University’s co-chair in AI and Robotics, has been thinking about artificial intelligence for a long time, and at a recent talk at Summit Europe, he wanted to get a few things straight. There’s AI, and then there’s AI.

Elon Musk recently tweeted this about Nick Bostrom’s book, Superintelligence: “We need to be super careful with AI. Potentially more dangerous than nukes.”

AI has long been a slippery term, its definition in near-constant flux. Ray Kurzweil has said AI is used to describe human capabilities just out of reach for computers—but when they master these skills, like playing chess, we no longer call it AI.

These days we use the term to describe machine learning algorithms, computer programs that autonomously learn by interacting with large sets of data. But we also use it to describe the theoretical superintelligent computers of the future.
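The phrase "learn by interacting with data" is easy to gloss over. As a toy illustration (not anything Jacobstein presented), here is a minimal sketch of the idea: a program that starts knowing nothing and gradually fits a line to noisy data points by repeatedly nudging its parameters to shrink its error.

```python
# Toy illustration of "learning from data": fit a line y = w*x + b to noisy
# points by gradient descent, nudging w and b to reduce the squared error.
import random

random.seed(0)
# Synthetic data: points scattered around the line y = 2x + 1.
data = [(x, 2.0 * x + 1.0 + random.uniform(-0.1, 0.1)) for x in range(20)]

w, b = 0.0, 0.0   # the program starts with no knowledge of the line
lr = 0.005        # learning rate: how big each corrective nudge is

for _ in range(5000):
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x   # gradient of squared error with respect to w
        gb += 2 * err       # gradient with respect to b
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

print(w, b)  # w and b end up close to the true slope 2 and intercept 1
```

Scaled up from two parameters to millions, and from a line to far richer models, this trial-and-correction loop is the engine behind the systems described below.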

According to Jacobstein, the former are already proving hugely useful in a range of fields—and aren’t necessarily dangerous—and the latter are still firmly out of reach.

The AI hype cycle has long been stuck at the stage of overpromise and underperformance.

Computer scientists predicted a computer would beat the world chess champion within a decade—instead it took forty years. But Jacobstein thinks AI is moving from a long period of disappointment and underperformance to an era of disruption.

What can AI do for you? Jacobstein showed participants a video of IBM’s Watson thoroughly dominating two Jeopardy champions—not because folks haven’t heard about Watson, but because they need to get a visceral feel for its power.

Jacobstein said Watson and programs like it don’t demonstrate intelligence that is “broad, deep, and subtle” like human intelligence, but they are a multi-billion-dollar fulcrum to augment a human brain faced with zettabytes of data.

Our brains, beautiful and capable as they are, have major limitations that machines simply don’t share—speed, memory, bandwidth, and biases. “The human brain hasn’t had a major upgrade in over 50,000 years,” Jacobstein said.

Now, we’re a few steps away from having computer assistants that communicate like we do on the surface—speaking and understanding plain English—even as they manage, sift, and analyze huge chunks of data in the background.

Siri isn’t very flexible and still makes lots of mistakes, often humorous ones—but Siri is embryonic. Jacobstein thinks we’ll see much more advanced versions soon. In fact, with $10 million in funding, Siri’s inventors are already working on a sequel.

And increasingly, we’re turning to the brain for inspiration. IBM’s Project SyNAPSE, led by Dharmendra Modha, released a series of papers—a real tour de force according to Jacobstein—outlining not just a new brain-inspired chip, but a new specially tailored programming language and operating system too.

These advances, among others highlighted by Jacobstein, will be the near future of artificial intelligence, and they’ll provide a wide range of services across industries from healthcare to finance.

But what of the next generation? A better understanding of the brain driven by advanced imaging techniques will inspire the future’s most powerful systems: “We’ll understand the human brain like we understand the kidneys and heart.”

If you lay out the human neocortex, the part of the brain responsible for higher cognition, it’s the size of a large dinner napkin. Imagine building a neocortex outside the confines of the skull—the size of this room or a city.

Jacobstein thinks reverse engineering the brain in silicon isn’t unreasonable. And then we might approach the kind of superintelligence Musk is worried about. Might such a superintelligent computer become malevolent? Jacobstein says the risk is realistic.

“These new systems will not think like we do,” he said, “And that means we’ll have to exercise some control.”

Even if we don’t completely understand them, we’re still morally responsible for them—like our children—and it’s worth being proactive now. That includes planning diverse, layered controls on behavior and rigorous testing in a “sand box” environment, segregated and disconnected from other computers or the internet.

Ultimately, Jacobstein believes superintelligent computers could be a great force for good—finding solutions to very hard problems like energy, aging, or climate change—and we have a reasonable shot at reaping these benefits without the risks materializing.

“We have a very promising future ahead,” Jacobstein said. “I encourage you to build the future boldly, but do it responsibly.”

Image Credit: Shutterstock.com