Written by Greg Otto

The 2015 NFL season kicked off Thursday with the Pittsburgh Steelers playing the New England Patriots. In the week that follows, coaching staffs on both sides will study game tape, breaking down who played well and who didn’t, and tweaking their strategy for the next game.

But imagine a system where game tape assessment is reduced to an algorithm. Coaches no longer spend hours poring over game tape, and players are given reports within 24 hours of completing a game. Starting lineups are determined as an aggregate of these algorithms, freeing up time for coaches to concentrate on other matters. Some assistants are either reduced to writing reports off the system or are entirely out of a job.

This scenario is not that far off, according to Oregon State University professor of intelligent systems Tom Dietterich. A system where computers have enhanced decision-making power and the ability to take action is just one advancement of artificial intelligence that could be a reality in the next five to 10 years.

Dietterich was one of a number of experts speaking about AI’s future Thursday at DARPA’s “Wait, What?” conference. From today’s tools like Apple’s Siri and the Nest smart thermostat to future systems that automatically power entire city systems, scientists and researchers are trying to map a future where AI can harmoniously intertwine with humans.

Also serving as the president of the Association for the Advancement of Artificial Intelligence, Dietterich sees the current landscape of AI breaking into two categories: tool AI and autonomous AI. Tool AI is already part of our everyday lives: personal assistants like Siri, Microsoft’s Cortana or IBM’s Watson. Autonomous AI is beginning to work its way into our lives in systems that manage high-speed trading for hedge funds or Google’s self-driving cars.

It’s this autonomous AI that gives Dietterich some pause. During a plenary session Thursday, Dietterich talked about the risks that come with exploring these systems that deal with what he calls “high-stakes decision-making,” which will have ramifications for world economies and life itself.

“Our physical intelligence is enabling technology for these applications,” he said. “There’s great potential to save lives and make money.”

Some of the concerns Dietterich raised were of the fundamental flaws in software development and the growing risks of cyberattacks that can manipulate how these systems operate.

“Smart software is first of all, software,” he said. “It has all of the problems that software has. It contains bugs and will probably be susceptible to a cyberattack. If we are using machine learning techniques to build software, it can still have errors. For example, a law enforcement application could make a very tragic decision based on misunderstanding the situation.”

He also explained what such a cyberattack could look like. Dietterich called it a “training-set poisoning” attack, in which an adversary gains access to the data used to train machine learning models and contaminates it so that, once the system is online, the resulting model behaves in ways profitable or beneficial to the attacker.
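The mechanics of such an attack can be sketched with a toy example. Everything below — the scores, labels, and threshold classifier — is hypothetical for illustration, not from Dietterich’s talk: an attacker who can inject mislabeled records into the training data shifts the learned decision boundary so a malicious input is later scored as benign.

```python
# Toy sketch of training-set poisoning (hypothetical data and classifier).
# The "model" is a 1-D threshold learned as the midpoint of the two class means.

def train_threshold(samples):
    """samples: list of (score, label) with label 'benign' or 'malicious'."""
    benign = [s for s, lbl in samples if lbl == "benign"]
    malicious = [s for s, lbl in samples if lbl == "malicious"]
    # Decision boundary: midpoint between the class means.
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

def classify(score, threshold):
    return "malicious" if score >= threshold else "benign"

# Clean training data: benign traffic scores low, malicious scores high.
clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "malicious"), (9.0, "malicious")]

# Poisoned data: the attacker injects high-scoring records labeled "benign",
# dragging the benign mean -- and the learned threshold -- upward.
poisoned = clean + [(7.0, "benign"), (7.5, "benign")]

attack_score = 6.0
print(classify(attack_score, train_threshold(clean)))     # -> malicious
print(classify(attack_score, train_threshold(poisoned)))  # -> benign
```

With clean data the threshold lands at 5.0 and the attack score of 6.0 is flagged; after poisoning, the threshold rises above 6.0 and the same input slips through.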

Not a ‘fact-based fear’

The danger of AI is something leading technology minds have been publicly discussing over the last few months. Bill Gates, Stephen Hawking and Elon Musk have made headlines talking about how AI is one of humanity’s next great dangers.

Experts who spoke Thursday stopped short of sounding the alarm over AI systems taking over the world. Dietterich said some of the systems are already smarter than humans, yet far from able to exercise free will.

“There’s a misconception that artificial intelligence is some sort of threshold phenomenon,” he said. “Today, computers are not as smart as people, but one day, they are going to conspire against us and, boom, we are going to have an intelligence explosion, and computers will be vastly more intelligent. There’s no reason to think that artificial intelligence is a threshold phenomenon. In fact, our tool AI are already much smarter than us. We wouldn’t use them if they weren’t. Think about it, there is no human library that has the vast spoken knowledge of Google.”

Just because Google is smarter than humans, Dietterich said, doesn’t mean it’s close to being sentient.

“I don’t think there’s any threshold that if Google sees another billion Web pages, it will suddenly become autonomous and decide it wants to delete them or something,” he said.

Hadas Kress-Gazit, an associate professor at Cornell University, echoed Dietterich’s sentiments, saying that a tipping point that begets an AI uprising is unlikely.

“It doesn’t seem like an imminent problem right now,” Kress-Gazit said during a panel discussion. Artificial intelligence “is gradual and these things are going to gradually enter our lives. We need to be careful of that fear, and of fear that is not necessarily fact-based.”

Experts talked about the future of AI being in mixed systems, forms of both tool and autonomous AI working in harmony. Kress-Gazit conceived of an example where Google’s self-driving cars kick the responsibility back to the driver in the face of inclement weather.

“Imagine a self-driving car that says, ‘You have to take over because it’s snowing and my sensing just degrades horribly,’” she said.

Dietterich added, “Part of the solution is to say, ‘I’m not just building an AI system, I’m building a human-and-machine system.'”
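The handoff Kress-Gazit describes amounts to a confidence check: the machine drives only while it trusts its own sensing, and otherwise passes control to the human. A hypothetical sketch — the function, threshold, and confidence values are all illustrative assumptions, not any vendor’s actual system:

```python
# Hypothetical "human-and-machine system" handoff policy for illustration.
# The autonomous mode runs only while the system's self-assessed sensing
# confidence stays above a threshold; otherwise the human takes over.

def control_mode(sensor_confidence, threshold=0.8):
    """Return who should drive given the system's self-assessed confidence."""
    if sensor_confidence >= threshold:
        return "autonomous"
    # Degraded sensing (e.g., heavy snow): hand control back to the driver.
    return "human"

print(control_mode(0.95))  # clear conditions -> autonomous
print(control_mode(0.40))  # snow degrades sensing -> human
```

The design choice here mirrors Dietterich’s point: the system is built from the start as a human-and-machine pair, with the machine explicitly deciding when it is no longer competent to act alone.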

But as those systems become more pervasive, the control (or lack thereof) we exert over them will fundamentally shape how much we embrace or resist a future where robots become as commonplace as the smartphones in our pockets.

“I think the danger of AI is not so much in the artificial intelligence itself, but in the autonomy,” Dietterich said. “We should never create fully autonomous systems. By definition, a fully autonomous system is one over which we have no control. I don’t think we ever want to be in that situation.”