During his eight years in office, President Barack Obama has seen hackers grow into a threat no president has faced before. US intelligence and law enforcement agencies have responded to everything from a Chinese hack of Google in 2009 to Russian digital meddling in this election. He’s learned, as a result, to think a few moves ahead. And that includes preparing for possibilities that others might consider science fiction—like an artificial intelligence trained through machine learning and tasked with stealing US nuclear codes.

In an exclusive interview with MIT Media Lab director Joi Ito and WIRED Editor-in-Chief Scott Dadich, Obama discusses the possibilities—and possible dangers—of AI. In an era when hackers can steal the fingerprints of 5.6 million federal employees or pull off a modern version of Watergate, he wonders whether sophisticated adversaries might use AI to infiltrate the government’s most sensitive systems.

“There could be an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles,’” Obama says. “If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems.”

This notion of an artificially intelligent hacker or hacking tool is more than prognostication. The Pentagon’s Defense Advanced Research Projects Agency is developing AI software for both offense and defense. During its Cyber Grand Challenge competition at the Defcon hacker conference this summer, the agency pitted AI systems against each other to find, exploit, and patch software security vulnerabilities in real time.

Obama argues the potential for an AI attack doesn’t represent a cyber doomsday. But it does require strengthening America’s defenses against all hackers, human and bot. “My directive to my national security team is, don’t worry as much yet about machines taking over the world,” he says. “Worry about the capacity of either nonstate actors or hostile actors to penetrate systems. In that sense it is not conceptually different than a lot of the cybersecurity work we’re doing. It just means that we’re gonna have to be better, because those who deploy these systems are going to be a lot better now.”

The sort of machine learning-based AI that Obama warns about has already been deployed in the cybersecurity world—albeit for defense, not attack. Developers have long used machine learning—the “self-teaching” process of training a piece of software with example data—to hone spam filters and spot malware. A truly automated offensive hacking AI that represents a serious threat to robustly defended systems, on the other hand, probably remains years away. But researchers have outlined how someone might use machine learning to attack those machine learning-based defenses, and the Cyber Grand Challenge showed for the first time that computers can use AI to hunt down hackable bugs in adversaries’ code and use them to compromise another machine.
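That “self-teaching” process—training software on example data rather than hand-writing its rules—can be illustrated with a toy spam filter. The sketch below is a minimal naive Bayes classifier; the example messages and labels are invented for illustration and have nothing to do with any real filtering product:

```python
# Toy illustration of training software with example data:
# a minimal naive Bayes spam filter. All example messages are made up.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs, label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # Log prior plus log likelihoods, with add-one smoothing
        # so unseen words don't zero out a class.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

examples = [
    ("win free money now", "spam"),
    ("free prize click now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(examples)
print(classify("free money prize", counts, totals))    # -> spam
print(classify("team meeting agenda", counts, totals)) # -> ham
```

The point of the sketch is that no one wrote a rule saying “prize means spam”: the program inferred it from labeled examples, which is the same basic loop—at vastly larger scale—behind the malware detectors and spam filters mentioned above.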

In his WIRED interview, the president argues for thinking of cybersecurity not in terms of perimeter defenses against hackers, but as an immune system that eliminates threats from within. The cybersecurity industry increasingly prescribes the same focus on response over prevention, since keeping sophisticated hackers out of a network altogether has become all but impossible. “Traditionally, when we think about security and protecting ourselves, we think in terms of armor and walls,” he says. “Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there.”

Obama compares that approach with another threat to national security: a pandemic. “You can’t build walls in order to prevent the next airborne lethal flu from landing on our shores,” he says. “Instead, what we need to be able to do is set up systems…to make vaccines a lot smarter.”

But all of that, Obama is careful to note, applies to the threat of a specialized AI that functions as a highly evolved tool. It doesn’t cover the more futuristic case of a generalized AI that represents a fully autonomous mind with its own will and motives—possibly ones that don’t align with humanity’s. For that Skynet-type threat, Obama has a less subtle prescription: “You just have to have someone close to the power cord,” he jokes. “Right when you see it about to happen, you gotta yank that electricity out of the wall, man.”