"Artificial intelligence is the future not only of Russia but of all of mankind ... Whoever becomes the leader in this sphere will become the ruler of the world."

Russian President Vladimir Putin made this statement to a group of students two weeks ago. Shortly thereafter, Tesla’s Elon Musk, who has worried publicly about the hazards of artificial intelligence (AI) for years now, posted an ominous tweet in response to Putin’s remarks.

“China, Russia, soon all countries w/ strong computer science,” he wrote. “Competition for AI superiority at national level most likely cause of WW3 in my opinion.”

It’s tempting to dismiss Musk’s tweet as alarmist, but is it? The comparative advantages that the first state to develop AI would have over others — militarily, economically, and technologically — are almost too vast to predict. So the stakes are as high as they get.

To dig a little deeper into Musk’s concerns, I reached out to Peter W. Singer, a strategist and senior fellow at New America. The author of Wired for War: The Robotics Revolution and Conflict in the 21st Century, Singer is a leading expert on 21st-century security issues and has even written a novel about what the next world war might look like.

I asked him if he thinks the race for AI superiority might lead to actual conflict. He’s more measured than Musk, but he concedes that the arms race is underway and the truth is that we simply don’t know what’s coming. “I think of AI technology as comparable to the printing press,” he told me. “It’s going to change the world, but we can’t possibly understand how and when.”

Our lightly edited conversation follows.

Sean Illing

I take it you saw Putin’s and Musk’s remarks about AI and the future. Your reaction?

Peter W. Singer

Elon Musk versus Vladimir Putin is not the greatest of ways to frame the debate. It’s taking it to the extremes. Musk has been very outspoken about his fears over robotics and AI. He’s always seen this as an existential threat. As for Putin, well, I don’t turn to former KGB agents for analysis of technology.

The fact that it's made such news points to a bigger issue, however. When we're talking about this stuff, so many of our perceptions are shaped by extreme visions, most of them in the realm of science fiction. But the problem with that is that we're always focusing on the long-term possibilities instead of the very real and very tough challenges that are in front of us right now.

Sean Illing

Which are?

Peter W. Singer

Let me put it this way: We don't yet know whether artificial intelligence may become an existential threat. But as we speak, AI is creating massive dilemmas for law, for policy, for business, and it hits everything from questions of your personal privacy to investments of government money to how we think about liability. I don't want to pour cold water on everything, and I'm not dismissing Musk's concerns, but focusing on the extremes diverts attention away from these very real and present dilemmas.

Sean Illing

I take all those points, but I do want to talk about the geopolitical implications of this race for AI superiority. When Putin says the leader in AI will become the leader in the world, I think he’s mostly right, and I think a lot of people see it the same way. Do you doubt what Putin said?

Peter W. Singer

Yes, I think there are reasons to doubt that because there are questions about how that AI might be converted into actual power. Ultimately, a lot hinges on how such a machine could be used and what resources it would have at its disposal. We just don’t know.

There are arguments that this technology is something that only governments could build and have, and only the most powerful governments in the world. Others argue that major breakthroughs are likely to happen in private companies, in Silicon Valley, and that this technology will be democratized. But we simply don’t know yet.

I think of AI technology as comparable to the printing press. It’s going to change the world, but we can’t possibly understand how and when. And it’s hard to predict who it will empower and who the winners and losers will be.

Sean Illing

But when we’re talking about the potential for conflict, we’re talking about perceptions and fears among states. And I suspect most leaders see this the way Putin does.

Peter W. Singer

I will say this: The nations of the world are operating on the assumption that this is not just a game-changing technology, but whoever gets it will have an advantage, and that's why they have to be there first, or if not, be rapidly there second. And so you are seeing massive investment on a national level, particularly by the US and China.

Sean Illing

Isn’t that belief alone enough to create a high-stakes, winner-take-all arms race among the major powers of the world?

Peter W. Singer

Absolutely. That's the definition of an arms race. It's not just that you have two sides competing to build and buy new weapons, but that the very competition itself leaves each feeling less secure. Arms races are not just about the buildup in search of security; they're about the irony that you end up feeling less secure because of the competition.

Sean Illing

But we can’t stop this arms race, can we? The drive to develop this technology, to get there first, is overwhelming.

Peter W. Singer

I think you’re exactly right. People like Musk say we need to preempt this, that we have to stop it before it’s too late. But it doesn’t work like that. There are three forces driving this arms race: geopolitics, science, and capitalism. And there is no stopping that. This technological pursuit is already underway and it will continue whether we want it to or not. So any conversation about this problem has to begin there.

Sean Illing

When we think of arms races, most people think of nuclear weapons. But the race for AI strikes me as far greater and potentially more impactful. Everyone understood what nukes were and what the benefits of having them would be. With AI, we can’t even imagine how significant the impact would be. Presumably, a truly super-intelligent AI could be used to wage war, to transform economies, to eliminate entire industrial sectors, to manipulate financial markets, and god knows what else. So we know this technology will be transformative but we can’t fully predict just how transformative it will be. Does that not raise the stakes even higher?

Peter W. Singer

Absolutely. But I'd riff off that a little bit. So there is the winner-take-all scenario, as you paint it, that drives forward the notion of a race. But I think history suggests that it’s the transition periods that are most dangerous. So when we first develop these technologies, we don’t fully understand them, and so there’s this use-it-or-lose-it mentality. People want to strike first, to capitalize on their perceived advantages — that’s when we’re in real danger.

You see this, for example, with the atomic bomb and the Cold War. We had actual generals talking and thinking like Dr. Strangelove. Gen. MacArthur actually suggested tossing a couple of atomic bombs onto China because he thought that would solve the Korean War.

So these transition periods are scary, and I think we have the same thing here when we talk about this technology. AI might eventually spur a revolution in technology and economics and politics — and that means there will be winners and losers. As with the Industrial Revolution, we won't be able to predict the implications on the front end.

All of that is to say, we better be careful about how we develop and use these technologies because there will be sweeping consequences. But, again, I’m worried we won’t do this because of the winner-take-all logic of arms races.

Sean Illing

I’m listening to this and I’m wondering why you think Putin and Musk are being extremist in their statements. What they’ve said seems perfectly reasonable to me.

Peter W. Singer

Let me reframe this again. I think we have to focus on realistic takes both on the technology itself and on what we can do about it. So, for example, if the fear is of an arms race, what are the types of actions that make arms races scarier, deadlier, more likely to break out into war? How do we deal with these problems of miscalculation and misjudgment?

Again, we have to be realistic about what can be done — right here, right now. You see this currently in the debate over killer robotics like drones. I don't think we're going to prevent the use of robotics and armed robotics in war. The question is, how do we create laws around them? Are there spaces where they won't be allowed versus where they will? Can we settle on laws that say robotics can be used in spaces where there's a lesser chance of civilian casualties?

These are the questions that interest me. I’m less interested in sci-fi scenarios or doomsday proclamations. We have an emerging problem and we’ve got to be honest and practical in our response to it.

Sean Illing

So you accept that every actor involved in this race for AI is going to pursue it relentlessly until someone wins?

Peter W. Singer

Yes, but we also have to understand that every actor faces shared risks and potential great harm. So we need to set up the structures, the understandings, the norms, maybe even the rules and laws, that will help us to navigate that race in a way that's less dangerous.

Ultimately, this is a problem that can only be managed if states and private companies work together. Everyone has a stake in this, and everyone has to be involved if we’re going to survive this transition into a world shaped increasingly by technology.