NT: Let’s talk theoretically about Maven and something you said a minute ago, which is that Maven took humans out of the business of just scanning endless video and moved them to a higher-level task. But you can imagine AI doing that higher-level task, and the one above that, and even identifying targets or carrying out a mission. Where do you stop using AI and say that humans have to be involved?

WR: Our policy right now is that lethal decisions are always retained by people, and I don't see that policy changing anytime soon. I think the only thing that would reopen the discussion is if there were simply no way to compete without considering other options. But I don't really see a future where humans are going to be out of the loop on lethal decisions. We're just going to be increasingly out of the loop on brute-force tasks.

"We're not only in a competition with other nations, we're in a period where technology changes at a rate it never has before. And the development system that we currently use in the Pentagon is simply one of the Cold War." Will Roper

You could imagine that, right now, AI does a pretty good job of distinguishing houses and cars of different types, and that in the future you might go from recognizing a car, to recognizing the type of car, to recognizing a specific car. But I don't think this nation is going to want to take lethal decisions out of the hands of people. You can't ask AI why it made a choice. It made the choice because that's what its training data said, and that's not a sufficient answer for most people. We want to be able to judge the judgment of someone making a decision, and AI doesn't give us that ability.

NT: Let me speculate for a second more, though. We're not that far off from a time when AI is definitively better at image recognition than a human. And I can totally see the argument for why you'd always want a human to make an offensive lethal decision. But what if you flip it around? What if it's a missile defense system? Would you still want a human in the loop to make the decision to shoot down an incoming missile, even if we knew that AI would be better and quicker at recognizing it?

WR: I think those things will fall into a different category. I don’t see it as being an issue if you have to hand a decision over to a weapons system when there’s an incoming ballistic missile or cruise missile. But I think when we're making a decision about human targets, there is going to be a desire to hold the deciding entity accountable for what they've done. And in order to even think about having AI move up into that level of judgment, we're going to need a different kind of AI because we'll need to understand not just what it recommends but why it recommends it. Step one for the Air Force is, we've got to learn how to use the AI that exists today smartly. And until we start pushing it into programs and learning what's easy and hard, we're keeping it in the world of speculation. I've joked with the Air Force that you can't spell Air Force without ‘AI.’

NT: Explainability is becoming a pretty hot debate in AI, and there are a bunch of people, very smart people, who say, you know, explainability is an unfair standard. If you ask a human why they made a decision, they can give you a story, but it might not really be why they made the decision. And so if we demand explainability of our AI algorithms, A) we'll be much slower, and B) we might be holding them to a standard beyond even the one we set for humans.

WR: No, I agree. I'm glad the researchers are working on explainability, but there's no guarantee it will happen. So maybe rather than explainable AI or auditable AI, it's AI that can do research and fill in its training set when it makes mistakes, an AI that continues to learn and research. We've got to get something that goes a level deeper than simply giving us the best pattern match, and I'm glad that our research labs are working on it. I'm glad that commercial industry is working on it.
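
[Editor's illustration] Roper's idea of an AI that "fills in its training set when it makes mistakes" resembles what researchers call human-in-the-loop or continual learning. The sketch below is purely illustrative; the classifier choice (scikit-learn's SGDClassifier), the class and method names, and the workflow are assumptions for explanation, not anything the Air Force has described.

    # Minimal sketch of a classifier that adds human-corrected mistakes
    # back into its training set and updates incrementally.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    class SelfCorrectingClassifier:
        def __init__(self, classes):
            # Incremental linear model; classes must be declared up front.
            self.model = SGDClassifier(loss="log_loss")
            self.classes = np.array(classes)
            self.train_X, self.train_y = [], []

        def fit_initial(self, X, y):
            # Train on the starting data set.
            self.train_X, self.train_y = list(X), list(y)
            self.model.partial_fit(X, y, classes=self.classes)

        def predict(self, x):
            return self.model.predict([x])[0]

        def flag_mistake(self, x, correct_label):
            # A human reviewer corrects a bad call: the example joins the
            # training set and the model updates on it immediately.
            self.train_X.append(x)
            self.train_y.append(correct_label)
            self.model.partial_fit([x], [correct_label])

In this kind of loop, every reviewer correction becomes new training data, so the system's competence grows with use, which is one way to narrow the gap Roper describes without first solving explainability.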