Sci-fi loves to depict military AIs as malign killer minds or robots. But the truth is more subtle and more terrifying – and it's happening right now

Illustration: Patryk Hardziej

“ONLY the dead have seen the end of war,” the philosopher George Santayana once bleakly observed. Our martial instincts are deep-rooted. Our near relatives, chimpanzees, fight “total war” that sometimes leads to the annihilation of rival groups of males. Archaeological and ethnographic evidence suggests that warfare among our hunter-gatherer ancestors was chronic.

Over the millennia, we have fought these wars according to the same strategic principles, based on our understanding of each other’s minds. But now we have introduced another sort of military mind – one that, even though we program how it thinks, may not end up thinking as we do. We are only just beginning to work through the potential impact of artificial intelligence on human warfare, but all the indications are that the effects will be profound and troubling, in ways that are both unavoidable and unforeseeable.

We aren’t talking here about the dystopian sci-fi trope of malign, humanoid robots with a free rein and a killer instinct, but the far more limited sort of artificial intelligence that already exists. This AI is less a weapon per se, more a decision-making technology. That makes it useful for peaceful pursuits and warfare alike, and thus hard to regulate or ban.

This “connectionist” AI is loosely based on the neural networks of our brains. Networks of artificial neurons are trained to spot patterns in vast amounts of data, gleaning information they can use to optimise a “reward function” representing a specific goal, be that optimising clicks on a Facebook feed, playing a winning game of poker or Go, or indeed winning out on the battlefield.
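The idea of optimising a “reward function” can be made concrete with a toy sketch. The example below is purely illustrative and is not drawn from the article: it assumes a simple multi-armed bandit setting, where an agent learns by trial and error which of a few actions yields the highest average reward – a stripped-down version of the reward-maximising loop that underlies systems like those playing poker or Go.

```python
import random

# Illustrative sketch only: an epsilon-greedy agent learning to
# maximise a reward function over three hypothetical actions.
def train(true_rewards, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each action
    counts = [0] * len(true_rewards)       # times each action was tried
    for _ in range(steps):
        # Explore a random action occasionally; otherwise exploit
        # the action currently believed to be best.
        if rng.random() < epsilon:
            a = rng.randrange(len(true_rewards))
        else:
            a = max(range(len(true_rewards)), key=lambda i: estimates[i])
        # Observe a noisy reward signal for the chosen action.
        r = true_rewards[a] + rng.gauss(0, 0.1)
        counts[a] += 1
        # Incrementally update the running average estimate.
        estimates[a] += (r - estimates[a]) / counts[a]
    return estimates

# The agent gradually learns that the second action (true reward 0.8)
# pays best, without ever being told so directly.
estimates = train([0.2, 0.8, 0.5])
```

The point of the sketch is that the system is given only a scalar score to maximise; everything else about its behaviour emerges from trial and error, which is why the same machinery transfers from games to other optimisation targets.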

In the military arena, …