A recent post on the New York Times’s At War blog begins with this hypothetical scenario:

It’s a freezing, snowy day on the border between Estonia and Russia. Soldiers from the two nations are on routine border patrol, each side accompanied by an autonomous weapon system, a tracked robot armed with a machine gun and an optical system that can identify threats, like people or vehicles. As the patrols converge on uneven ground, an Estonian soldier trips and accidentally discharges his assault rifle. The Russian robot records the gunshots and instantaneously determines the appropriate response to what it interprets as an attack. In less than a second, both the Estonian and Russian robots, commanded by algorithms, turn their weapons on the human targets and fire. When the shooting stops, a dozen dead or injured soldiers lie scattered around their companion machines, leaving both nations to sift through the wreckage — or blame the other side for the attack.

Although the Times post is largely about lethal autonomous weapons systems and where things stand on their development and use during ongoing armed conflicts, this introductory paragraph actually tees up a different question, one that involves a state's initial resort to force. Will we soon be in a place where autonomous systems lead states into armed conflict in the first place? How will AI change the way states make decisions about when to resort to force under international law? Will the use of AI improve or worsen those decisions? What should states take into account when determining how to use AI to conduct their jus ad bellum analyses?

Noam Lubell, Daragh Murray, and I have just published an article that begins to consider these questions. Much has been written about the development of AI and machine learning in other areas of the law, including criminal justice, self-driving cars, and administrative decision-making, but this is, we think, the first project to consider the role of AI in the resort to force.

Here’s the abstract of the article: