The US, China, and Russia seem to be signalling that the time for AI in national security is now. With their militaries rolling out plans to develop further applications of AI, including AI-driven autonomous weapons, it's time to address something controversial: how AI weapons could actually help cut collateral damage. Still, who would want to face down a killer robot?



"The Russian company that gave the world the Kalashnikov rifle has now unveiled a suicide #drone, officially named the KUB UAV. Story via @HoustonChron https://t.co/9FMmhKoZnW" — Robot&AIWorld (@RobotAndAIWorld), February 23, 2019

AI weapons can help, not hinder

When most of us picture autonomous weapons, we probably think of bastardized Hollywood versions: uncontrollable killing machines like RoboCop or the Terminator. The reality couldn't be further from that depiction. In its current state, AI is almost entirely reliant on us; it responds to data in the way we teach it to. Research suggests that AI is still a long way from the technology we know from sci-fi films and novels. In fact, it's unlikely that AI will be able to make its own decisions, free of human involvement, anytime soon.



According to Dr Larry Lewis, who led the first data-based approach to protecting civilians in conflict, "Country representatives have met every year since 2014 to discuss the future possibility of autonomous systems that could use lethal force. And talk of killer robots aside, several nations have mentioned their interest in using artificial intelligence in weapons to better protect civilians."



That's right: threats of autonomous Kalashnikovs aside, AI could actually be used to cut collateral damage and better protect civilians. An AI-powered weapon could target enemy fighters more efficiently and precisely, and deactivate itself if it does not find the intended target. By reducing the room for human error, AI weapons could lower the risks associated with attacks, thereby helping to protect civilians.
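The "deactivate itself if it does not find the intended target" behavior described above amounts to an abort-by-default decision rule. The sketch below is purely illustrative: the `Detection` class, labels, and confidence threshold are all hypothetical assumptions, not a description of any real weapon-control system.

```python
from dataclasses import dataclass

# Hypothetical sensor reading: a detected object with a classifier
# label and a confidence score. Purely illustrative.
@dataclass
class Detection:
    label: str          # e.g. "combatant", "civilian", "unknown"
    confidence: float   # classifier confidence in [0, 1]

def engagement_decision(detections, threshold=0.95):
    """Return 'abort' unless there is a high-confidence positive
    identification of the intended target and no indication of
    civilian presence. Abort is the default outcome."""
    if any(d.label == "civilian" for d in detections):
        return "abort"  # any civilian indicator forces an abort
    if any(d.label == "combatant" and d.confidence >= threshold
           for d in detections):
        return "engage"
    return "abort"      # no positive identification: deactivate

# With only an uncertain detection, the system stands down.
print(engagement_decision([Detection("unknown", 0.4)]))  # abort
```

The design choice worth noting is that both failure paths (civilian detected, or no confident match) fall through to the same safe outcome, which is the fail-safe property the article attributes to a well-designed autonomous system.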



Lewis continues, “Analyzing over 1,000 real-world incidents in which civilians were killed, I found that humans make mistakes (no surprise there) and that there are specific ways that AI could be used to help avoid them. There were two general kinds of mistakes: either military personnel missed indicators that civilians were present, or civilians were mistaken as combatants and attacked in that belief. Based on these patterns of harm from real world incidents, artificial intelligence could be used to help avert these mistakes.”



More than one way forward for AI

Although the discussion tends to focus on AI weapons, there are three different applications for artificial intelligence in the military:

- Optimization of automated processing: improving signal to increase detection.
- Decision aids: helping people make sense of complicated or large sets of data.
- Autonomy: AI taking action when certain parameters are met.

Those calling for autonomous weapons to be banned generally focus on "killer robots" and not on the alternative applications of AI.



In its 2018 AI strategy, the US Defense Department committed to developing AI applications that would reduce the risk of civilian casualties. Australia is also reportedly planning to explore AI as a way to better identify medical facilities in conflict zones.



We’ve only just begun to explore how AI could cut collateral damage in conflict, but it’s a conversation that desperately needs to be had.



Killer robots



The conversation surrounding AI in the military tends to center only on "killer robots," but the reality is that AI could actually help make conflict zones casualty free (for the most part). Unfortunately, civilians form the largest demographic of those killed during conflict, and that's largely due to human error. Bringing AI systems into war could actually help reduce casualties. The fact is that we've been taught to picture AI as something out of science fiction: a fully autonomous mind of its own, hellbent on the destruction of humanity. In reality, AI is nowhere near being able to make its own decisions without the involvement of people. AI systems in the military will still be overseen by humans; they'll just make us better at avoiding unnecessary civilian casualties. And that's how AI could cut collateral damage.