
Campaign to Stop Killer Robots (CSKR) founding member Richard Moyes said "death and destruction" could be a harrowing consequence of AI on the battlefield.

AI is increasingly used by nations across the globe as they attempt to get ahead in the military arms race.

But Moyes warns that if robots are handed full autonomy, they could wipe out the very people they are supposed to protect.

Moyes told Daily Star Online: "It is unlikely militaries would want to use systems that present these kinds of vulnerabilities.

"But as systems get more complex there is always a risk that they will malfunction in unpredictable ways, possibly putting your own forces at risk."

He added: "Our main concern is that autonomous weapons, being allowed to operate independently over wider areas, and longer periods of time, cause death or destruction that a human commander is not able to foresee or predict.

"If we don’t know where weapons will be fired, or exactly what they will be fired against, a human can’t really make legal or moral judgements about the effects that they are creating through the use of such systems."

Moyes added: "We believe there needs to be new international law to ensure humans remain in control of weapons systems.

"This is about protecting civilians and human dignity – but it is also a practical issue for militaries.

"Soldiers don’t want to be sent into battle alongside systems that are unpredictable and might go off the rails."

CSKR has issued a frantic call for fully autonomous weaponry to be banned from the battlefield.

But CSKR claim their efforts have so far fallen on deaf ears.

Moyes added: "There is a danger that greater autonomy in weapons systems means that the users of weapons feel even more separated from the civilian population amongst whom weapons are being used.


"We understand that states want to reduce risks to their own forces, and we understand that humans can make mistakes – but handing decisions on the use of force over to machines would be fundamentally dehumanising.

"War is a messy business, and we shouldn’t kid ourselves that it can be made clean by handing it over to computers.

"There needs to be new international law to stop the push for autonomy in weapons systems from going too far.

"The UK has the technical skills to provide leadership on that – but instead of leading we are dragging our feet."

In September, a proposed ban on killer robots selecting human targets without human orders was blocked by leading nations.

CSKR branded the nations' opposition to the treaty, which covered AI-powered tanks, planes, ships and guns, as "shameful".


But countries including Australia, Israel, Russia, South Korea and the US called for further talks on the "benefits and advantages of autonomous weapons".

Noel Sharkey, a roboticist acting as spokesman for CSKR, said: "The two main options on the table for next year’s work were binding regulations in the form of a political declaration led by Germany and France and negotiations towards a new international law to prohibit the use and development of autonomous weapons systems led by Austria, Brazil and Chile.

"Cuba was particularly stubborn and would not accept any wording that even hinted that there might be any benefits.

"The others conceded in the end with a compromise to take out the word 'risks', although the risks themselves remained.

"It is shameful that a handful of states can prevent the majority from moving towards negotiations that would regulate or prevent the use of these morally reprehensible weapons."