Handing over more of the decision-making to machines may ease the human burden in warfare, but those in charge should be wary of being lulled into a false sense of security, warn experts.

They caution that autonomous machines - such as drones or advanced guided missile systems - might not only make the wrong decisions, they could also be turned against us by hackers.

Failing to address either of these aspects, they said, could generate an 'almost limitless' potential for disaster.



The stark warnings follow in the wake of a recent report from the Center for a New American Security (CNAS) that raised concerns about the decision-making ability of autonomous weapons systems, including drones.

The Autonomous Weapons and Operational Risk report, published earlier this week, was produced by former US Department of Defence official Paul Scharre.

Now leading CNAS's programme on future warfare, Scharre questioned whether the automation of weapons will lead to 'robutopia or robocalypse.'

The report highlights that while no state has officially confirmed plans to build fully autonomous weapons, 'few countries have renounced them either.'


Hollywood has long laid out the rocky road to autonomous weapons, with malfunctioning droids and robots a sci-fi staple.

A classic example comes from the 1987 movie Robocop, in which a security droid fails to distinguish targets from civilians, with fatal - albeit fictional - results.


Dr Sarah Kreps, an Associate Professor in Cornell University's Department of Government and an expert in drone warfare, cautions that following the road to autonomous weapons will lead to two main problems: a lack of subjective decision-making - which humans use to tell friend from foe - and hacking.

Explaining the limitations of machine intelligence in recognising targets, the security expert highlights the need to keep humans in the frame.

The inherent confusion of a war zone can make it difficult to pick out those intent on doing harm from those caught in the crossfire.

'You can't put subjective decisions about who's a combatant or civilian into an algorithm. This has implications for targeting decisions,' said Dr Kreps.

'A human, or rather many humans, should be in the loop to analyse individuals' behaviours to see whether they are directly and actively involved in combat.

'Enemy status is often a subjective judgment and something that cannot easily be programmed into an autonomous weapon. We should not be lulled into thinking that technology can make these decisions easier.'


But beyond the lack of human judgement, autonomy adds the threat of cyber-attacks. If the security systems safeguarding the autonomous technology can be overridden by hackers, it could cause havoc on the battlefield.

'There are benign cases of interruptions, like a computer bug, but also less benign cases like hacking,' she explained.

'If groups can hack into the Pentagon's system of security clearances, they can almost certainly hack into the system that controls autonomous weapons, in which case the potential for disaster is almost limitless.'

The CNAS report states that, while difficult for security reasons, there is a need for greater transparency from countries on how they are likely to approach autonomous weapons.

Scharre wrote: 'Few states have issued clear national policies on the use of autonomy in weapons. Given the potential for dangerous interactions between autonomous systems, a common set of international expectations is critical.'