British military drones bringing death from above could be capable of firing on targets without the need for a human operator.

A new drone being developed by French and British military contractors for use by the RAF is being built with the capability of selecting and engaging targets using artificial intelligence.

While human intervention is required under international law, the Taranis drone could potentially become fully autonomous if the laws change, taking humans out of the loop and leaving the decision-making to the machines.



Developed by BAE Systems, the Taranis drone is named after the Celtic god of thunder and is designed to stealthily approach and attack targets without being detected.

It is being developed under a £1.5bn ($2.16bn) Anglo-French contract which aims to deliver an autonomous drone by 2030, with £120m ($173m) invested in a feasibility phase to develop future unmanned combat systems.

BAE Systems describes Taranis as being designed ‘to demonstrate the UK’s ability to create an unmanned air system which, under the control of a human operator, is capable of undertaking sustained surveillance, marking targets, gathering intelligence, deterring adversaries and carrying out strikes in hostile territory’.



ARE HUMANS STILL NEEDED FOR MACHINES TO KILL?

Under current international law, autonomous weapons systems such as drones still need a human operator to ‘push the button’ and fire on a target, but in future this control could be handed over to the machines.

But the concept of fully autonomous weapons systems is a matter of great debate, with implications for defence, politics and humanitarian issues.

Experts have warned that following the road to autonomy will lead to two main problems. Machines lack subjective decision-making – which humans use to tell friend from foe – and are at risk of being hacked.

These stark warnings confirm concerns raised in a report on the decision-making ability of autonomous weapons systems, including drones.

According to The Times, the defence firm is working on the basis of human-based decision-making for attacks, but Taranis’ programme manager, Clive Morrison, said they were working on the basis that capability for autonomous strikes might be needed in future.

Ultimately, this could mean that if the laws change, the decision to engage targets could be made entirely by the machines.

Earlier this week the drone was put through its paces in a series of test flights at BAE’s test site in Warton, Lancashire.

Defense News reported that analysis of the test flights is ongoing, and that BAE executives remained tight-lipped about the number of missions or flight hours the craft has racked up to date.

A BAE spokesperson did tell reporters: ‘We pushed the boundaries an awful lot…there are a lot of smiling faces around here at the moment.’

The Times reported that at BAE's showcase of the drone on Wednesday the Ministry of Defence banned journalists from taking photos, with observers forced to keep at least 15 metres (49 ft) from the aircraft.

Earlier this year, military experts cautioned that autonomous machines - such as drones or advanced guided missile systems - might not only make the wrong decisions, they could also be used against us by hackers.

Failing to address either of these aspects, they said, could generate an 'almost limitless' potential for disaster.

Explaining the limitations of machine intelligence to recognise targets, security experts have highlighted the need to keep humans in the frame.

They say the inherent confusion of a war zone can make it difficult to pick out those intent on doing harm from those caught in the crossfire.