Technology allowing a pre-programmed robot to shoot to kill, or a tank to fire at a target with no human involvement, might only be years away.

And a new report from the UN warns of the dangers if terrorists were to get their hands on these kinds of 'killer robots'.

The report, which was a result of a week-long meeting on such weapons, held in Geneva earlier this year, said swarms of autonomous weapons would be capable of carrying out attacks.

As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching. Fully autonomous weapons, also known as 'killer robots', are quickly moving from the realm of science fiction, like the plot of Terminator, toward reality.


Experts from dozens of countries gathered in Geneva earlier this year to consider the implications of 'Lethal Autonomous Weapons Systems' (LAWS).

These are weapons that would be capable of killing without a human at the controls.

The goal of the meeting was to begin the process of setting strict guidelines governing the use of killer robots.

This is to make sure they will not end up being used as indiscriminate weapons of war.

But the report said terrorists might not abide by these guidelines.

'Whilst these [robotic killing] systems might be available to technologically advanced countries in an initial phase, it is likely that they will proliferate,' the report said.


They also added that 'there may be no incentive for such actors to abide by international norms and this may further increase global or regional instability.'

The UN report also said, 'swarms of such systems with complementary capabilities may carry out attacks.'

'In these scenarios where swarms of LAWS act as force multipliers, it would be unclear how meaningful human control could be maintained over the use of force, especially as the available time frame for human intervention is likely to be restricted.'

A previous report called for humans to remain in control over all weapons systems at a time of rapid technological advances.

It said requiring humans to remain in control of critical functions during combat, including the selection of targets, saves lives and ensures that fighters comply with international law.

'Machines have long served as instruments of war, but historically humans have directed how they are used,' said Bonnie Docherty, senior arms division researcher at Human Rights Watch, in a statement.

'Now there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.'

Some have argued in favor of robots on the battlefield, saying their use could save lives.

Last year, more than 1,000 technology and robotics experts, including scientist Stephen Hawking, Tesla Motors CEO Elon Musk and Apple co-founder Steve Wozniak, warned that such weapons could be developed within years, not decades.



In an open letter, they argued that if any major military power pushes ahead with development of autonomous weapons, 'a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.'

Professor Stephen Hawking reiterated his concerns just last week, speaking on the Larry King Now show.

'I don't think advances in artificial intelligence will necessarily be benign,' Professor Hawking said.

The physicist has previously been outspoken about his beliefs.

'Once machines reach a critical stage of being able to evolve themselves we cannot predict whether their goals will be the same as ours,' he added.

'Artificial intelligence has the potential to evolve faster than the human race.'

According to the London-based organization Campaign to Stop Killer Robots, the United States, China, Israel, South Korea, Russia, and Britain are moving toward systems that would give machines greater combat autonomy.

Human Rights Watch is a co-founder of the organization.

The UN meeting of experts on the issue, chaired by Germany, continued talks that took place in April 2015 and May 2014.

But Google chairman Eric Schmidt wrote in an opinion piece this week that everyone should 'stop freaking out' about artificial intelligence.

'The history of technology shows that there's often initial skepticism and fear-mongering before it ultimately improves human life,' Mr Schmidt said.

Mr Schmidt and his co-author, Sebastian Thrun, said that while 'doomsday scenarios' deserve 'thoughtful consideration,' the best course of action is to get to work on creating solutions.