A UN panel met this week to discuss ‘killer robots’ amid concerns that not enough is being done to rein in and monitor advances in artificial intelligence. It agreed to work toward defining, and setting limits on, weapons that kill without human involvement.

The event, held in Geneva, marked the first formal meeting of government experts on lethal autonomous weapons systems, with a total of 86 countries taking part. The meeting was chaired by India’s Ambassador Amandeep Gill, who said the group discussed creating a legally binding code of conduct or a technology review process for future weapons, AP reports.

The group, which is part of the UN’s Convention on Certain Conventional Weapons, focused on defining exactly what killer robots are and how much human interaction is involved. Fully autonomous computer-controlled weapons don’t yet exist, so crafting such a definition presents a challenge.

Is it time to take killer robots seriously?! RT looks at how new technology is bringing autonomous weaponry into real life https://t.co/akktxGFUKH pic.twitter.com/cdsKZssD6W — RT (@RT_com) November 13, 2017

The Campaign to Stop Killer Robots group spoke at the five-day event and warned of the dangers posed by ‘killer robots.’

It presented an unnerving video showing what could happen if autonomous weapons were turned against governments and university students, or fell into the hands of terrorists. The campaign says 22 countries are in favor of a ban on such weapons, and it wants to prevent humans from being removed from targeting and attack decisions.

“Ladies and gentlemen, I have news for you: the robots are not taking over the world. Humans are still in charge,” Gill said at the event.

The US said it was “premature” to define what killer robots are, arguing in written comments that autonomous weapons could “reduce the likelihood of inadvertently striking civilians.”

Prominent figures like SpaceX founder Elon Musk and Stephen Hawking have been vocal in their warnings about the dangers posed by artificial intelligence, chiefly the threat that the machines could become too powerful.