(Geneva) – An agreement on November 15, 2013, to begin international discussions on fully autonomous robot weapons is the beginning of a process that should conclude in a treaty banning these weapons, Human Rights Watch said today. Governments attending a weapons meeting in Geneva agreed to take up the issue in May 2014; such weapons would select and engage targets without further human intervention.

“The decision to begin international discussions next year is a major leap forward for efforts to ban killer robots pre-emptively,” said Steve Goose, arms director at Human Rights Watch, a co-founder of the Campaign to Stop Killer Robots. “Governments have recognized that fully autonomous weapons raise serious legal and ethical concerns, and that urgent action is needed.”

Governments that are part of the Convention on Conventional Weapons (CCW) agreed to convene in Geneva on May 13-16, 2014, to discuss issues related to “lethal autonomous weapons systems,” also known as fully autonomous weapons or killer robots. These weapons have not yet been developed, but technology is moving rapidly toward increasing autonomy.

The Convention on Conventional Weapons has been ratified by 117 countries, including those known to be advanced in autonomous weaponry: the United States, China, Israel, Russia, South Korea, and the United Kingdom. Adopted in 1980, this framework convention contains five protocols, including Protocol I, prohibiting fragments that are not detectable by X-rays in the human body; Protocol III, prohibiting the use of air-dropped incendiary weapons in populated areas; and Protocol IV, which pre-emptively banned blinding lasers.

“A future Protocol VI prohibiting fully autonomous weapons would be the most important achievement in the life of the Convention on Conventional Weapons,” Goose said. “We urge governments to work swiftly to establish a requirement of meaningful human control over targeting and attack decisions.”

Other forums should urgently address fully autonomous weapons, Human Rights Watch said. The mandate to begin work on “lethal autonomous weapons systems” is broad enough to discuss the range of issues surrounding development, production, and use of fully autonomous weapons, including legal, technical, ethical, societal, humanitarian, and proliferation aspects.

The agreement to begin an international process on these weapons comes a year after Human Rights Watch and the Harvard Law School International Human Rights Clinic released “Losing Humanity: The Case against Killer Robots,” the first report to comprehensively outline concerns about these weapons and call for a pre-emptive ban. It comes seven months after the start of the Campaign to Stop Killer Robots, a global coalition of 45 nongovernmental organizations in 22 countries coordinated by Mary Wareham of Human Rights Watch.

In recent months, fully autonomous weapons have gone from an obscure issue to one that is commanding worldwide attention. Since May, more than 40 countries have spoken publicly on fully autonomous weapons, including Algeria, Argentina, Australia, Austria, Belarus, Belgium, Brazil, Canada, China, Costa Rica, Cuba, Ecuador, Egypt, France, Germany, Ghana, Greece, Holy See, India, Indonesia, Iran, Ireland, Israel, Italy, Japan, Lithuania, Madagascar, Mexico, Morocco, Netherlands, New Zealand, Pakistan, Russia, Sierra Leone, Spain, South Africa, South Korea, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, and the United States. All of the countries that have spoken out have expressed interest in, and concern about, the challenges and dangers posed by fully autonomous weapons.

“The issue of fully autonomous weapons is moving at a breakneck pace in the world of diplomacy and disarmament,” Goose said. “The race to stop killer robots reflects the degree to which the public views such a development with horror and revulsion.”

The agreement to begin international work is likely to accelerate the development of national policies on fully autonomous weapons, Human Rights Watch said. The United States is the only country with a written policy in place. A Defense Department directive issued on November 21, 2012, requires that, for now, a human must be “in-the-loop” when decisions are made about using lethal force, unless department officials waive the policy at a high level.

The US policy directive, while positive, is not a comprehensive or permanent solution to the potential problems posed by fully autonomous systems, Human Rights Watch said. The policy of self-restraint it embraces may also be hard to sustain if other countries begin to deploy fully autonomous weapons systems.