Addressing the council, however, he said, “It is clear that very strong forces, including technology and budgets, are pushing in the opposite direction.”

His initiative comes as nongovernmental organizations and human rights groups are campaigning for a pre-emptive ban on fully autonomous weapons before they are deployed, much as blinding laser weapons were banned. Discussions are under way with a number of governments that may be willing to take the lead in drafting a treaty to outlaw the weapons, Stephen Goose, director of Human Rights Watch’s arms division, told journalists in Geneva this week.

Supporters of the robots say they offer a number of advantages: they process information faster than humans, and they are not subject to fear, panic, a desire for revenge or other emotions that can cloud human judgment. Robots can also gather more accurate battlefield data, helping to target fire more precisely and, in the process, potentially saving lives.

A report by Human Rights Watch and the Harvard Law School cites a United States Air Force assessment that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes.”

Human rights groups dispute the ability of robots to meet the requirements of international law, including the ability to distinguish between civilians and combatants or to assess proportionality — whether the likely harm to civilians during a military action exceeds the military advantage gained by it. Moreover, if a killer robot breaches international law and causes civilian casualties, it is unclear who could be held responsible or punished.

“It is possible to halt the slide toward full autonomy in weaponry before moral and legal boundaries are crossed,” Mr. Goose said in a statement this week, “but only if we start to draw the line now.”