The first meeting of the UN-backed group of experts intended to begin work on a ban on lethal autonomous weapons was supposed to wrap up at the end of last week. But only days before it was due to start, it was cancelled: funding shortfalls were blamed. A lack of will feels the likelier explanation. Alarmed by the delay, more than 100 of those most closely involved in developing the artificial intelligence on which such weapons would rely, led by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, published an open letter of bleak warning on the day the meeting was due to begin: killer robots amount to a third revolution in warfare, the sequel to gunpowder and nuclear weapons. They are right. The only thing more frightening than a machine that can’t decide for itself whom to kill is one that can.

But the technology is out there, within reach of scientists backed by the billions of dollars poured into the development of AI by the Pentagon’s Defense Advanced Research Projects Agency, or Darpa, and certainly matched by other, less transparent, regimes. Some semi-autonomous weaponry is already deployed, such as the border-guarding system on the ceasefire line between North and South Korea. The process of what critics, such as the campaigning group Article 36, call “bureaucratising” weapons, in which targets are defined according to an explicit hierarchy, is under way.

Scientific discovery and technological advance are never unmade. The history of asking states and their generals to abandon military advantage and behave morally is not futile – there have been international laws of war for more than a century, and in parts of the world for very much longer – but it has had only limited success when applied to specific weapons at particular times. For most of the first half of the 20th century, for example, unrestricted submarine warfare was outlawed by international agreement. The prohibition lasted less than 24 hours after Pearl Harbor.

Yet the morality of merely distancing human involvement from conflict has been anxiously debated at least since the invention of the cannon. After the Austrians sent up balloons laden with time-fused bombs against the forces defending Venice in 1849 (the wind changed, to Austrian disadvantage), balloon bombing was outlawed at the first Hague Peace Conference in 1899. It was one of the earliest such bans, but like the later prohibition on the use of chemical weapons – a rare success for the League of Nations in the Geneva Protocol of 1925 – it was the easier to negotiate because the weapon never had battle-winning potential. In contrast, attempts at a universal prohibition on bombing from planes, which had been used against military targets since the early years of the 20th century, never made progress, despite lip service paid to the idea in the 1930s by both Stanley Baldwin and Hitler. And even where bans were in place, as against chemical weapons, they were flouted: the US used napalm in Korea and Vietnam, and only ratified, in 1975, the beefed-up UN biological weapons convention outlawing production, stockpiling and use. In the early years of the debate on autonomous weapons, the Pentagon argued that technology must be the servant, not the master, of the soldier. Not any more.

Yet the weight of public opinion – even when it is not a majority view – can stiffen governments against the demands of military expediency and strategic choice. CND protests, from the Aldermaston marches of the 1950s to the Greenham Common peace camp in the 1980s, played their part in building a climate that made arms control a diplomatic objective. But for all the progress in controlling nuclear weapons since the first test ban treaty was signed in 1963, no nuclear power has ever given up its capacity to launch a nuclear attack.

But removing human intervention from the decision to kill raises the most profound questions, both of international law and ethics and – as the next issue of the International Institute for Strategic Studies journal Survival argues – wider ones of global peace and strategic stability. Campaigners believe that through the UN they can build a coalition, a platform for a sustained campaign to control autonomous weapons. They are heartened by the backing of the scientists who know most intimately how AI could be developed, and might learn to develop itself. By exploiting the power of stigma, campaigners have already won bans on anti-personnel mines and cluster bombs. The global order is more fragile than at any time since 1945, and a hard place in which to build consensus. But theirs is an argument that must be made.