Russia started sabotaging the discussion from the very first session. Throughout the morning of Aug. 21, its diplomats at the United Nations in Geneva took the floor, nitpicking language in a document meant to pave the way for an eventual ban on lethal autonomous weapons, also known as killer robots, an emerging category of weapons that would be able to fight on their own and decide who to target and kill.

“They were basically trying to waste time,” says Laura Nolan of the International Committee for Robot Arms Control, who watched with frustration in the hall.

But while Russia vigorously worked to derail progress, it had a quieter partner: China.

“I very much get the impression that they’re working together in some way,” says Nolan. “[The Chinese] are letting the Russians steamroll the process, and they’re happy to hang back.”

China has stayed coy at these discussions, which have taken place at least once a year since 2014. Its delegates contribute just the minimum, and often send ambiguous signals on where they stand. They have called killer robots a “humanitarian concern,” yet have stepped in to water down the text being debated.

Stakes are high for the emerging military power. The robots in question — while not yet humanoid, techno-thriller Terminators — would nevertheless be deadly: Imagine dozens of drones swarming like bees on the attack, or intelligent vehicles patrolling a border with shoot-to-kill orders.


At times, Beijing has given some hope to activists demanding a ban on such weapons. According to the Campaign to Stop Killer Robots, the coalition Nolan’s organization is a part of, China last year joined 28 other states in saying it would support prohibiting fully autonomous weapons — but, Beijing clarified, only their use on the battlefield, not their development or production. That has raised eyebrows among experts skeptical of its intentions.

“They’re simultaneously working on the technology while trying to use international law as a limit against their competitors,” observes Peter Singer, a specialist on 21st century warfare.

Quite a few countries at these meetings might levy the same accusation against the United States. While Washington has not obstructed the talks, it has not appeared keen to move things forward, either.

[Photo: People take part in a demonstration, part of the “Stop Killer Robots” campaign organized by the German NGO Facing Finance to ban what they call killer robots, in front of the Brandenburg Gate in Berlin, March 21, 2019. Wolfgang Kumm—AFP/Getty Images]

Part of the reluctance of major military powers to accept a ban stems from the extent to which artificial intelligence (AI) has reshaped their defense industries. In addition to the U.S. and China, these states include the U.K., Australia, Israel, South Korea, and a few others. But it is China that has become the most formidable challenger to the American superpower in the AI competition.

President Xi Jinping has called for the country to become a world leader in AI by 2030, and has placed military innovation firmly at the center of the program, encouraging the People’s Liberation Army (PLA) to work with startups in the private sector, and with universities.

Chinese AI companies are also making substantial contributions to the effort. Commercial giants such as SenseTime, Megvii, and Yitu sell smart surveillance cameras, voice recognition capabilities, and big data services to the government and for export. Such technology has most notably been used to police the country’s far western region of Xinjiang, where the U.N. estimates up to 1 million Uighurs, an ethnic minority, have been detained in camps and where facial recognition devices have become commonplace.

“These technologies could easily be a key component for autonomous weapons,” says Daan Kayser of PAX, a European peace organization. Once a robot can accurately identify a face or object, only a few extra lines of code would transform it into an automatic killing machine.

In addition to drawing on technology from commercial companies, the PLA has said it plans to develop new types of combat forces, including AI-enabled and unmanned (in other words, autonomous or near-autonomous) combat systems.

The country’s domestic arms industry has obliged. One example is manufacturer Ziyan’s new Blowfish A2 drone, which the company boasts can carry a machine gun, fly independently as a swarm without human operators, and “engage the target autonomously.” On land, Norinco advertises near-autonomous features for its Cavalry, an unmanned ground vehicle armed with a machine gun and rocket launchers. And at sea, Chinese military researchers are building unmanned submarines: the 912 Project, a classified program, aims to develop underwater robots over the next few years.

“Killer robots don’t exist yet, but what we see is a trend towards increasing autonomy,” says Kayser of PAX. “We’re very close to crossing that line, and a lot of the projects that countries are working on — of course they don’t say they’re going to be killer robots. But if we see terms like ‘autonomy in targeting’ — that’s getting very close to something that would be an autonomous weapon.”

All things considered, China’s behavior at the U.N. makes practical sense. Like other states, it is already developing intelligent weapons. The technology is fast outpacing the process at the U.N., where discussions will continue for another two years, if not longer. Without any clear international legal parameters, major militaries are feeling the pressure to invest in autonomous capabilities on the assumption that others are.

Such thinking especially characterizes the discourse around AI and autonomous weapons systems between China and the U.S.

“Essentially you have two sides that are worried about the other gaining an advantage,” says Singer. “That then has the ironic result of them both plowing resources into it, competing against each other, and becoming less secure.”

The other frontier unbound by international law is space. Here, China sees opportunities to leapfrog American technology. It’s also where Beijing believes the U.S. would be most vulnerable in any conflict because of its dependence on information technology such as GPS, which not only helps soldiers and civilians get around but also underpins services like stock exchanges and ATMs.

The country’s Shiyan-7 satellite, able to maneuver and dock with larger space objects, would in theory, experts say, also be able to latch on to and disable enemy space assets. More recently, China has been testing the SJ-17 satellite, which maneuvers with precision at very high altitudes, some 22,000 miles above Earth. Orbiting satellites travel at thousands of miles per hour, giving them the kinetic potency to shatter anything in their path and, in effect, act as kamikazes against another country’s satellites. The U.S. military worries this is what China has in mind in developing satellites that can move so unusually in space.

Advanced space weapons, killer robots, and the U.S. and China preparing for World War III: it may all sound surreal, like spectacular science fiction, but in the staid halls of the U.N., these are exactly the scenarios countries are anticipating in the draft documents bureaucrats pass around. What makes their work more challenging than past international weapons bans is its preemptive nature, and the fact that the technology involved would make enforcement and verification difficult, if not impossible.

Kayser knows time is running out. “An AI arms race would have no winners,” he says. Preventing one from happening would depend on the major powers. He isn’t optimistic.

“They are not taking their responsibility to ensure that international peace and security is maintained. They are actually taking steps that are dangerous and risky for international peace.”
