The world’s most powerful autocratic states have both the capability and the intent to use AI to maintain dominance at home and beat enemies beyond, writes Peter Apps.

Last October, a group of Chinese teenagers reported to the Beijing Institute of Technology, one of the country’s premier military research establishments.

Chinese authorities hope the recruits, selected from more than 5,000 applicants, will design a new generation of artificially intelligent weapons systems that could range from microscopic robots to computer worms, submarines, drones, and tanks.

The programme is a potent reminder of what could be the defining arms race of the century, as greater computing power and self-learning programs create new avenues for war and statecraft.

It is an area in which technology may now be outstripping strategic, ethical, and policy thinking — but also where the battle for raw human talent may be just as important as getting the computer hardware, software, and programming right.

Consultancy PwC estimates that by 2030 artificial intelligence products and systems will contribute up to $15.7 trillion (€13.7tn) to the global economy, with China and the US likely the two leading nations.

But it is the potential military consequences that have governments most worried, fearful of falling behind — but also nervous that untested technology could bring new dangers.

In the US, Pentagon chiefs have asked the Defence Innovation Board — a collection of senior Silicon Valley figures who provide the US military with tech advice — to come up with a set of ethical principles for the use of artificial intelligence in war.

Last month, France and Canada announced they are setting up an international panel to discuss broadly similar questions.

So far, Western states have stuck to the belief that decisions of life and death in conflict should always be made by humans, with computers and algorithms simply supporting those decisions.

Other nations — particularly Russia and China — are flirting with a different path.

Russia — which last year announced it was doubling AI investment — said this month it would publish a new national AI strategy “roadmap” by mid-2019.

Russian officials say they see AI as a key to dominating cyberspace and information operations, with suspected Russian online “troll farms” thought to already be using automated social media feeds to push disinformation.

Beijing is seen as even further ahead in developing AI, to the extent that some experts believe it may already be beating the US.

Experts say achieving mastery in AI comes down to having sufficient computing power, enough data to learn from, and the human talent to make those systems work.

As the world’s most powerful autocratic states, Russia and China have both the capability and the intent to use AI to maintain government dominance at home and to beat enemies beyond.

Already, Beijing is using mass automated surveillance — including facial recognition software — to crack down on dissent, particularly among the Uighur Muslim minority in its north-west.

Along with Russia, China has far fewer scruples and controls than Western states when it comes to monitoring its citizens’ communications. Such systems will likely become more powerful as technology improves.

Traditionally, Western democracies — particularly America — have proved more adept than dictatorships at tapping new technology and innovation.

On AI, however, Washington’s efforts to build links between Silicon Valley and the military have been far from trouble-free. In June, pressure from Google employees prompted the firm to announce it would not renew its contract with the Pentagon.

Many tech researchers are reluctant to work on defence projects, nervous they will end up building out-of-control robots that kill.

Even so, the US and its allies are still researching and building their own autonomous weapons.

In October, Microsoft quietly announced it intended to sell the Pentagon whatever advanced AI systems it needed to “build a strong defence”.

US Air Force leaders say the service’s highly classified future long-range strike aircraft, designed to replace the B-2 stealth bomber, will be able to operate both with and without a crew.

Western militaries are also ploughing growing resources into unmanned trucks and other supply vehicles, hoping to perform many more “dirty, dull, and dangerous” battlefield tasks without risking human personnel.

These dynamics will become much more complex with the growing use of drone swarms, in which groups of unmanned vehicles coordinate and control themselves.

When it comes to drones fighting drones, Western policymakers are generally happy to let unmanned systems make their own decisions.

But when it comes to killing, defence department policy requires that a human must remain “in the loop”.

That may become ever harder to manage, however — particularly if an enemy’s automated systems are making such judgments far faster than any human could.

By the early 2020s, Chinese scientists expect to be operating large unmanned and potentially armed submarines in the world’s oceans, aimed at enemy forces in disputed areas such as the South China Sea.

Such vessels could travel vast distances and remain concealed for long periods of time — China says a prototype drone “underwater glider” completed a record 141-day, 3,619km voyage last month.

For now, Chinese researchers say any decision for such vessels to conduct attacks would still be made by human commanders — but that may not always remain the case.

In January last year, the Pentagon reported Russia was building and looking to operate its own large nuclear-powered unmanned submarines, likely capable of carrying nuclear weapons.

Both Moscow and Beijing are also prioritising unmanned robot tanks, with Russia testing its latest version on the ground in Syria.

Such systems could dramatically complicate battlefield targeting decisions for Western commanders in any conflict, making it unclear whether individual vehicles or vessels contained human beings.

Mistakes could start or dramatically escalate wars.

In recruiting the 31 teenagers for the Beijing Institute of Technology’s programme, those managing selection reportedly looked for a “willingness to fight”.

With technology so untested — and so potentially destructive — that may prove a very dangerous trait to prioritise.

Peter Apps is Reuters global affairs columnist