Financial Times innovation and technology editor John Thornhill on Monday touched upon the relationship between human values and computer algorithms by asking whether artificial intelligence will be shaped more profoundly by the values of the Western world or Communist China, which is pushing hard for a leading position in AI technology.

“Computer algorithms encoded with human values will increasingly determine the jobs we land, the romantic matches we make, the bank loans we receive and the people we kill, intentionally with military drones or accidentally with self-driving cars,” Thornhill noted.

The most cynical AI theorists might add that we need to think about who we’re going to kill deliberately with self-driving cars. Any such system will inevitably include crisis-situation judgments that prioritize the well-being of some humans over others. Does the self-driving car swerve to avoid a child in the middle of the road if that means plowing into a table full of diners outside a cafe?

“How we embed those human values into code will be one of the most important forces shaping our century. Yet no one has agreed what those values should be. Still more unnerving is that this debate now risks becoming entangled in geo-technological rivalry between the U.S. and China,” Thornhill proposed.

The Financial Times editor pointed out that at least 50 different codes of ethics for artificial intelligence have been published to date. The contributions from Chinese corporations and universities are markedly different from those put forth by Western entities:

Codes of principles written in the west tend to focus on fairness, transparency, individual rights, privacy and accountability. But Song Bing, director of the Berggruen Institute China Centre, argued at the seminar that this jars with Chinese sensibilities. “These values are mostly western in origin. That does not mean that there is no resonance in China and the rest of the world. But are they the right set of rules for a global normative framework?” she asked. Ms Song said that Chinese AI ethicists prioritise values that are open, inclusive and adaptive, speak to the totality of humanity and reject zero-sum competition. Summarising this philosophy, she told the seminar that they add up to “great compassion and deep harmony”. Collective good is just as important as individual rights. However, Liu Zhe, a philosopher from Peking University, said it would be wrong to believe that there was any one Chinese value system, mixing as it does elements of Confucianism, Daoism and Buddhism. That range of values would militate against a universal approach to AI within China, let alone elsewhere. Zeng Yi of the Chinese Academy of Sciences in Beijing also questioned the need for a global set of principles. “They should not compete with each other, but complete each other to provide a global landscape for AI,” he said.

Liu Zhe may have forgotten that China is an authoritarian collectivist society, and increasingly a one-man dictatorship under President Xi Jinping. The Communist Party is unlikely to allow conflicting sets of rules to frolic among the philosophical wildflowers of Confucianism, Daoism, and Buddhism. It doesn’t even allow people to embrace those philosophies when they conflict with Communist Party dogma, let alone semi-autonomous machines.

Zeng Yi’s notion of competing ethical systems jousting with each other until the perfect code of conduct for AI appears is interesting in theory, but unrealistic in a highly interconnected world. Chinese AI will interact in a consequential manner with Americans, and American AI will affect the lives of Chinese citizens.

China is not exactly shy about feeding huge amounts of information about both its own nationals and foreigners into AI systems, a point Thornhill illustrated by citing American concerns about Chinese ownership of the gay dating app Grindr. There will be strong demand around the world for at least a minimal set of universal ethical guidelines.

Zeng also commented that Chinese researchers think of humans as “the worst animals in the world,” so inferior human ethical standards should not be applied to artificial intelligence. Thornhill reported that Western participants in the seminar where he spoke found this a “false and dangerous premise.” Teaching intelligent machines to regard humans as the worst beings in all creation? What could go wrong?

Long before we have to worry about self-aware computers treating humanity as a virus to be cured, we must concern ourselves with the contrast between Western privacy concerns and China’s “techno-utilitarian” model, which essentially means prioritizing the “greatest good for the greatest number” over the “moral imperative to protect individual rights.”

Thornhill found it a distressingly short hop for techno-utilitarianism to go from facial recognition scanners on grocery store shopping carts (so convenient for the customers!) to the nightmarish surveillance state China has established in the restless Xinjiang province.

The Chinese believe techno-utilitarianism gives them a huge advantage in technologies such as self-driving cars: they have few privacy scruples about assembling the massive databases that fuel AI development, they tend to deploy new technology quickly and deal with complications later, and they face no political resistance to government support for programs that would be intensely opposed in the U.S. or Europe.

Time will soon tell if Western concerns for privacy and individual rights are as much of an obstacle to AI development as the Chinese think… and whether those concerns are ultimately overridden in a hyper-connected world where techno-utilitarianism is the prevailing ideology.