One month, 500,000 face scans: How China is using AI to profile a minority

The Chinese government has drawn wide international condemnation for its harsh crackdown on Muslim ethnic minorities in its western region, including holding as many as 1 million of them in detention camps.

Now, documents and interviews show that authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.

The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.

The technology and its use to keep tabs on China’s 11 million Uighurs were described by five people with direct knowledge of the systems, who requested anonymity because they feared retribution. The New York Times also reviewed databases used by the police, government procurement documents and advertising materials distributed by the companies that make the systems.

Chinese authorities already maintain a vast surveillance net, including tracking people’s DNA, in the western region of Xinjiang, which many Uighurs call home. But the scope of the new systems, previously unreported, extends that monitoring into many other corners of the country.

Police are now using facial recognition technology to identify Uighurs in wealthy eastern cities like Hangzhou and Wenzhou and across the coastal province of Fujian, said two of the people. Law enforcement in the central Chinese city of Sanmenxia, along the Yellow River, ran a system that over the course of a month this year screened whether residents were Uighurs 500,000 times.

Police documents show that demand for such capabilities is spreading. Almost two dozen police departments in 16 provinces and regions across China sought such technology beginning in 2018, according to procurement documents. Law enforcement from the central province of Shaanxi, for example, tried to acquire a smart camera system last year that “should support facial recognition to identify Uighur/non-Uighur attributes.”

Some police departments and technology companies described the practice as “minority identification,” though three of the people said that phrase was a euphemism for a tool that sought to identify Uighurs exclusively. Uighurs often look distinct from China’s majority Han population, more closely resembling people from Central Asia. Such differences make it easier for software to single them out.

For decades, democracies have had a near monopoly on cutting-edge technology. Today, a new generation of startups catering to Beijing’s authoritarian needs is beginning to set the tone for emerging technologies like artificial intelligence. Similar tools could automate biases based on skin color and ethnicity elsewhere.

“Take the most risky application of this technology, and chances are good someone is going to try it,” said Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law. “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”

From a technology standpoint, using algorithms to label people based on race or ethnicity has become relatively easy. Companies like IBM advertise software that can sort people into broad groups.

But China has broken new ground by identifying one ethnic group for law enforcement purposes. One Chinese startup, CloudWalk, outlined a sample experience in marketing its own surveillance systems. The technology, it said, could recognize “sensitive groups of people.”

“If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear,” it said on its website, “it immediately sends alarms” to law enforcement.

In practice, the systems are imperfect, two of the people said. Often, their accuracy depends on environmental factors like lighting and the positioning of cameras.

In the United States and Europe, the debate over artificial intelligence has focused on the unconscious biases of those designing the technology. Recent tests showed facial recognition systems made by companies like IBM and Amazon were less accurate at identifying the features of darker-skinned people.

China’s efforts raise starker issues. While facial recognition technology uses aspects like skin tone and face shapes to sort images in photos or videos, it must be told by humans to categorize people based on social definitions of race or ethnicity. Chinese police, with the help of the startups, have done that.

“It’s something that seems shocking coming from the U.S., where there is most likely racism built into our algorithmic decision making, but not in an overt way like this,” said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. “There’s not a system designed to identify someone as African American, for example.”

The Chinese artificial intelligence companies behind the software include Yitu, Megvii, SenseTime and CloudWalk, each valued at more than $1 billion. Another company, Hikvision, which sells cameras and software to process the images, offered a minority recognition function but began phasing it out in 2018, according to one of the people.

Yitu and its rivals have ambitions to expand overseas. Such a push could easily put ethnic profiling software in the hands of other governments, said Jonathan Frankle, an artificial intelligence researcher at the Massachusetts Institute of Technology.

“I don’t think it’s overblown to treat this as an existential threat to democracy,” Frankle said. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”

Paul Mozur is a New York Times writer.