Facial recognition: Do you really control how your face is being used?

Terry Collins | Special for USA TODAY


The fight over the use of our faces is far from done.

A raging battle over controversial facial recognition software used by law enforcement and the civil rights of Americans might be heading to a courtroom.

The latest salvo: the American Civil Liberties Union is suing the FBI, the Department of Justice and the Drug Enforcement Administration for the agencies' records on the technology, seeking to determine whether any secret surveillance is in use nationwide. The lawsuit, filed Oct. 31, comes as civil liberties organizations and law enforcement go toe-to-toe over what is private and what isn't.

A facial recognition system uses biometric software to map a person’s facial features from a video or photo, then tries to match that map against databases to verify someone’s identity. Police departments regularly use facial recognition to find potential crime suspects and witnesses by scanning millions of photos; the software is also used for surveillance at public venues such as concerts and schools, and to control access to specific properties.
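The match step described above can be sketched in a few lines of Python. This is a toy illustration only, not any vendor's actual system: the names, embedding values and threshold are invented, and real systems derive embeddings with hundreds of dimensions from a neural network rather than three-number vectors.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the identity whose stored embedding is most similar to the
    probe image's embedding, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical database of enrolled faces (values invented for illustration).
db = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.21]    # an image that closely resembles person_a
print(best_match(probe, db))  # prints "person_a"
```

The threshold is the crux of the accuracy debate: set it too low and strangers are "matched," set it too high and real matches are missed.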

But there’s organized opposition, buoyed after California passed a law temporarily banning police across the state from using facial recognition in body cameras. The move comes even as more than half of Americans polled in a recent Pew Research Center survey said they trust that officers would use the software responsibly.

"Although more and more people are waking up to the fact that facial recognition technology is dangerous, the fight over how to regulate it will be long and hard," said Evan Selinger, a renowned privacy expert and philosophy professor at the Rochester Institute of Technology.

When your face is the key

Facial recognition may increasingly become the new norm.

Apple’s Face ID, the facial recognition system that unlocks the iPhone, is the subject of a $1 billion lawsuit. Social media giant Facebook uses facial recognition to recognize when members or their friends appear in photos. Some U.S. airports use facial recognition scanners in cooperation with the government to improve how travelers enter and exit the U.S., and some major airlines use the technology to help passengers check in, drop off luggage and board flights.

The National Human Genome Research Institute also uses facial recognition to detect a rare disease, known as DiGeorge syndrome, that causes changes in facial appearance.

Currently, there are no federal regulations on the commercial or government use of the technology, even as questions emerge about whether facial recognition violates the First Amendment, which grants freedoms including speech, religion, assembly and the press; the Fourth Amendment, which protects people from unlawful searches and seizures; and the 14th Amendment, which guarantees equal protection of the laws.

Some companies also use facial recognition to pitch products based on our social media profiles, even though many Americans despise the practice, according to the Pew survey. Others may apply the technology to decisions about loans for cars and houses, and even to hiring.

The problems with facial recognition

In some instances, how you look may matter now more than ever before.

"If the American way of life is going to be preserved – free speech, free association and free movement – the best (way) forward is to acknowledge that, at present, facial recognition technology is a uniquely threatening tool of oppression that can't be contained through traditional governance," Selinger said. Along with professor Woodrow Hertzog, a professor of law and computer science at Northeastern University, Selinger has spent years studying the subject.

While Congress has held multiple hearings about whether to ban or regulate facial recognition, law enforcement contends that the software is an invaluable tool that can quickly root out dangerous people.

In one case, authorities in Pennsylvania last year used facial recognition to catch a man accused of sexually assaulting a teenage girl in 2016. Authorities used an updated driver’s license photo of the man, who they said groped the girl at her home shortly after they met online.

According to a report from the Government Accountability Office, more than 640 million facial photos sit in databases that can be searched by the Facial Analysis, Comparison, and Evaluation (FACE) unit, an internal unit of the FBI.

The GAO reports that the images come from state and federal databases including driver’s licenses and visa application photos. However, privacy advocates and several politicians counter that facial recognition violates Americans' rights against government surveillance by scanning people without their permission.

"These federal searches of state DMV records are a massive breach of privacy and trust. Americans don't expect – and certainly don't consent – to be surveilled just because they get a license or ID card. This has to stop," Sen. Patrick Leahy, D-Vt., tweeted in July.

Racial implications of facial recognition

Race is also at play in the criticism of facial recognition. Three years ago, the ACLU revealed police agencies across the country had been monitoring protesters and activists by running photos from Facebook and Twitter through third-party facial recognition software. Police used the software during the protests in Ferguson, Missouri, following the death of Michael Brown in 2014 and during protests in Baltimore following the 2015 death of Freddie Gray to find, and in some cases arrest, protesters with outstanding warrants.

Privacy experts at Georgetown University once estimated that more than 117 million American adults are in facial recognition networks used by law enforcement and that African Americans, in particular, would face more scrutiny, compared to other ethnicities.


Those against facial recognition also believe there isn't enough government oversight, arguing that the software isn't foolproof. They point to studies that show inaccuracies and a higher rate of error overall for persons of color and women.

"I think the lawsuit, and the intense debate, illustrates how harmful this technology is for justice and democracy and how unready for prime-time facial recognition is on our communities," said Rashad Robinson, the executive director for Color of Change, a leading online racial justice organization.

The issue became more magnified after two MIT researchers published a study in 2018 called Gender Shades that showed bias in some of the most widely used facial recognition systems. The study found that systems from Microsoft and IBM were better at identifying the gender of white men’s faces than that of darker-skinned or female faces.

Then there’s another study that found similar issues with Rekognition, a face-scanning technology created by Amazon that faced public scrutiny for providing cloud services to U.S. government agencies that could target or identify immigrants in homeland security investigations.

In its own study of Rekognition last year, the ACLU reported that the software incorrectly matched 28 members of Congress to mugshots, and the false matches were disproportionately people of color. One of those false matches was civil rights icon Rep. John Lewis, D-Ga. The Congressional Black Caucus wrote to Amazon CEO Jeff Bezos expressing its concern, concluding that such errors “could negatively impact” communities of color and others.

“We are troubled by the profound negative unintended consequences this form of artificial intelligence could have for African Americans, undocumented immigrants and protesters,” CBC chair Rep. Cedric Richmond, D-La., said.

In June, hundreds of anonymous Amazon workers wrote to Bezos with similar concerns, saying they refuse to build technology that supports U.S. Immigration and Customs Enforcement and violates human rights.

Microsoft and IBM say they have improved their respective systems. Amazon says it has updated its system as well, though its own researchers say they found no accuracy problems.

Facial recognition laws and lawsuits

There could be some changes on the federal level. Rep. Ayanna Pressley, D-Mass., is co-sponsoring federal legislation with Rep. Yvette Clarke, D-N.Y., prohibiting facial recognition in federally funded public housing. Pressley's fellow member of "The Squad," Rep. Rashida Tlaib, D-Mich., also has introduced legislation banning using federal funds to purchase facial recognition technology.

In its federal lawsuit, filed in Massachusetts, the ACLU said it made public records requests in January asking the FBI, DOJ and DEA for their surveillance records but has yet to receive them. The ACLU argues that even in the "unlikely event" facial recognition were 100 percent accurate, "this dystopian surveillance technology threatens to fundamentally alter our free society."

The three federal agencies have declined to comment because of the lawsuit.

In September, a bipartisan group of eight lawmakers sent a letter to FBI Director Christopher Wray and then-acting Homeland Security Secretary Kevin McAleenan asking how their agencies use facial recognition technology and what safeguards are in place. The agencies have yet to respond.

So far, several U.S. cities have taken or are considering taking action. In June, the New York Assembly passed a bill to ban facial recognition in schools across the state. A month earlier, San Francisco, home to tech giants Twitter, Uber and Salesforce, became the first U.S. city to ban its agencies from using facial recognition systems. Across the bay, Oakland and neighboring Berkeley soon followed, as did the Boston suburb of Somerville, Massachusetts.

Other Massachusetts cities – including Brookline, Springfield, and Cambridge – are considering similar bans.

Who decides: humans versus A.I.

There is a common fear among people that too much authority and decision-making will be done by a machine that has no capability to reason, said Tim Dees, a retired police officer and a columnist for PoliceOne.com who has written extensively about law enforcement and its use of technology.

Every law enforcement technology – whether it's facial recognition, automated license plate readers or automated fingerprint ID systems – has to have a human in the loop, Dees said. The machine does the work of ruling out the database records that do not match the submitted image or data, then flags what remains for the operator, he said.

For example, Dees said, the technology is effectively telling officers: "those other 10,000 data points I scanned are not a match to the person or thing you're looking for, but this one might be."

The human operator, he added, has to put that "match into context," and compare it with other information that's known about the person or something of interest, like age, height, weight and ethnicity, and decide whether to investigate further.
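The division of labor Dees describes, in which the machine winnows and a human decides, might be sketched like this. Everything here is invented for illustration (the record names, scores and 0.9 threshold); the point is only the structure: the software shortlists candidates, and a person makes the final call using context.

```python
def machine_shortlist(scores, threshold=0.9):
    """The machine's job: discard the thousands of non-matching records
    and flag only the candidates scoring above the threshold."""
    flagged = [record for record, score in scores.items() if score >= threshold]
    return sorted(flagged, key=lambda record: -scores[record])

def human_review(candidates, known_facts, record_facts):
    """The human's job: compare each flagged record against what is known
    about the person of interest (age, height, weight and so on) and
    decide which, if any, warrant further investigation."""
    return [c for c in candidates if record_facts.get(c) == known_facts]

# In practice thousands of records are scored; here, only two score highly.
scores = {"record_17": 0.95, "record_42": 0.91, "record_99": 0.30}
facts = {"record_17": {"age": 34}, "record_42": {"age": 60}}

flagged = machine_shortlist(scores)                    # the machine rules out non-matches
confirmed = human_review(flagged, {"age": 34}, facts)  # a person puts the match in context
print(confirmed)  # prints "['record_17']"
```

Note that `machine_shortlist` never outputs a verdict, only candidates; the critics quoted in this article argue that in practice that boundary is not always respected.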

"People fear these systems because they can't argue with them, and they don't want their fate controlled by a machine," Dees said. "Law enforcement tools should always have a human in the loop, and the humans have to have the public welfare at heart.

"You have to identify and hire good people to be cops," he concluded.

Still, facial recognition amounts to "Big Brother surveillance," said Jennifer Lynch, surveillance litigation director for the San Francisco-based Electronic Frontier Foundation, who also has written extensively about the technology.

She firmly believes that Americans should not have to wonder whether they are being tracked.

"I think I find it terrifying," Lynch said. "You don't know who's watching your every move."