CPD using controversial facial recognition program that scans billions of photos from Facebook, other sites


The Chicago Police Department is using a controversial facial recognition tool that allows investigators to search an image of an unknown suspect against a database of three billion photos lifted from websites like Facebook, YouTube and Twitter — a technology privacy advocates say is so ripe for abuse that cops should stop using it immediately.

Clearview AI, the Manhattan-based firm that developed the software, has come under fire after the New York Times published a bombshell report detailing the privacy concerns its technology has brought to the fore. A lawsuit was filed in federal court in Chicago earlier this month seeking to halt the company’s data collection.

“It’s frightening,” Chicago attorney Scott Drury, who filed the lawsuit, said of CPD’s decision to team up with Clearview AI.

But Chicago police spokesman Anthony Guglielmi said facial recognition software like Clearview adds “jet fuel” to the department’s ability to identify and locate suspects.

“Our obligation is to find those individuals that hurt other people and bring them to justice,” Guglielmi said. “And we want to be able to use every tool available to be able to perform that function, but we want to be able to do so responsibly.”

Two-year contract

Chicago police entered into a two-year, $49,875 contract on Jan. 1 with the Vernon Hills-based tech firm CDW Government to use Clearview’s technology, according to a statement issued by the department.

For at least two months before that, select officials at the CPD’s Crime Prevention and Information Center used the software on a trial basis after another law enforcement agency recommended the technology, police said.

CPD spokesman Howard Ludwig declined to provide examples of when Clearview has been used so far.

“Any information about ongoing investigations can only come from cases that have been thoroughly adjudicated,” said Ludwig. “We haven’t had Clearview long enough for any of the cases to have gone through the courts.”


Clearview’s index of photos was built with images scraped from millions of websites. Law enforcement officials who license the technology can upload a picture to the Clearview app, which then matches it with other photos from the database and offers links to the sites where the images were posted, along with other information.

Hoan Ton-That, the creator of the Clearview technology, described the software as “an after-the-fact research tool for law enforcement, not a surveillance system or a consumer application.”

“Our software links to publicly available web pages, not any private data,” Ton-That told the Sun-Times last week.

Thirty CPD officials working at CPIC now have access to the technology, according to police. All have obtained “top applicable national security clearance and have clear privacy guidelines to ensure individual privacy, civil rights, civil liberties and other interests are protected,” police said.

Police said officials with access to Clearview can only use the tool in conjunction with an active criminal investigation, adding that it’s not used for live surveillance or keeping tabs on protesters, for example.

‘They should stop using it immediately’

But despite the assurances, Karen Sheley, an attorney for the American Civil Liberties Union of Illinois, said it was short-sighted for the city to sign a contract to use the software without any public input.

“It’s an incredible absence of judgment to sign on to this kind of technology when it’s so new and without public vetting,” said Sheley. “We think they should stop using it immediately.”

An official in at least one state has already moved to limit the use of Clearview software, which is reportedly used by hundreds of law enforcement agencies. New Jersey Attorney General Gurbir Grewal told prosecutors that cops should stop using it in his state.

The lawsuit, which seeks class-action status, was filed against Clearview last week in the U.S. District Court for the Northern District of Illinois. The suit seeks an injunction that would result in the “disabling and deletion of Clearview’s database,” as well as damages for the plaintiff, David Mutnick, an Illinois resident. The first court date is set for March 23.


It alleges the firm violated the Illinois Biometric Information Privacy Act, a law that protects Illinoisans from having biometric information collected without their consent.

“What Defendant Clearview’s technology really offers then is a massive surveillance state with files on almost every citizen, despite the presumption of innocence,” Drury, of the Chicago firm Loevy & Loevy, wrote in the suit. “Indeed, one of Defendant Clearview’s financial backers has conceded that Clearview may be laying the groundwork for a ‘dystopian future.’ Anyone utilizing the technology could determine the identities of people as they walked down the street, attended a political rally or enjoyed time in public with their families.”

The suit notes that Clearview has “sought ways to implant its technology in wearable glasses that private individuals could use,” although Ton-That told the New York Times the company didn’t plan to release it.

The lawsuit also charges Clearview with collecting data without people’s knowledge or consent, searching and collecting images without probable cause and denying people the right to due process by violating the terms of websites — many of which forbid image scraping — where photos in the database were originally posted. (Twitter has called on the company to stop taking photos from its site.)

Ton-That did not respond to a request to comment on the lawsuit.

CPD has been using facial recognition for years

CPD’s partnership with Clearview is an extension of the city’s efforts to use facial recognition technology to fight crime, which date back more than a decade.

In 2009, the city was awarded a $13.8 million grant from the U.S. Department of Homeland Security to finance the CTA’s “Regional Transit Terrorism and Response System.” An application form detailed the department’s intention to utilize facial recognition technology. It’s unclear if any terror-related arrests were made using the software.

After creating the digital infrastructure to run facial recognition, the CPD used $1.3 million of grant money in 2013 to contract South Carolina-based DataWorks Plus as part of an overarching deal with Motorola. Since the initial facial recognition contract expired at the end of 2015, the CPD has renewed it on a yearly basis at a total cost of more than $400,000.

DataWorks didn’t respond to a request for comment.

Before adding Clearview to the mix, investigators could use the application to compare photos of suspects against the CPD’s database of roughly four million mugshots. Those searching DataWorks can pull photos from roughly 35,000 surveillance cameras across the city, including those maintained by the Chicago Transit Authority, the Chicago Housing Authority, Chicago Public Schools and other city agencies.

Documents show the system CPD purchased also could be used to do real-time surveillance, something that has raised fears of a Big Brother-like surveillance state that could track and potentially identify people regardless of whether they are involved in criminal activity. The addition of Clearview’s billions of images — which include more than just mugshots — to the CPD’s arsenal has further stoked those concerns.

But officials claim the CPD has never used the real-time application to conduct surveillance.

A dozen searches a day

They have, however, frequently utilized mugshot-matching technology.

From 2013 to November of last year, investigators conducted 28,205 searches, or an average of 12 a day. But in the peak year of use, 2016, the number of queries spiked to more than 17 searches per day.

A CPD PowerPoint explaining how to use the software to detectives in 2015 noted the system “can develop leads where there was none but a picture.”

“This is an investigative tool that’s easy and fun to use,” one slide says.

The CPD’s Guglielmi said the data is used to narrow a search for a suspect, not end it.

“What the facial-matching program does is it allows us to get a universe of people that could be the individual accused of a crime,” Guglielmi said. “And then from there, we still get a photo array, we still follow the legal process to identify this person.”

Because the software is used with other investigative tools, police said they couldn’t say how many facial recognition searches led to arrests or convictions.

A selfie match

One conviction using the technology originated last February in Edgewater.

After swiping a cellphone and some jewelry from a car, Lamont Hines decided he’d use the spoils for a photo shoot.

Flaunting a stolen charm necklace, Hines snapped a couple selfies with the phone. Unbeknownst to him, the shots were quickly uploaded to the victim’s iCloud account. Detectives used facial recognition software to match the shots to Hines’ mugshots from previous arrests.

Hines, 42, of South Shore, was then taken into custody and charged with a felony count of theft. He pleaded guilty to the charge in September and was given two years of probation, court records show.

Use of facial recognition software has the support of the public, a poll released by the Pew Research Center in September found. Some 56% of Americans said they trust law enforcement officials to use the technology responsibly.

Mistakes possible

But facial recognition technology can make mistakes, which some fear could lead to false arrests if someone is wrongly matched to a photo of a suspect. A study released in December by the National Institute of Standards and Technology found many systems misidentified people of color more often than whites. That concern is amplified by Clearview’s much larger database of searchable photos.

“The idea that every single photo is correctly tagged with the right identity online is just not a realistic viewpoint,” said Clare Garvie, a senior associate at Georgetown Law’s Center on Privacy and Technology, who has published studies on the use of facial recognition technology.


Despite the privacy concerns, the CPD’s use of facial recognition technology didn’t need approval from the City Council, and there have been no public hearings on its use. There’s also limited federal oversight over how law enforcement agencies use facial recognition, meaning the CPD and other departments are largely left to police themselves.

That lack of oversight has led Lucy Parsons Labs, a Chicago nonprofit that advocates for police accountability, to call on Mayor Lori Lightfoot to ban all city agencies from using the technology, similar to San Francisco and Oakland.

“We would call for community oversight on all surveillance technologies,” said Freddy Martinez, executive director of Lucy Parsons Labs, which provided the Sun-Times with documents it obtained through Freedom of Information requests that detail the city’s use of facial recognition technology.

Candidate Lightfoot called for moratorium

Before she was elected mayor last April, Lightfoot told the ACLU of Illinois that she would go as far as to halt the use of the technology while convening a panel to investigate its use.

“During this process I will place a moratorium on the use of facial recognition technology or its expansion absent an emergency situation arising from a legitimate law enforcement need,” she wrote in an ACLU questionnaire.

But since she took office, no review or moratorium has taken place. In fact, with the addition of Clearview to CPD’s facial recognition arsenal, the city’s capabilities have only expanded under her watch.

Asked about the expansion late last week, Lightfoot reiterated her vow to review the city’s use of the technology with the assistance of privacy advocates and community members.

The city aims to advance protections and integrate “national best practices for the use of this technology to ensure nothing but the full protection of personal and constitutional rights for our residents and visitors,” her office said in a statement.

The city “is working responsibly to confront the risks and promises of these tools,” Lightfoot’s office said.

“[W]e take seriously our obligation and duties to ensure the safety of all of our communities while balancing the privacy concerns of our residents that can stem from new and emerging technology.”

Concerns remain

Despite the assurances, Garvie, of the Georgetown Law Center, said the CPD’s procurement of real-time software is tantamount to “a police department secretly acquiring a tank and parking it behind their office.”

“Then, when a reporter finds the tank and asks about it, the police response is, ‘Don’t worry, we don’t use that tank,’” said Garvie.

She said now “the question is, why did they have it in the first place? What is the purpose of that tank, if not to be used?”