Authored by Emma Fiala via TheMindUnleashed.com,

Manhattan-based Clearview AI is collecting data from unsuspecting social media users and the Chicago Police Department (CPD) is using the controversial facial recognition tool to pinpoint the identity of unknown suspects, reads a report from the Chicago Sun-Times.

And according to a bombshell New York Times report, it is also being used by the FBI and the Department of Homeland Security.

"Until this week, I had not heard of Clearview AI," Gurbir Grewal, New Jersey’s attorney general, said in an interview. "I was troubled."

Read the full investigation on Clearview AI from @kashhill: https://t.co/3jOFOxzs5X — The New York Times (@nytimes) January 25, 2020

The software’s creator, Hoan Ton-That, maintains that it is purely “an after-the-fact research tool for law enforcement, not a surveillance system or a consumer application.” However, privacy advocates are saying this technology is so intrusive and ripe for abuse its use should be immediately halted. And earlier this month, a lawsuit was filed in federal court seeking to do just that.

Chicago attorney Scott Drury, who filed the lawsuit, describes CPD’s signing of a two-year, $49,875 contract with Illinois tech firm CDW Government to use Clearview AI’s software as “frightening.”

Conversely, Chicago police spokesman Anthony Guglielmi explains:

“Our obligation is to find those individuals that hurt other people and bring them to justice. And we want to be able to use every tool available to be able to perform that function, but we want to be able to do so responsibly.”

According to police, some CPD officials at the Crime Prevention and Information Center used the software for two months on a trial basis prior to the signing of the contract in January.

Despite the two-month trial and the contract having been in place for approximately one month, CPD spokesman Howard Ludwig has declined to say whether and when the department has used Clearview AI thus far. Ludwig explained:

“Any information about ongoing investigations can only come from cases that have been thoroughly adjudicated. We haven’t had Clearview long enough for any of the cases to have gone through the courts.”

Clearview AI’s database includes three billion photos taken from social media and network platforms such as Facebook, YouTube, and Twitter. After a user, including CPD, uploads a photo of a suspect, the software searches its massive database for matches and returns a link for each image it finds; in a marketing email, the company once encouraged Green Bay police to “run wild” with those results.

Ton-That told the Sun-Times, “Our software links to publicly available web pages, not any private data.” He clearly does not believe the software poses any problems. But just this month, New Jersey attorney general Gurbir Grewal put a moratorium on police use of Clearview AI’s software in the state.

BREAKING: @NewJerseyOAG put a moratorium on Clearview AI’s chilling, unregulated facial recognition software. It scraped 3 billion photos online, including from social media, into an index of names + faces for police department subscribers. This step is unequivocally good news. — ACLU of New Jersey (@ACLUNJ) January 24, 2020

The ACLU of New Jersey responded to the move in a tweet last Friday:

“Technology like this opens a Pandora’s Box for constant warrantless searches, pretty much of anyone with a photo and name online. It’s a tool that could make an already-unequal criminal justice system truly dystopian. New Jersey is right to slam this Pandora’s Box shut.”


The ACLU also rightfully pointed out the tendency of facial recognition technology to exhibit bias against “people of color, women, and non-binary people.” In fact, as TMU has previously reported, self-driving cars are less likely to detect black people and artificial intelligence (AI) is sending the wrong people to jail.

Critics of Clearview AI and facial recognition software extend far beyond those involved in the lawsuit mentioned above and the ACLU of New Jersey. In one of the biggest efforts to date in the battle against the use of facial recognition technologies, 40 organizations signed a letter to the Department of Homeland Security’s Privacy and Civil Liberties Oversight Board calling for a ban on the U.S. government’s use of such technology “pending further review.” The letter notes that this technology could be exploited and used to “control minority populations and limit dissent.”

However, as Fast Company points out, not everyone feels the same. In fact, some view actions like the letter mentioned above as “an overreaction.”

Jon Gacek, head of government, legal, and compliance for Veritone—a company that provides technology similar to Clearview AI’s to law enforcement in both Europe and the U.S.—says all the software does is use “technology to do what police already do, except far faster and at less cost.”

Twitter responded to the bombshell NYT report by sending a letter to Clearview AI last week. In a follow-up report, the NYT explained:

“Twitter sent a letter this week to the small start-up company, Clearview AI, demanding that it stop taking photos and any other data from the social media website ‘for any reason’ and delete any data that it previously collected, a Twitter spokeswoman said. The cease-and-desist letter, sent on Tuesday, accused Clearview of violating Twitter’s policies.”

While those concerned with privacy typically focus on how social media companies use their photos and data for profit, the situation with Clearview AI is an excellent reminder that what we post online can and will be used in ways we do not consent to, despite our best efforts to keep up with Terms of Service updates and check as many “opt-out” boxes as possible.