As the debate over facial recognition software heats up, Detroit Mayor Mike Duggan took to social media last week attempting to quell concerns that city police would be using the technology for neighborhood surveillance.

While the city spent more than $1 million on facial recognition software in 2017, a definitive-sounding tweet sent from Duggan's account last Monday seemed to spell out that Detroit police were not and would not be using the technology.

"Let me be clear: there will be no facial recognition software used with live stream video by the (Detroit Police Department). That's not what we're doing, and that's not ever what was intended," Duggan tweeted, linking to a six-minute video of him speaking on the subject.

The tweet and subsequent video seemed to shut down any notion that the department was using facial recognition software, a technology that has been widely criticized for issues ranging from privacy overreach to high error rates, particularly when used on black and brown individuals. But experts say Duggan's carefully scripted language is disingenuous, with the deception hinging on the term "live stream video."

What the mayor is actually saying is that the city is not using the technology on livestream video but rather on still images taken from videos — a distinction that those studying the technology say doesn't actually mean much.

"In all my experience with facial recognition the way the process and programming works is that it takes a still image from the video. I’m not knowledgeable of any facial recognition software that’s taking real video," said Daniel Lawrence, a senior research associate in the Urban Institute's Justice Policy Center. "It's taking a still from a video."

Others familiar with both types of technology say that trying to differentiate between them is disingenuous and a distraction.

"Face recognition on live video is indeed considered the most invasive form of tracking. But non-live video use of face recognition is still immensely problematic," said Alvaro Bedoya, the founding director of the Georgetown Law Center on Privacy & Technology, whose colleagues in May published a report detailing Detroit's 2017 purchase of facial recognition software from South Carolina-based company DataWorks Plus — a package that does include "livestream video" capabilities.

The Free Press asked the administration why more than $1 million was spent on a product with a function that it doesn't plan to use — city officials referred the question to the police department.

“The software was not purchased because of the livestreaming facial recognition feature,” said police department spokeswoman Sgt. Nicole Kirkwood. “But because it allows the Detroit Police Department to electronically match still images of violent crime suspects gathered during investigations to DPD mug shots, Michigan State Police mug shots and the Michigan Sex Offender Registry. Prior to purchasing this software, this process had to be done manually by a person physically going through mug shot books.”

“If the software identifies a potential match, that image has to receive two independent approvals. The image is first reviewed by the DPD crime analyst who identified the match, then a second crime analyst must agree that both images match. Finally, a supervisor must confirm that the images match. Images are only sent to detectives for further investigation after all three steps have taken place.”

Bedoya, a former chief counsel to the U.S. Senate Subcommittee on Privacy, points out that even putting aside the contract question, the distinction the mayor is making distracts from the bigger issue: Facial recognition software is being used. And the way it's being used — on still images — still has issues.

"All of the bias issues of face recognition were all detected on systems that perform face recognition on still video," Bedoya said, pointing out that two of the most discussed studies on facial recognition shortcomings — one on gender from MIT, and one testing the accuracy on members of Congress — both dealt with still photo images.

"Even if the mayor doesn’t use the real-time system he bought," Bedoya continued, referring to the $1 million contract with DataWorks, "he is still proposing to use a face recognition system on African American residents on whom the technology is likely to fail."

"If I take a photo of you entering a cancer clinic, an AA meeting, or a couples therapist, it doesn’t matter if I have video of you, a single still image is all I need to embarrass you and violate your privacy. The idea that the city is using this technology at all, let alone without any kind of court oversight, is highly problematic."

Since May, when Georgetown published its report dealing with Detroit, facial recognition software has been the center of remarkably intense debates in the city — discussions so heated that they seemed to reach their apex last week when a member of the Detroit Board of Police Commissioners was arrested at a meeting where the technology was being discussed.

At the center of the debate seems to be general confusion over whether the technology has already been used, and whether it will continue to be used in the future.

Last month, the police commissioners were asked to vote on a new policy directive detailing guidelines for using the technology. At the last minute, the directive was pulled by the Detroit Police Department and no vote occurred. According to Gregory Hicks, the board's secretary, a new vote has not been scheduled.

"We are still awaiting the Chief to submit the revised proposed directive," Hicks wrote in an email Monday.

The directive's mere existence, however, raised a host of questions: If facial recognition software has already been in use for at least a year and a half, what guidelines are currently in place, and why was this not voted on earlier?

The Free Press reached out to the mayor's office to find out what guidelines are already in place, but the administration has not provided the information.

Language games and a reliance on general confusion around the technology have been a core feature of the city's surveillance initiatives, according to Eric Williams, an attorney working with the American Civil Liberties Union (ACLU) on a committee opposing the city's surveillance tactics.

"Because they keep breaking it up — just traffic cameras, just Green Light participating businesses, just Green Light corridors — you can’t comprehend," said Williams. He pointed to the fact that the facial recognition conversation before the police board revolved around the "Neighborhood Real-Time Intelligence Program," a new $9 million initiative Duggan announced in March at his State of the City address, which would use local and federal traffic modernization funds to put high-definition cameras at various intersections in the neighborhoods.

In Williams' view, the names of the various surveillance programs serve to confuse the public about what is really going on, and how wide a network exists.

For all intents and purposes, the public-facing push to ramp up surveillance in Detroit began in January 2016, when DPD launched Project Green Light. Starting with eight gas stations, the program has participating businesses pay for and install surveillance cameras on their property that feed directly to DPD's Real Time Crime Center. Additionally, as part of the program, businesses commit to maintaining robust lighting and a green light outside their premises to let customers know they are part of the program.

Today, more than 500 businesses — including churches, schools, and pharmacies — are a part of the program. There are also two "Green Light Corridors" and one public housing property signed on to be part of the program.

Up until this spring it was accepted — though not widely broadcast — that facial recognition technology was being used on camera feeds connected to Project Green Light businesses.

In fall 2015, right before the launch of Project Green Light, Detroit's Office of Contracting and Procurement was working on a request for proposal for "Facial Recognition Software solutions" to be utilized by DPD's Real Time Crime Center (live video feeds) and investigation personnel (static images).

In July 2017, the city entered into a three-year, $1,045,843 contract with South Carolina-based facial recognition software company DataWorks Plus, which pitched FACE Watch Plus in its bid for the contract.

"FACE Watch Plus tracks face images from live video surveillance, processes the images, then searches your database and alerts you when a match/hit has been made," the proposal explained. "It detects faces within surveillance footage in real-time, then uses cutting-edge facial searching algorithms to rapidly search through your agency's mug shot or watch list database for positive matches."

The contract explained that the purchase of facial recognition licensing, software and equipment was for "Project Green Light Locations."

When the Free Press visited the Police Department's Real Time Crime Center in January 2018 and asked about Project Green Light and facial recognition software, Police Chief James Craig conceded that the department was using facial recognition software.

"We have facial recognition software, from the system we can go in — we’re in the process, that’s still evolving, it’s still early. We use the state system, so if we needed, or we had a suspect, we can apply facial recognition once it’s here. We capture the images here in (the) real time crime (center)," the police chief said, later brushing aside questions about facial recognition technology being used for petty crimes and misdemeanors.

"Any time we use facial recognition, we’re talking about violent crime; this is not anything to do with something that — I really wouldn’t want it used that way, we simply don’t have time to use it that way. What we do use it for is if we’re going after a violent felon. Someone who maybe kidnapped a child."

In recent months, however, the city has walked back this connection completely, saying that while the contract states the goal was to use facial recognition software in conjunction with Project Green Light, that never ended up happening.

"We’ll use Green Light cameras, so that may be where some of the confusion is," Craig said at a news conference last month. "But we’ll use cameras from any source, not just Green Light. The facial recognition software was never going to be installed in the cameras themselves; that would be cost-prohibitive, and there’d be no need to do it anyway, since we’ll use footage from those cameras, along with other cameras."

The statement — that facial recognition technology wasn't part of Green Light because the physical cameras don't have the technology embedded within them — is, according to critics like Williams, a language game and an attempt to play on the public's confusion about how the technology works.

"The city itself doesn’t really make a distinction between these cameras internally. I think part of the reason they describe these programs as distinct when they’re talking to the public is if people really realized the extent of police surveillance in the city they would be mortified," said Williams, pointing out that the issue is not whether facial recognition software is being used in conjunction with a specific program, or if it's being used on real-time cameras versus still images, but rather that it's being employed at all.

As Craig himself conceded, the Green Light cameras may not have facial recognition technology within them, but at the Real Time Crime Center their feeds can be run through the DataWorks software.

The malleable statements around the technology and seeming word games were at the heart of the Georgetown report, which noted that nowhere on the Green Light website was the use of face recognition, real-time face surveillance, or any kind of automated face analysis technology mentioned.

The lack of public input and the general confusion are a problem, according to academics watching from the sidelines — especially given that the technology is being questioned nationally, well beyond Detroit.

In May, San Francisco became the first major city to ban police use of the technology, citing several of the concerns Williams and other critics of the software have raised.

"We have an outsize responsibility to regulate the excesses of technology precisely because they are headquartered here," Aaron Peskin, the city supervisor who sponsored the bill, told the New York Times.

In recent weeks, legislators on Capitol Hill have also been debating the FBI's use of the technology.

"No elected officials gave the OK for the states or for the federal government, the FBI, to use this," U.S. Rep. Jim Jordan, R-Ohio, said during a hearing before the House Committee on Oversight and Reform in May. "There should probably be some kind of restrictions. It seems to me it's time for a timeout."

In Detroit, it seems unclear what's even being used or not used — and who has given these approvals. The confusion, according to Tom Ivacko, associate director of the University of Michigan's Center for Local, State and Urban Policy, will likely come to a head.

"It is not clear that these issues should even be decided at the local level, when such fundamental American rights are at risk," Ivacko wrote in an email.

"As noted in the recent report by Georgetown's Center on Privacy and Technology, even with good policies and practices, facial recognition technology may introduce exactly the risk that the U.S. Supreme Court found unconstitutional in Carpenter v. United States — that 'a person does not surrender all Fourth Amendment protection by venturing into the public sphere.'

"Because of these kinds of concerns, an earlier 2016 report by Georgetown called for state legislatures to regulate this technology, rather than leaving it to local governments. This seems like an issue destined for the U.S. Supreme Court at some point, but in the meantime and at a minimum it should be the subject of extensive public debate."

Contact Allie Gross: AEGross@freepress.com. Follow her on Twitter @Allie_Elisabeth.