One day after the revelation that Toronto police had tested a controversial facial recognition technology, two more Greater Toronto Area police services have revealed they also used the U.S.-based Clearview AI investigative tool.

Officers in Peel and Halton regions confirmed to the Star Friday that they ran tests of the highly divisive artificial intelligence-powered facial recognition tool. Both have stopped using the technology.

Halton Regional Police confirmed it began a “free trial” of the tool in October 2019. It has since stopped and an internal evaluation is underway.

Unlike Toronto police Chief Mark Saunders — who was apparently unaware his officers were testing the tool, and ordered them to stop once he learned they were — Halton Chief Stephen Tanner confirmed in an email Friday he knew his officers “were doing a pilot project with facial recognition software of some type.”

Peel Regional Police spokesperson Sgt. Joe Cardi confirmed Friday that a “demo product of the Clearview AI software” was provided to the service for testing purposes only. Peel police Chief Nishan Duraiappah has since directed that testing “cease until a full assessment is undertaken pursuant to a concurrent ongoing facial recognition review project.”

“Our review team is working closely with the Information and Privacy Commissioner’s Office as well as other police organizations to ensure any future application of facial recognition technology is in keeping with privacy legislation, guidelines and contemporary standards,” Cardi said.

“The Clearview AI technology was never used in any active investigations,” Cardi said. “The direction was made on Jan. 27, 2020 to cease any active testing.”

Spokesperson Const. Ryan Anderson said Halton police are likewise making no decisions about future use of the application pending the review by the Ontario Information and Privacy Commissioner and the Ministry of the Attorney General.

In a statement, Ontario’s Information and Privacy Commissioner Brian Beamish “urged” organizations to contact the office “if they are considering using new technologies that could pose a potential privacy risk to citizens.”

After the Star informed Beamish’s office that other Ontario police services had tested the tool, Beamish said Friday afternoon that they should “stop this practice immediately and contact my office.”

“I’ve also asked my staff to contact those we’ve become aware of through the media to discuss the legality and privacy implications of their use of this technology,” he said.

Beamish’s office was unaware Toronto police were using Clearview AI technology until the service contacted it last week. Noting there are “vital privacy issues at stake with the use of any facial recognition technology,” Beamish said “we are relieved that its use has been halted.”

“We question whether there are any circumstances where it would be acceptable to use Clearview AI,” he said.

The Clearview AI technology, called “reckless” by critics, uses a database of billions of images scraped from the open web, including social media sites, and identifies people by scanning for matches. The company’s law enforcement tool first came to light in a New York Times story last month.

The technology can be used by police to run an image of a person — a suspect, person of interest or possible victim — against this vast database, finding matches in images that have been drawn from across the web. Previously, the Toronto police told the Star it was only judicially authorized to use facial recognition technology to search for matches in its internal mugshot database.

Clearview AI did not respond to a series of questions from the Star, including whether the company had approached Canadian police services to offer a free trial, and if so, when.

The company also did not respond to a question about the identity of an officer behind a testimonial on the company’s website. The cop is identified only as a “detective constable in the Sex Crimes Unit” of a Canadian police service.

“Clearview is hands-down the best thing that has happened to victim identification in the last 10 years. Within a week and a half of using Clearview, [we] made eight identifications of either victims or offenders through the use of this new tool,” the officer is quoted as saying on Clearview AI’s website.

Toronto police revealed Thursday that its officers began “informally testing” Clearview AI in October 2019. Once Saunders learned about this, he ordered them to stop, Toronto police spokesperson Meaghan Gray said Thursday.

Ontario’s Information and Privacy Commissioner and the Crown Attorney’s Office have been asked by Toronto police to review the technology’s appropriateness, Gray said.

“At no time was the Clearview AI technology used for livestreaming or real-time information gathering and there were no costs associated with its use,” Gray said in an email Friday.

“Our current review includes a comprehensive analysis of each time the technology was accessed by an investigator,” she added.

It’s unclear whether any arrests were made as a result of Toronto police officers’ use of the technology.


Christian Leuprecht, a policing and technology expert affiliated with Royal Military College of Canada and Queen’s University, said the use of Clearview AI is part of a larger trend where “the technology evolves much faster than we have the ability to actually think about it from an accountability and review perspective.”

“I think the lesson from this is, when this type of technology comes out, any sensible police force would go to their privacy commissioner in their province” to get clear directions on how it can be used, he said.

In the last month, YouTube, Facebook, Twitter, and LinkedIn have all demanded that the company stop using data scraped from their websites, according to media reports.

Wendy Gillis is a Toronto-based reporter covering crime and policing. Reach her by email at wgillis@thestar.ca or follow her on Twitter: @wendygillis
