NYPD detectives identified the mystery man who sparked terror by leaving a pair of rice cookers in a subway long before their bomb-squad counterparts even realized the devices were harmless — thanks to controversial face-scanning technology.

It was about 7:15 a.m. on Aug. 16 when police began receiving reports of the suspicious devices in the Fulton Street subway station.

Cops were unsure whether the devices were some slob’s garbage or part of a terror plot akin to that of 2016 Chelsea bomber Ahmad Rahami, and the station was cleared out as a precaution.

But within minutes, Det. Marcello Gianquinto — one of 10 detectives in the NYPD’s Facial Identification Section — had pulled still images of a suspect from subway security footage, color-corrected the snaps and was using a computer system to compare them to mug shots in the NYPD’s arrest database.

The system spit out a few hundred potential matches, and by 7:45 a.m., Gianquinto had already narrowed them down to Larry Griffin II — a homeless West Virginia man arrested by the NYPD in March for drug paraphernalia at a Harlem men’s shelter — by comparing distinguishing features on the security footage with Griffin’s social media.

By 8:15 a.m., Gianquinto had brought his findings to the head of the facial recognition unit, Sgt. Edwin Coello, for a second opinion — and then blasted them out to the cell phone of every cop in the department.

That’s nearly an hour before the NYPD bomb squad gave the rice cookers the all-clear just after 9 a.m., police said.

“Five years ago, you’d probably have endless detectives looking through videos … images of arrested individuals based on descriptions,” Coello said. “It could take several hours or several days. This is the most important type of case that we’d see out there: a possible terrorist attack in NYC.”

Cops collared Griffin in a Bronx apartment at 1 a.m. Saturday and charged him with a trio of felonies for placing fake bombs.

Despite the speedy collar, the facial-recognition tech — designed by French security firm Idemia and South Carolina law-enforcement technology company DataWorks Plus — has come under scrutiny over its accuracy and because the NYPD has not released a comprehensive public policy on how the department uses it.

While the NYPD only uses arrest photos as a reference — not driver’s licenses or other government-owned images — the department took heat earlier this month for comparing suspect images against juvenile arrest photos.

San Francisco — one of the nation’s most tech-friendly cities — has already banned law enforcement’s use of facial recognition software. And California state lawmakers are also considering banning the use of facial recognition technology by body-worn cameras.

NYPD brass crowed on Tuesday, however, that Griffin’s arrest was the “perfect example” of the new policing tool’s potential.

“To not use technology like this would be negligent. This is the future,” said Chief of Detectives Dermot Shea. “We are putting in place a system where there’s checks and balances to make sure we do our due diligence, to make sure there’s a high probability it’s the person who we are looking for.”

The NYPD is still without a comprehensive public policy on the use of facial recognition software, which it has employed since 2011.

Last year, the NYPD invited the NYCLU and city council members to weigh in on the department’s drone policy, which led to a ban on using facial recognition software on drone footage.