Facial recognition technology available to law enforcement officials in April failed to identify Dzhokhar Tsarnaev from security camera images in time to prevent a public appeal—and the subsequent murder of an MIT police officer and shootout between the Tsarnaev brothers and police. But experimental technology from Carnegie Mellon University's CyLab Biometrics Center came tantalizingly close. The software being developed there could soon make facial recognition systems much more powerful in generating leads for law enforcement.

The technology, called single image super-resolution, will be highlighted tonight on PBS in an episode of NOVA that looks into the science behind the Boston Marathon bomber manhunt. In a phone interview with Ars, Dr. Marios Savvides, the director of the CyLab Biometrics Center, said that the new technology could generate results much more detailed than those made by traditional image enhancement approaches. "The traditional methods yield about a 2 times to 4 times improvement" in the resolution of a facial image, he said. "This method gets us 16 times the resolution."

Faces from pixels

CyLab, which has participated in a number of federally sponsored biometrics projects, had submitted a proposal to the Department of Justice to perform a trial of the new technology just a month before the Boston bombing, Savvides said. "We were working on the system," he said, "but we weren't even ready for a demonstration."

When the FBI published the photos of the suspects on Thursday, April 18, Savvides' team decided to try to help authorities identify the suspect who would later turn out to be Dzhokhar Tsarnaev.

"What the FBI released is what they had and couldn't do anything with," Savvides said. "After [the Tsarnaevs] were ID'd on April 19, more images came to light. But the images available on April 18 were very bad. The imagery was too small for current technology—the information just wasn't there [in the images]."

Hoping to aid law enforcement, Savvides' team took the released photo from the FBI website and ran it through an early version of the enhancement software. The software is a machine learning system "trained" on a database of 30,000 faces, each presented at multiple resolutions. From the patterns learned across those facial images, the system can reconstruct an approximation of a face with as few as six pixels between a suspect's eyes. Combining the learned relationships between pixels with human-assisted identification of facial landmarks, the software produces what Savvides called a "hallucination" of the individual's face from negligible amounts of image data.
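The article doesn't detail CyLab's actual algorithm, but the idea of learning from faces stored at multiple resolutions can be illustrated with a minimal, assumed sketch: build pairs of (low-resolution patch, high-resolution patch) from training images, then "hallucinate" a high-resolution output by replacing each patch of the low-resolution input with the high-resolution patch of its nearest training example. All function names and parameters here are illustrative, not CyLab's.

```python
import numpy as np

def train_patch_pairs(hi_res_faces, scale=4, patch=2):
    """Build (low-res patch, high-res patch) example pairs from training
    faces, standing in for the multi-resolution training database."""
    lo_patches, hi_patches = [], []
    for hi in hi_res_faces:
        h, w = hi.shape
        # Crude block-average downsampling stands in for the camera's
        # loss of resolution.
        lo = hi.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
        for i in range(lo.shape[0] - patch + 1):
            for j in range(lo.shape[1] - patch + 1):
                lo_patches.append(lo[i:i + patch, j:j + patch].ravel())
                hi_patches.append(
                    hi[i * scale:(i + patch) * scale,
                       j * scale:(j + patch) * scale])
    return np.array(lo_patches), np.array(hi_patches)

def hallucinate(lo, lo_patches, hi_patches, scale=4, patch=2):
    """Reconstruct a high-res 'hallucination' by swapping each low-res
    patch for the high-res patch of its nearest training example,
    averaging where patches overlap."""
    out = np.zeros((lo.shape[0] * scale, lo.shape[1] * scale))
    counts = np.zeros_like(out)
    for i in range(lo.shape[0] - patch + 1):
        for j in range(lo.shape[1] - patch + 1):
            q = lo[i:i + patch, j:j + patch].ravel()
            k = np.argmin(((lo_patches - q) ** 2).sum(axis=1))
            sl = (slice(i * scale, (i + patch) * scale),
                  slice(j * scale, (j + patch) * scale))
            out[sl] += hi_patches[k]
            counts[sl] += 1
    return out / np.maximum(counts, 1)
```

Real "hallucination" systems use far richer learned models than nearest-neighbor patch lookup, but the structure is the same: the detail in the output comes from the training faces, not from the input pixels.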

To assist the algorithm, Savvides and his team took the one mostly front-facing image of Dzhokhar Tsarnaev and estimated where "landmarks" on his face were, giving the software 79 distinct points as a starting point. By Friday morning they had a result and sent off an image, but by then the gun battle in Watertown had already happened and the suspects had been identified.

After the fact, Savvides' team tested how well their effort would have worked as a database search. They embedded a higher-resolution, face-on photo of Dzhokhar Tsarnaev in a database of 50,000 mug shots provided by the Pinellas County Sheriff's Department, then used the software-constructed image to search for a match. Filtering results by "soft biometrics"—such as age range, gender, and race—Tsarnaev came up as the 11th most likely match. Using a similar approach against a database of 1 million mug shots used in a 2005 National Institute of Standards and Technology facial recognition challenge, Tsarnaev was the 19th result.
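The filter-then-rank search described above can be sketched in a few lines. This is an assumed illustration, not CyLab's software: the `Record` fields and the precomputed `similarity` score (which a real system would get from a face matcher) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    age: int
    gender: str
    race: str
    similarity: float  # hypothetical face-matcher score; higher = closer

def rank_with_soft_biometrics(gallery, age_range, gender, race):
    """Filter a mug-shot gallery by soft biometrics (age range, gender,
    race), then rank the survivors by face-matcher similarity."""
    lo, hi = age_range
    candidates = [r for r in gallery
                  if lo <= r.age <= hi
                  and r.gender == gender
                  and r.race == race]
    return sorted(candidates, key=lambda r: r.similarity, reverse=True)
```

The point of the soft-biometric filter is to shrink the candidate pool before ranking, so an imperfect reconstruction only has to beat a few thousand plausible faces instead of a million.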

Pwned by computers

Savvides said that his team is working on further reducing the potential errors that come with human input of "soft biometrics" by automating classification at image capture—assigning gender, age, and race algorithmically rather than by hand to maintain consistency and improve searches.

CyLab is working on even more sophisticated methods of visual biometric identification, too. Savvides said the team is constructing facial recognition systems that can make an identification based solely on eyes and eyebrows. Another area of research is how to take a view of half a face—such as a profile view—and create a full face-on view based on computer learning that guesses at the other half, including facial asymmetry.

Facial recognition systems are already becoming effective at assigning age and race; other systems use "multimode" biometrics such as gait and other pattern analysis to take fairly accurate guesses at nationality. But as NOVA's Miles O'Brien put it in a conversation with Ars, "faces aren't snowflakes"—they don't provide the sort of absolute identification that fingerprints and DNA do. Still, there is one visual biometric that does offer something close to the accuracy of fingerprints—iris recognition.

CyLab has also been doing research into how to make it safer and easier for the military to collect biometric data. The lab has developed an infrared-based iris recognition system that can "enroll" individuals' iris patterns—which are as unique as fingerprints—from a distance of 6 to 11 meters away. Soldiers in Afghanistan currently carry handheld devices that require them to approach individuals at close range to scan irises.

Savvides said these developments could not only make it easier to identify suspects in the field, but could also save lives.

"The first thing you do when you get pulled over by police is you look in your rear view mirror," Savvides said. "If I can capture your irises from 5 meters away, that's huge. It could save hundreds of lives."

NOVA's O'Brien said that the technology he saw at Carnegie Mellon "pretty much knocked my socks off. This is when you suddenly realize that computers will own us. They can see things in 9 pixels that you and I would never in a million years pick up on. What it tells me is it won't be too long before this will become a more practical and routine part of policework."

The NOVA episode also looks at other computer vision technologies, including the "domain awareness" system installed in New York City's financial district—systems that watch for suspicious behavior, unattended packages, and other potential signals of a crime or terrorist attack. "I walked away surprised, intrigued, and also getting the creeps," O'Brien said. "Big Brother is alive and well."

While tonight's show doesn't address the social implications of surveillance, O'Brien said, "I think our country should stop and have a conversation about what we're doing in creating a surveillance state. We haven't talked about the long-range consequences."

NOVA airs on PBS at 9pm EDT on May 29.