Unchecked Facial Recognition Programs Could Severely Erode Privacy When Coupled with Police Body Cameras

Posted on March 27, 2017

UPDATE: In an announcement last week, Taser launched a broad, ambitious promotional campaign to equip local law enforcement around the country with body cameras, offering free body cameras to any police department interested in acquiring them. The promotion, concurrent with the company’s rebranding from Taser to Axon, also grants participating departments a yearlong free trial of the company’s body camera footage cloud storage platform, evidence.com, in the hope of motivating departments to adopt its subscription model built around five-year contracts. Not only is this push to deeply integrate private sector actors into policing likely to touch off a dramatic expansion of facial recognition-capable body camera use in US police forces, but it also presents a distinct potential for serious harm through mismanagement and abuse.

To start with, cloud computing technology is among the most susceptible to data breaches, especially when developed and operated by companies and personnel who do not specialize in cloud security. Rather than adopting the relatively secure approach of transferring footage via USB cable to a local (i.e., not connected to the internet) police database when each officer clocks out and turns in their body camera, Axon’s model involves sending footage files over the internet to a centralized server farm operated by the company. As all internet activity carries inherent security risks, and police body camera footage makes a particularly lucrative target, it is all but inevitable that malicious actors will intercept body camera footage at some point, especially since Axon, primarily a weapons manufacturer, does not command particular expertise in network and information security.

Beyond the risk of negligent management, concentrating the nation’s police body camera footage in the hands of a single company makes for all too tantalizing a temptation for Axon to further monetize the data by subjecting it to data mining and sale to other private sector interests. There are already companies attempting to identify correlations between faces and consumer behavior, so the value to such companies of the rich datasets that would ostensibly be captured by a whole police force’s worth of body cameras, to say nothing of a whole nation’s worth of them, is obvious. If steps are not swiftly taken to regulate the handling of body camera footage, such as by prohibiting or strictly limiting its sale or exchange for monetization, there is no telling how wholly unaccountable parties might use and abuse it.

Augmenting body cameras with facial recognition is not a hypothetical or a research and development target: it is already available in several body camera products.

Biometric data is weakly regulated in US police forces, meaning there might be nothing standing in the way of law enforcement integrating it into existing body cameras.

Current facial recognition technologies are just accurate enough to pick out faces from low-resolution images, but not accurate enough to ensure proper identification.

A new report by the Intercept warns that the unsettlingly realistic prospect of facial recognition programs being deployed as an augmentation to police body cameras could have grave repercussions on the privacy of citizens. What makes the intrusiveness of this potential policing technology particularly serious is the combination of lax existing biometric data policies in police forces and nascent, error-prone facial identification software.

To begin with, the piece reveals that the technical capacity for integrating facial recognition programs into body cameras already exists. Almost a quarter of body camera manufacturers offer facial detection and identification capabilities with their products, according to a joint study by the Department of Justice and Johns Hopkins University. Correspondingly, new facial recognition algorithms, such as that of the startup company NTechLab, can be installed on body cameras. Even more concerning, this new generation of facial recognition products offers a quantum leap in capabilities: in addition to real-time processing, the ability to fetch location history, criminal history, and immigration status.

While no US law enforcement agencies have so far been confirmed to have paired body cameras with facial recognition software, body cameras are already prevalent in major police departments around the country, with facial recognition not far behind. At least five US departments have bought or are considering buying facial recognition software for existing, traditional surveillance camera networks; the Chicago Police Department already employs such technology. Integrating facial recognition into even a fraction of the police departments or agencies which deploy body cameras, which make up about a third of all US local law enforcement, would be the equivalent of setting up millions of facial recognition surveillance cameras. And these cameras would not have to be replaced or overhauled to accommodate this feature, but simply have their software updated to run the appropriate algorithm.

With scant or nonexistent policies surrounding the use and retention of biometric data, the potential for overreach or abuse is considerable. Very few police departments have regulations on how biometric data is deployed or retained, which means there is currently little standing in the way of combining facial recognition with body camera footage. Moreover, half of all US adults already have their face in a database accessible to federal agencies such as the FBI, Department of Defense, DEA, and ICE, as many states provide not only mugshots of convicted criminals but also driver’s license photos for inclusion in federal government databases.

Although an evaluation by the National Institute of Standards and Technology showed that no currently available facial recognition algorithms or programs have yet reached optimal accuracy, they are already in use. According to data provided on the FBI Next Generation Identification program, probable matches came up only 5% of the time. This figure does not account for the reduced accuracy when analyzing the faces of racial minorities, for whom accuracy diminishes by as much as 10%. This bias, or any of a number of other possible defects, would be difficult to address, as most (if not all) current facial recognition algorithms are proprietary, meaning they are not openly available for audit or review.

In fact, facial recognition stands at an especially precarious crossroads: while the current crop of programs is not accurate enough to prevent a significant number of mismatches, these programs are nonetheless sophisticated enough, or soon will be, to cast a very wide net, as they may be able to pick out and analyze faces from very low-resolution images. In a talk at the Chaos Communication Congress in December, a presenter demonstrated that there are enough possible grayscale color combinations in a 6 pixel by 7 pixel image to conduct facial recognition analysis. Additionally, Google recently developed a technique by which low-resolution images can be enhanced to produce a higher-definition version of the image by approximately reversing the image’s downsampling. As each pixel of the original grainy image represents an average color value of a cluster of pixels in a hypothetically more defined image, reversing the downsampling function can estimate the constituent color values from that color average and distribute them to the proper mapping. As time goes on, it is only increasingly likely that techniques such as these will make their way into facial recognition programs.
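To make the averaging relationship concrete, here is a minimal NumPy sketch (an illustration of block-average downsampling, not Google’s actual algorithm): a 24×28 grayscale image reduced to a 6×7 thumbnail, where each thumbnail pixel is the mean of a 4×4 tile. The demo also shows why reconstruction is not a simple inversion: scrambling pixels within a tile changes the high-resolution image but leaves the thumbnail identical, so any enhancement technique must guess among many candidates using learned assumptions about what real images look like.

```python
import numpy as np

def downsample(img, block=4):
    """Block-average downsampling: each block x block tile becomes one pixel."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)
hi_res = rng.integers(0, 256, size=(24, 28)).astype(float)  # stand-in "photo"
lo_res = downsample(hi_res, 4)                              # 6x7 thumbnail

# Permute the pixels inside one 4x4 tile: the tile's mean is unchanged,
# so the thumbnail is identical even though the high-res image differs.
shuffled = hi_res.copy()
tile = shuffled[:4, :4].flatten()
shuffled[:4, :4] = rng.permutation(tile).reshape(4, 4)

print(lo_res.shape)                                  # (6, 7)
print(np.allclose(downsample(shuffled, 4), lo_res))  # True
```

Because many distinct high-resolution images collapse to the same low-resolution average, "reversing" the process is underdetermined; this is why techniques like Google’s lean on statistical priors rather than exact inversion.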

The pervasiveness of facial recognition technology in policing, and its imminent proliferation to body cameras, combined with glaring inaccuracies in identification, already pose a substantial risk of mistakenly charging, or even convicting, innocent individuals for crimes they did not commit. These were exactly the pressing concerns raised at last week’s House Oversight Committee hearing by representatives and civil liberties defenders alike, chief among them that government regulation of this rapidly evolving technology is sorely needed to prevent abuses or miscarriages of justice. Less obvious is the danger that facial recognition-enabled body cameras represent to the free exercise of First Amendment rights. Privacy advocates worry that augmenting body cameras with facial recognition could chill First Amendment activity, as every officer patrolling a demonstration could be scanning protesters’ faces and, thus, confirming their identities and attaching them to a particular First Amendment-protected activity or political view. Black Lives Matter protesters are already the subject of ongoing surveillance in several major cities and metro areas, so the threat to the First Amendment is very real. For this reason, defenders of civil liberties are urging governments from the federal to the local level to enact privacy protections for biometric data.

You can find the full report from the Intercept here.

Jonathan Terrasi has been a Research Assistant with the Chicago Committee to Defend the Bill of Rights since January 2017. His interests include computer security, encryption, history, and philosophy. In his writing, he regularly covers topics on current affairs and political developments, as well as technical analyses and guides on security issues, published on his blog, Cymatic Scanning, and Linux Insider.