The Severe Consequences of Facial Recognition

The rapid adoption of facial recognition must be opposed at all costs. Here is why.

With the rise of state and corporate digital surveillance, and especially since the Snowden revelations of 2013, many people have adopted a strongly pro-privacy stance on issues of control over personal data, global mass surveillance, and, more recently, facial recognition.

So, what is facial recognition? Facial recognition is precisely what it sounds like: technology that identifies a person's face based on characteristics such as skin tone, facial hair, and other biometric data. Even though the concept seems rather simple (and it is), introducing it into people's lives on a mass societal scale raises many serious concerns.

Before I delve into the critiques of facial recognition and the consequences that can come with its introduction, I would like to take a closer look at the technology's recent history. To do this, we need to look at two particular entities:

IBM and the New York Police Department (NYPD).

After the attacks on September 11, 2001, the Western world, and the United States in particular, was plunged into a sense of uncertainty. This was primarily because the attacks shattered any notion that a foreign state or non-state actor could not strike the United States.

In the months following the attacks, then-President George W. Bush signed the USA PATRIOT Act into law and began laying the groundwork for what would eventually become large-scale global surveillance by United States intelligence agencies and those of allied countries.


According to The Intercept and the “confidential corporate documents” it obtained, it is evident “that IBM began developing this object identification technology” with “secret access to NYPD camera footage”. The Intercept also reported that by 2012, “IBM was creating new search features that allow other police departments to search camera footage for images of people by hair color, facial hair, and skin tone”.

To the average person, this may seem like nothing more than a way to narrow down suspects using additional identifying characteristics, but given the well-documented racial bias of police forces across the United States, it is unlikely to amount to anything that means well.

The Intercept also reported that IBM did not respond to many of the questions posed, including but not limited to the following:

Why wasn’t the collaboration with the NYPD made public?

What is the status of the current availability of video analytics?

What was the video footage provided by the NYPD used for?

Facial Recognition is Not Functional:

According to The Independent, while the Metropolitan Police in London were using “facial recognition software”, a whopping “98 per cent of alerts generated” were false positives, rendering the software “not yet fit for use”.

The Independent also reported that the South Wales Police had recorded “2,400 false positives in 15 deployments” since it began using the software in June 2017.
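Numbers like these are less surprising than they first appear: when genuine matches are rare in a large crowd, even a seemingly accurate system produces alerts that are overwhelmingly false. The back-of-the-envelope sketch below illustrates this base-rate effect with invented numbers; the figures are hypothetical and are not the Met's or South Wales Police's actual parameters.

```python
# Hypothetical illustration of why false positives dominate face-matching
# alerts. All numbers here are invented for illustration only.

def alert_breakdown(crowd_size, watchlist_hits, true_positive_rate,
                    false_positive_rate):
    """Return the expected (true alerts, false alerts) for one deployment."""
    innocents = crowd_size - watchlist_hits
    true_alerts = watchlist_hits * true_positive_rate
    false_alerts = innocents * false_positive_rate
    return true_alerts, false_alerts

# A crowd of 100,000 people containing just 10 genuine watchlist matches,
# scanned by a system that is "99% accurate" in both directions.
true_alerts, false_alerts = alert_breakdown(100_000, 10, 0.99, 0.01)

print(f"Expected true alerts:  {true_alerts:.0f}")
print(f"Expected false alerts: {false_alerts:.0f}")
print(f"Share of alerts that are false: "
      f"{false_alerts / (true_alerts + false_alerts):.0%}")
```

Under these assumptions, roughly 99% of all alerts are false, even though the system misclassifies only 1% of the people it scans. The scarcer the genuine matches, the worse the ratio gets.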


These massive inaccuracies show how much further development would be needed before recognition software could even potentially be usable. The errors reported can have devastating real-world impacts, including but not limited to false accusations and state harassment.

Invasion and Violation of Privacy Rights:

Governmental agencies have been seen time and time again to overstep legal bounds. Examples include the National Security Agency (NSA) breaking privacy laws, particularly with regard to “unauthorized surveillance of Americans or foreign intelligence targets” within American borders.

Even after the surge of privacy advocacy in the post-Snowden era and the passage of the USA FREEDOM Act, a law that was supposed to help curb the unconstitutional surveillance going on within the United States, things haven't changed. The Verge reported that in 2016 the NSA “collected more than 151 million records about Americans’ phone calls”, which prompted individuals and organizations such as the Electronic Frontier Foundation (EFF) to push for the removal of the program altogether.


If the privacy of millions of Americans has been knowingly invaded, why should people expect facial recognition technologies to be used any differently? State agencies have repeatedly shown that they are unwilling to go through the proper legal channels to gain access to private information, and there is no reason to believe facial recognition will be handled any other way.

Conclusion:

There are many other issues beyond the lack of functionality and the invasion of privacy, such as the threat of misidentifying minority groups and activists, as well as the lack of safeguards that intelligence organizations have against other state or non-state actors.

In its current state, facial recognition is not a technology that state agencies can be trusted to use, and it should most definitely be opposed in most cases to prevent its many shortcomings from causing harm. Anyone who cares about the private work done by academic researchers, activist groups, investigative journalists, and other individuals who need privacy to conduct their work safely should be firmly against the further rollout of biometric data collection.