By Nicholas West

As I’ve been covering for the last several months, we are beginning to see more examples of artificial intelligence being relied upon within the U.S. criminal justice system. If recent accusations are true, this trend in surveillance and pre-crime algorithmic analysis is even more dangerous than the intrusion upon our privacy — it can lead to false arrests.

As reported by the BBC, a case of mistaken identity appears to have resulted from what would be a comedy of errors, were it not for the serious consequences for the accused. It perfectly highlights the pitfalls of relying upon such complex systems when dealing with human beings and their range of interactions.


A student is suing Apple Inc for $1bn (£0.77bn), claiming that its in-store AI led to his mistaken arrest. Ousmane Bah, 18, said he was accused of stealing from Apple Stores in four US states, and arrested at his home in New York last autumn. He believes Apple’s algorithms linked video footage of the thief with his name, leading to the charges. Apple has told the BBC that it does not use facial recognition technology in its stores.

Mr Bah claims that a detective reviewed security footage from the time of one of the crimes and found the thief looked “nothing like” him. […] Mr Bah believes that Apple’s algorithms are now trained to connect his name to images of the thief. […] Mr Bah claims that travelling to different states to respond to charges filed against him has affected his college attendance, and his grades have suffered as a result.

Apple’s Face ID technology caused a stir when it was launched on the iPhone X in 2017, with commentators concerned that users’ biometric data could be hacked if they used the feature.

Despite the fact that security breaches and tech failings of all sorts plague the news on a daily basis, police and governments continue to roll out this technology to the public. The UK has continued to test A.I. programs on the public even though they have been proven to be “staggeringly inaccurate.” In the U.S., the very nature of the 1st, 4th and 5th Amendments to the Constitution appears to be at stake, yet there is a startling lack of seriousness attached to this topic. These stories are often presented as anomalies or even as amusing anecdotes.

Not only is data being scooped up in a massive dragnet through government systems with little oversight to begin with, but private companies like Google and Amazon are having this data sought by law enforcement even from within private property through Alexa, Ring, Nest and other smart tech. You can read some of those horrifying stories here and here.

And yet the U.S. Government Accountability Office recently issued a comprehensive warning about inaccurate facial recognition systems being integrated into nationwide databases, which you can read here in the section entitled “Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy.”

Perhaps it will take a series of lawsuits like this one to make people realize that the American justice system is under threat of being rewritten to accept direction from artificial intelligence that is currently anything but intelligent enough for such a serious task.

Nicholas West writes for Activist Post. Support us at Patreon for as little as $1 per month. Follow us on Minds, Steemit, SoMee, BitChute, Facebook and Twitter. Ready for solutions? Subscribe to our premium newsletter Counter Markets.


Image credit: Kaspersky