
London’s Metropolitan Police believes that its artificial intelligence software will be up to the task of detecting images of child abuse in the next “two to three years.” But, in its current state, the system can’t tell the difference between a photo of a desert and a photo of a naked body.

The police force already leans on AI to help flag incriminating content on seized electronic devices, using custom image recognition software to scan for pictures of drugs, guns, and money. But when it comes to nudity, it’s unreliable.

“Sometimes it comes up with a desert and it thinks it’s an indecent image or pornography,” Mark Stokes, the department’s head of digital and electronics forensics, recently told The Telegraph. “For some reason, lots of people have screen-savers of deserts and it picks it up thinking it is skin colour.”

It makes sense that the department would want to offload the burden of searching phones and computers for photos and videos of potential child pornography. Being regularly exposed to that type of content is mentally taxing, and handing the labor to unfeeling machines sounds reasonable. But the software in its current iteration isn’t just unreliable—the consequences of relying on machines to flag and store this type of sensitive content are profoundly disconcerting.

Stokes told The Telegraph that the department is working with “Silicon Valley providers” to help train the AI to successfully scan for images of child abuse. But as we’ve seen, even the most powerful tech companies can’t seem to deploy algorithms that don’t fuck up every once in a while. They have promoted dangerous misinformation, abetted racism, and accidentally censored bisexual content. And when Gizmodo recently tested an app intended to automatically identify explicit images in your camera roll, it flagged a photo of a dog, a donut, and a fully clothed Grace Kelly.

Even when humans oversee automated systems, the results are imperfect. Last year, a Facebook moderator removed a Pulitzer-winning photograph of a naked young girl fleeing a napalm attack during the Vietnam War, an image that was reportedly flagged by an algorithm. The company later restored the photo and admitted that it was not child pornography.

Machines lack the ability to understand human nuances, and the department’s software has yet to prove that it can even successfully differentiate the human body from arid landscapes. And as we saw with the Pulitzer-winning photograph controversy, machines are also not great at understanding the severity or context of nude images of children.

Perhaps even more troubling is a plan to potentially relocate these images to major cloud service providers. According to Stokes, the department is considering moving the data flagged by machines from its current locally based data center to providers like Amazon, Google, or Microsoft. These companies have proven they are not immune to security breaches, which could leave thousands of incriminating images susceptible to a leak.

[The Telegraph]