Controversial new software developed by Japanese startup Vaak could be used to identify potential shoplifters based on their body language.

The system is trained to recognize ‘suspicious’ activities such as fidgeting or restlessness in security footage, according to Bloomberg Quint.

While the system is designed to crack down on theft, the idea being that staff can approach a potential thief once alerted, predictive policing efforts have sparked concerns that people may be unfairly targeted as a result of racial and other biases.



Vaak’s criminal-detecting AI can alert staff to suspicious behaviour via a smartphone app once it spots something in the CCTV stream, according to Bloomberg.

The Minority Report-style system was used last year to successfully track down a person who had shoplifted from a convenience store in Yokohama.

Ideally, however, the startup intends its technology to serve as a preventative measure.

Vaak says its AI can distinguish between normal customer behaviour and ‘criminal behaviour,’ such as tucking a product away into a jacket without paying.

But, it can also detect what could be the warning signs of a theft before it actually happens.

In this case, staff could be alerted and sent over to approach the person, in hopes of thwarting the theft by asking whether they need help, according to Bloomberg.
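The detect-then-alert flow described above can be sketched in a few lines of code. This is a minimal illustration only: the behaviour labels, confidence threshold, and function names are hypothetical and are not drawn from Vaak's actual system.

```python
from dataclasses import dataclass

# Hypothetical behaviour labels and threshold -- illustrative, not Vaak's real model.
SUSPICIOUS_BEHAVIOURS = {"fidgeting", "restlessness", "concealing_item"}
ALERT_THRESHOLD = 0.8

@dataclass
class Detection:
    """One behaviour prediction for a person seen in a CCTV frame."""
    camera_id: str
    behaviour: str
    confidence: float

def should_alert(detection: Detection) -> bool:
    # Flag only behaviours on the suspicious list, predicted with high confidence,
    # so staff are not paged for routine browsing or uncertain guesses.
    return (detection.behaviour in SUSPICIOUS_BEHAVIOURS
            and detection.confidence >= ALERT_THRESHOLD)

def dispatch_alerts(detections):
    """Return the subset of detections that would trigger a smartphone alert."""
    return [d for d in detections if should_alert(d)]

stream = [
    Detection("cam-3", "browsing", 0.95),          # normal behaviour: ignored
    Detection("cam-3", "concealing_item", 0.91),   # high-confidence suspicious act
    Detection("cam-7", "fidgeting", 0.42),         # low confidence: no alert
]
alerts = dispatch_alerts(stream)
for a in alerts:
    print(f"ALERT {a.camera_id}: {a.behaviour} ({a.confidence:.2f})")
```

The confidence threshold is the key design choice in such a pipeline: set too low, it floods staff with false alarms; set too high, it misses thefts entirely.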

Vaak is now testing the system in dozens of stores around Tokyo, and says the technology could be expanded to applications beyond crime prediction, including video-based checkout systems.

Predictive policing technology has grown in recent years, with secretive trials in China and even some parts of the US.

In 2018, it was revealed that controversial Silicon Valley startup Palantir had been working with the New Orleans Police Department to test a system that predicts where crimes are more likely to occur, and who is most likely to commit them.

But, experts warn these algorithms can inherit biases from their training data.


An MIT study published this past fall found that many popular AI systems exhibit racist and sexist leanings.

Researchers have urged others to use better data to ensure biases are eliminated.

'Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,' said lead author Irene Chen, a PhD student, when the study was published in November.

'But algorithms are only as good as the data they're using, and our research shows that you can often make a bigger difference with better data.'