AI that could thwart illegal activity by identifying criminals before they act is set to be rolled out in India.

The aim of the Minority Report-style CCTV surveillance system is to prevent offences such as sexual assault by looking at the body language of people to predict what they are about to do.

An Israeli security and AI research company will soon use AI to analyse the terabytes of data streamed from CCTV cameras in public areas in India.


Pictured: a display showing a facial recognition system for law enforcement

The partnership has been formed between Tel Aviv-based company Cortica and Best Group in India, according to Digital Trends.

The company will analyse 'behavioural anomalies' to spot when someone might be about to commit a crime.

This kind of citizen-monitoring technology is already in use in 40 local governments in China.

The technology will monitor individuals by looking for small twitches that might mean they are about to do something illegal.

Co-founder and COO of Cortica, Karina Odinaev, said the technology could identify movements often overlooked by security teams, potentially making cities safer.

She said 'unsupervised learning' is required for the software to learn what to spot, which is why they want to train it on security cameras.

Like humans, this technology needs to see something multiple times before it learns to recognise key warning signs.

If the system makes a mistake, programmers can find out which file was responsible for the dodgy call and re-teach it.

The technology could be used for different types of surveillance and could also monitor passenger behaviour using footage obtained from drones and satellites.

For example, in self-driving taxis, the system could detect if someone might be about to assault another person.

With crowds, it could also monitor when a situation might be about to turn potentially dangerous.

Based on a short story by science fiction writer Philip K. Dick, Minority Report, starring Tom Cruise (pictured), is a thriller set in 2054 in which police use psychic technology to arrest and convict people before they commit crimes

Law enforcement in large cities like London and New York already use video surveillance for facial recognition and to analyse license plates.

The New Orleans mayor Mitch Landrieu has proposed a crime-fighting surveillance plan using municipal cameras and webcams belonging to businesses.

However, immigrant workers have raised concerns that the system could be used to hunt down undocumented workers.

China is already using similar facial recognition CCTV technology.

SenseTime, which works in partnership with Honda on automated vehicle research, has its technology in use across the country.

The Hong Kong-based company's technology can accurately identify individuals recorded on surveillance cameras for crime prevention and policing.

It monitors individuals on government blacklists, particularly during festivals and other public events as well as in airports.

WHAT ARE PREDICTIVE POLICING SYSTEMS?

Predictive policing systems forecast when and where crimes will occur based on prior crime reports and other data.

Palantir Technologies has licensed its predictive policing software to local and international governments.

Most systems ingest vast amounts of data, including geography, criminal records, the weather and social media records. From that, they make predictions about individuals or places that are likely to be involved in a crime, according to the Verge.

Many other predictive policing systems are in use, and they differ in their approaches.

The Los Angeles Police Department, New York Police Department, Chicago Police Department and, now, the New Orleans Police Department all use predictive policing.

Chicago's police department uses a notorious 'heat list', an algorithm-generated list that singles out people who are most likely to be involved in a shooting.

However, many experts have identified issues with Chicago's heat list. The government-funded RAND Corporation published a report saying that the heat list was not nearly as effective as a standard wanted list. It could also encourage a new form of profiling that draws unnecessary police attention to people.

Another academic study found that the heat list can have a 'disparate impact' on poor communities of colour.

A California startup called PredPol also built predictive policing software that has been used by law enforcement, including the LAPD. In 2016, researchers reverse engineered PredPol's algorithm and found that it replicated systemic bias against over-policed communities of colour. They also found that historic data is not a good indicator of future criminal activity.

The NYPD also had an agreement with Palantir Technologies to use its predictive policing systems.

SenseTime raised a record $600 million (£420m) in its latest funding round, hot on the heels of a government push to make China an international leader in AI by 2025.

The company did not disclose its total valuation, but said it is now the world's most valuable AI platform. It also said it set a world record for the amount raised in a single funding round by an AI firm.

It is now more valuable than London-based competitor DeepMind, which was bought by Google for an estimated $565 million (£400m) in 2014.

Alibaba executive vice chairman Joe Tsai said in the statement: 'We are especially impressed by their R&D capabilities in deep learning and visual computing.

'Our business at Alibaba is already seeing tangible benefits from our investments in AI and we are committed to further investment.'