The software will learn the way humans do, through experiences rather than hard wiring.

What would it be like if we could stop crimes before they even happen? If we could cut the crime rate without even lifting a finger? Though it may seem like something out of a science-fiction horror movie (à la Black Mirror), the reality is much closer than it seems. In fact, it’s ready for its debut in India.

The crime-predicting autonomous Artificial Intelligence software is the brainchild of Tel Aviv-based software company Cortica. Based on military-grade security systems, Cortica’s software is intended for police use, to prevent violent crimes like rapes, attacks, or muggings.

Deployed in a body cam or a security camera, the software will monitor human beings in real time and collect information on them based on their behaviors. It will also monitor micro-expressions, the near-undetectable facial expressions, twitches, or mannerisms that can reveal a person’s intentions.

The software will be able to combine data from video cameras, drones, and satellites, and learn and judge behavior differences between both individual humans and large groups of people.

The program came about through research on segments of rat brains. The software is based on the electrical signals and reactions to stimuli that an ex vivo section of a rat’s brain experienced. This research helped the team simulate the brain’s original processes and replicate them in software.

At a meeting in Tel Aviv, Cortica co-founder and COO Karina Odinaev explained the basis of Cortica’s inner workings. The software, she said, will learn the same way that humans learn, through experience rather than instruction. Unlike most AI systems, which rely on “deep learning” networks that hardwire information into the system, the Cortica AI system will be able to pick up on new stimuli, form appropriate reactions to them, and store those correct reactions for the future.
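The distinction Odinaev draws can be illustrated with a minimal sketch. Everything here is hypothetical, invented purely for illustration, and not Cortica’s actual architecture: a “hardwired” model whose reactions are fixed at build time, versus a learner that stores the correct reaction to each new stimulus it encounters.

```python
class HardwiredClassifier:
    """Reactions fixed at build time; novel stimuli fall through."""

    def __init__(self, rules):
        self.rules = dict(rules)  # illustrative stimulus -> reaction table

    def react(self, stimulus):
        return self.rules.get(stimulus, "unknown")


class ExperienceLearner:
    """Starts empty and accumulates reactions from experience."""

    def __init__(self):
        self.memory = {}

    def react(self, stimulus):
        return self.memory.get(stimulus, "unknown")

    def learn(self, stimulus, correct_reaction):
        # Store the new correct reaction for future encounters.
        self.memory[stimulus] = correct_reaction


fixed = HardwiredClassifier({"raised_fist": "alert"})
learner = ExperienceLearner()

# Both fail on a stimulus neither has seen before...
print(fixed.react("sudden_lunge"))    # unknown
print(learner.react("sudden_lunge"))  # unknown

# ...but only the learner can incorporate the new experience.
learner.learn("sudden_lunge", "alert")
print(learner.react("sudden_lunge"))  # alert
```

The point of the toy example is only the asymmetry: after deployment, the hardwired model can’t change, while the experience-driven one keeps extending its own stimulus-reaction memory.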

If you think this all sounds a little Minority Report-ish, you’re not alone. Many critics and curious bystanders have expressed concerns over the software and its ability to function without error. Cortica provides some peace of mind, though it leaves a lot of questions unanswered.

According to Cortica, if the system does make a mistake – it gives the example of falsely anticipating a car pulling out of a driveway – the programmers will be able to trace the individual file responsible for the judgment, and fix it. Of course, the software is not functioning on its own, and human intervention, at least for now, is still the first line of defense.

While this may seem like groundbreaking technology, it’s really just an expansion of what’s already in use. The military uses facial recognition software to pick out suspected terrorists, and multiple cities around the world employ video surveillance to monitor license plates and personal information.

Next, read about what the world’s greatest minds think about artificial intelligence. Then, check out the Atlas robot, which can almost think for itself.