We are surrounded by surveillance cameras that record us at every turn. But for the most part, no one is watching what those cameras observe or record, because no one will pay for the armies of security guards that such a time-consuming and monotonous task would require.

But imagine that all that video were being watched — that millions of security guards were monitoring it all 24/7. Imagine this army is made up of guards who don’t need to be paid, who never get bored, who never sleep, who never miss a detail, and who have total recall for everything they’ve seen. Such an army of watchers could scrutinize every person they see for signs of “suspicious” behavior. With unlimited time and attention, they could also record details about all of the people they see — their clothing, their expressions and emotions, their body language, the people they are with and how they relate to them, and their every activity and motion.

That scenario may seem far-fetched, but it’s a world that may soon be arriving. The guards won’t be human, of course — they’ll be AI agents.

Today we’re publishing a report on a $3.2 billion industry building a technology known as “video analytics,” which is starting to augment surveillance cameras around the world and has the potential to turn them into just that kind of nightmarish army of unblinking watchers.

Powered by cutting-edge, deep learning-based AI, the science is moving so fast that early versions of this technology are already starting to enter our lives. Some of our cars now come equipped with dashboard cameras that can sound alarms when a driver starts to look drowsy. Doorbell cameras today can alert us when a person appears on our doorstep. Cashier-less stores use AI-enabled cameras that monitor customers and automatically charge them when they pick items off the shelf.

In the report, we looked at where this technology has been deployed, and what capabilities companies are claiming they can offer. We also reviewed scores of papers by computer vision scientists and other researchers to see what kinds of capabilities are being envisioned and developed. What we found is that the capabilities that computer scientists are pursuing, if applied to surveillance and marketing, would create a world of frighteningly perceptive and insightful computer watchers monitoring our lives.

Cameras that collect and store video just in case it is needed are being transformed into devices that can actively watch us, often in real time. It is as if a great surveillance machine has been growing up around us — one that has so far been largely dumb and inert, and is now, in a meaningful sense, “waking up.”

<iframe allowfullscreen frameborder="0" height="315" width="560" src="https://www.youtube.com/embed/1dDhqX3txf4?version=3"></iframe> Privacy statement. This embed will serve content from youtube.com.

Computers are getting better and better, for example, at what is called simply “human action recognition.” AI training datasets include thousands of actions that computers are being taught to recognize — things such as putting a hat on, taking glasses off, reaching into a pocket, and drinking beer.

Researchers are also pushing to create AI technologies that are ever-better at “anomaly detection” (sounding alarms at people who are “unusual,” “abnormal,” “deviant,” or “atypical”), emotion recognition, the perception of our attributes, the understanding of the physical and social contexts of our behaviors, and wide-area tracking of the patterns of our movements.

Think about some of the implications of such techniques, especially when combined with other technologies like face recognition. For example, it’s not hard to imagine some future corrupt mayor saying to an aide, “Here’s a list of enemies of my administration. Have the cameras send us all instances of these people kissing another person, and the IDs of who they’re kissing.” Governments and companies could use AI agents to track who is “suspicious” based on such things as clothing, posture, unusual characteristics or behavior, and emotions. People who stand out in some way and attract the attention of such ever-vigilant cameras could find themselves hassled, interrogated, expelled from stores, or worse.

Many or most of these technologies will be somewhere between unreliable and utterly bogus. Based on experience, however, that often won’t stop them from being deployed — and from hurting innocent people. And, like so many technologies, the weight of these new surveillance powers will inevitably fall hardest on the shoulders of those who are already disadvantaged: people of color, the poor, and those with unpopular political views.

We are still in the early days of a revolution in computer vision, and we don’t know how AI will progress, but we need to keep in mind that progress in artificial intelligence may end up being extremely rapid. We could, in the not-so-distant future, end up living under armies of computerized watchers with intelligence at or near human levels.

These AI watchers, if unchecked, are likely to proliferate in American life until they number in the billions, representing an extension of corporate and bureaucratic power into the tendrils of our lives, watching over each of us and constantly shaping our behavior. In some cases, they will prove beneficial, but there is also a serious risk that they will chill the freedom of American life, create oppressively extreme enforcement of petty rules, amplify existing power disparities, disproportionately increase the monitoring of disadvantaged groups and political protesters, and open up new forms of abuse.

Policymakers must contend with this technology’s enormous power. They should prohibit its use for mass surveillance, narrow its deployments, and create rules to minimize abuse.

Read the full report here.