Schools seeking the latest tools to prevent mass shootings are installing AI-powered cameras that can identify people and detect their movements. Despite the good intentions, civil liberties groups have raised privacy concerns, as AI technology is not infallible at detecting people and their behavior, and the cameras themselves can be hacked.

Schools are looking into a new strategy for preventing mass shootings: AI camera systems, known as real-time video analytics or intelligent video, that can identify people and "suspicious behavior." According to a report by the Los Angeles Times, the systems amass large amounts of data that help the AI learn, over time, to detect mannerisms, gait, and dress.

According to the report, the AI can stitch images of a person into a video narrative and show their immediate location, as well as where they have been and where they are going. If a gunman were on school grounds, the cameras would identify his location and movements so that police could respond more quickly.

“What we’re really looking for are those things that help us to identify things either before they occur or maybe right as they occur so that we can react a little faster,” said Paul Hildreth, the emergency operations coordinator for Fulton County Schools.

Some school districts, such as Broward County in Florida and Weld County School District 6 in Greeley, Colorado, have reportedly already been using some of these cameras, made by Avigilon. Weld County plans to upgrade its cameras so that the AI will be able to identify firearms and read people's facial expressions.

“It’s almost kind of scary,” said John Tait, the security manager for the school district in Weld County. “It will look at the expressions on people’s faces and their mannerisms and be able to tell if they look violent.”

The report added that police, businesses, stadiums, and others have also already been using the AI cameras. Retailers, for example, use the technology to identify shoplifters, and some have even tested AI's ability to read facial expressions to determine whether people are having an enjoyable shopping experience, in order to improve customer service.

“The issue is personal autonomy and whether you’ll be able to go around walking in the public square or a shopping mall without tens, hundreds, thousands of people, companies and entities learning things about you,” said Joseph Jerome of the Center for Democracy and Technology, a non-profit organization that focuses on individual rights and privacy protections.

Civil liberties groups such as the ACLU have also raised privacy concerns about the technology. Jay Stanley, a senior policy analyst at the ACLU, has warned that the technology is still "pretty unreliable at recognizing the complexities of human life," even as it grows stronger in other respects.

“People haven’t really caught up to how broad and deep the technology can now go,” said Stanley. “When I explain it, people are pretty amazed and spooked.”

The report also acknowledged that AI technology is not infallible, citing a study from Wake Forest University conducted last year, which discovered that some facial-recognition software reads black faces as angrier than white faces.

Moreover, whenever technology is introduced — especially technology that relies on an Internet connection — it paves the way for hackers to intercept camera footage or manipulate the system.

Take, for example, using a “smart home” security device to lock your house. While updating your home with the latest locking device on the market might seem like a good idea, security experts warn that simply adding an Internet connection to a security device can make it more vulnerable to hackers, and thus less secure than a traditional lock and key.

The same goes for cameras. In 2017, it was reported that as many as 200,000 webcams connected to WiFi were found to be vulnerable to hacking. In 2016, a mother was horrified to learn that the cameras she used to monitor her 8-year-old girls had been hacked, and that videos of her children “in their home, dressing, sleeping, playing,” were being livestreamed on the Internet.

The FBI has also warned parents against using “smart” toys connected to the Internet, stating that they “may contain parts or capabilities such as microphones, cameras, GPS, data storage and speech recognition that may disclose personal information.”

While preventing mass shootings may take precedence over other concerns, perhaps adopting more traditional security measures, such as hiring armed combat veterans or eliminating dangerous and patently flawed "gun free zones," can better protect children in schools across the United States.

You can follow Alana Mastrangelo on Twitter at @ARmastrangelo, on Parler at @alana, and on Instagram.