Correction: The original version of this article incorrectly stated that the Campus Safety Alliance held a town hall Wednesday. In fact, the Community Programs Office held the town hall.

This post was updated Jan. 30 at 1:57 p.m.

2020 is looking more and more like “1984.”

That is, UCLA is watching you.

New policies for the implementation of facial recognition technology were introduced in drafted revisions to UCLA Interim Policy 133, which governs the university’s management and use of security camera systems. The Campus Safety Alliance first received the draft from campus administrators in December, and the Community Programs Office held a town hall Wednesday for students to air their concerns over the proposed change.

This won’t be the first time Policy 133 has been criticized over privacy concerns.

When UCLA first proposed the revised policy in September 2018, students voiced concerns about its plans to centralize on-campus security camera systems and give university police access to footage during emergencies. Unsurprisingly, concerns about the administration’s failure to sufficiently solicit student input also came up.

But rather than learning from students’ reactions to campus surveillance, UCLA seems to have jumped off the deep end by taking a page out of Big Brother’s playbook.

The implementation of facial recognition technology would present a major breach of students’ privacy and make students feel unsafe on a campus they are supposed to call home. It is one thing to monitor campus activity with security cameras, but it’s another entirely to automatically identify individuals and track their every move on campus.

And spending time on campus should not make students subject to university overreach.

Any security measure that creates a more hostile campus environment is counterproductive. An institution the size of UCLA should not even consider engaging in a practice that involves collecting invasive amounts of data on its population of more than 45,000 students and 50,000 employees.

And the issue goes beyond privacy.

Facial recognition technology is far from perfectly accurate, and factors such as image quality and environmental conditions can significantly reduce its reliability. To make matters worse, many facial recognition systems are less adept at identifying and differentiating people of color and women, leading to higher chances of misidentification and racial profiling, according to a study on facial recognition software from the National Institute of Standards and Technology.

For students belonging to these groups, facial recognition technology would simply reinforce the biases that are already stacked against them. It is for these very reasons that several advocacy groups have launched an organized campaign to ban the use of facial recognition software at all U.S. colleges.

Concerns over racial bias and profiling have even led cities such as San Francisco and Oakland to completely ban the use of facial recognition technology by their governments and police. Considering the diversity of UCLA’s student population – which includes minority and undocumented students – UCLA should follow in these cities’ footsteps and refrain from subjecting its students to the dangers of this highly controversial technology.

Of course, campus safety should be a top priority, and the board has previously called on the university to continue searching for new ways of protecting its students. But this method isn’t the answer. The dangers of facial recognition technology far outweigh any contributions it might make to campus safety, and its implementation would constitute an enormous violation of students’ right to privacy.

Administrators have clearly been reading too many dystopian novels from the stacks at Powell Library.

But in 2020, Orwell should stay on the bookshelves.