The New York Times recently published an investigation into a little-known startup that helps law enforcement match photos of unknown people to their online images using facial recognition, raising serious privacy concerns about the company and its massive database of photos.

In an article titled “The Secretive Company That Might End Privacy as We Know It,” the New York Times reports that a tech startup working with law enforcement could pose serious privacy risks to members of the public.

Clearview AI, a facial recognition tech startup, has developed a system that allows users to upload a photo of a person to the app and see public photos of that person, along with links to where those photos appeared. The system scrapes information from Facebook, YouTube, Venmo and millions of other websites to help law enforcement track down individuals.

The New York Times writes:

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases. Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

The technology is reportedly being used by multiple law enforcement agencies, among other groups, but many security experts have warned that it could easily be weaponized:

“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

The New York Times also notes that while its reporter was investigating Clearview and finding little accurate public information about the company, Clearview began investigating the reporter in turn:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for. Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

Read more about Clearview and facial recognition technology at the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or email him at lnolan@breitbart.com