Clearview AI has built a database of billions of photos that it says can reveal just about anyone's true identity. But there are troubling questions about its past.

Clearview AI, a facial recognition company that says it’s amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds. “How a Terrorism Suspect Was Instantly Identified With Clearview,” read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest. It’s a compelling pitch that has helped rocket Clearview to partnerships with police departments across the country. But there’s just one problem: The New York Police Department said that Clearview played no role in the case.

As revealed to the world in a startling story in the New York Times this weekend, Clearview AI has crossed a boundary that no other tech company seemed willing to breach: building a database of what it claims to be more than 3 billion photos that can be used to identify a person in almost any situation. It’s raised fears that a much-hyped moment, when universal facial recognition could be deployed at a mass scale, is finally at hand.

But the company, founded by CEO Hoan Ton-That, has drawn a veil over itself and its operations, misrepresenting its work to police departments across the nation, hiding several key facts about its origins, and downplaying its founders’ previous connections to white nationalists and the far right.

As it emerges from the shadows, Clearview is attempting to convince law enforcement that its facial recognition tool, which has been trained on photos scraped from Facebook, Instagram, LinkedIn, and other websites, is more accurate than any other on the market. However, emails, presentations, and flyers obtained by BuzzFeed News reveal that its claims to law enforcement agencies are impossible to verify — or flat-out wrong.
For example, the pitch email about its role in catching an alleged terrorist, which BuzzFeed News obtained via a public records request last month, explained that when the suspect’s photo was “searched in Clearview,” its software linked the image to an online profile with the man’s name in less than five seconds. Clearview AI’s website also takes credit in a flashy promotional video, using the incident, in which a man allegedly placed rice cookers made to look like bombs, as one example among thousands in which the company assisted law enforcement. But the NYPD says this account is not true.

Obtained by BuzzFeed News Image from Clearview marketing materials sent to the police department in Bradenton, Florida, which claim its technology was used to identify the suspect.

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.” While Clearview has claimed associations with the country’s largest police department in at least two other cases, the spokesperson said “there is no institutional relationship” with the company.

In response, Ton-That said the NYPD has been using Clearview on a demo basis for a number of months. He declined to provide any further details.

In the Times report and in documents obtained by BuzzFeed News, Clearview AI said that its facial recognition software had been used by more than 600 police departments and government groups, including the FBI. But in at least two cases, BuzzFeed News found that the company suggested it was working with a police department simply because it had submitted a lead to a tip line.

“There has to be some personal or professional responsibility here. The consequences of a false positive is that someone goes to jail.”

Ton-That would not specify the exact number of paid police partnerships the company has. He declined to comment on his company’s claim to have worked with the NYPD to solve the subway terrorism case, telling BuzzFeed News, “Clearview was used by multiple agencies” to identify the suspect. The New York City branch of the state police and the Metropolitan Transit Authority have denied that their agencies were involved in the subway case. Ton-That subsequently said it was an unnamed federal agency.

While it’s common for startups to make exaggerated claims, the stakes are much higher for a company building tools used by police to identify criminal suspects. “There has to be some personal or professional responsibility here,” said Liz O’Sullivan, an artificial intelligence researcher and the technology director at the Surveillance Technology Oversight Project. “The consequences of a false positive is that someone goes to jail.”

“We had ethical concerns”

Originally known as Smartcheckr, Clearview was the result of an unlikely partnership between Ton-That, a small-time hacker turned serial app developer, and Richard Schwartz, a former adviser to then–New York mayor Rudy Giuliani. Ton-That told the Times that they met at a 2016 event at the Manhattan Institute, a conservative think tank, after which they decided to build a facial recognition company.

While Ton-That has erased much of his online persona from that time period, old web accounts and posts uncovered by BuzzFeed News show that the 31-year-old developer was interested in far-right politics. In a partial archive of his Twitter account from early 2017, Ton-That wondered why all big US cities were liberal, while retweeting a mix of Breitbart writers, venture capitalists, and right-wing personalities. “In today’s world, the ability to handle a public shaming / witch hunt is going to be a very important skill,” he tweeted in January 2017.

Those interactions didn’t just happen online. In June 2016, Mike Cernovich, a pro-Trump personality on Twitter who propagated the Pizzagate conspiracy, posted a photo of Ton-That at a meal with far-right provocateur Chuck Johnson, with both of them making the OK sign with their hands, a gesture that has since become favored by right-wing trolls.

“I was only making the Okay sign in the photo as in ‘all okay,’” Ton-That said in an email. “It was completely innocuous and should not be construed as anything more than that.

“I am of Asian decent [sic] and do not hold any discriminatory views towards any group or individual,” he added. “I am devoting my professional life to creating a tool to help law enforcement solve heinous crimes and protect victims. It would be absurd and unfair for anyone to distort my views and values based on old photos of any sort.”

Screenshot via Twitter

By the election, Ton-That was on the Trump train, attending an election night event where he was photographed with Johnson and his former business partner Pax Dickinson. The following February, Smartcheckr LLC was registered in New York, with Ton-That telling the Times that he developed the image-scraping tools while Schwartz covered the operating costs. By August that year, they registered Clearview AI in Delaware, according to incorporation documents.

“It would be absurd and unfair for anyone to distort my views and values based on old photos of any sort.”

While there’s little left online about Smartcheckr, BuzzFeed News obtained and confirmed a document, first reported by the Times, in which the company claimed it could provide voter ad microtargeting and “extreme opposition research” to Paul Nehlen, a white nationalist who was running on an extremist platform to fill the Wisconsin congressional seat of the departing speaker of the House, Paul Ryan.

A Smartcheckr contractor, Douglass Mackey, pitched the services to Nehlen. Mackey later became known for running the racist and highly influential Trump-boosting Twitter account Ricky Vaughn. Described by HuffPost as “Trump’s most influential white nationalist troll,” Mackey built a following of tens of thousands of users with a mix of far-right propaganda, racist tropes, and anti-Semitic cartoons. MIT’s Media Lab ranked Vaughn, who used multiple accounts to dodge several bans, as one of the top 150 influencers of the 2016 presidential election — ahead of NBC News and the Drudge Report.

“An unauthorized proposal was sent to Mr. Nehlen,” Ton-That said. “We did not seek this work. Moreover, the technology described in the proposal did not even exist.”

A disagreement between Mackey and other far-right figures led to his outing as the owner of the Vaughn persona, sweeping Smartcheckr up in the fallout. In April 2018, a white nationalist blogger named Christopher Cantwell posted Smartcheckr’s pitch documents to Nehlen as well as information about Schwartz, inviting a torrent of abuse.

“[Mackey] worked for 3 weeks as a consultant to Smartcheckr, which was the initial name of Clearview in its nascent days years ago,” Ton-That said. “He was referred to me by a friend who is a liberal Democrat.” When asked if the company knew about Mackey’s Twitter persona, Ton-That responded, “Absolutely not.” Mackey did not respond to multiple requests for comment.

By summer 2018, Ton-That and Schwartz were working on Clearview AI and their image-scraping software had begun to take off. The company raised funding from billionaire venture capitalist and Facebook board member Peter Thiel and other investors, and Ton-That applied to XRC Labs, a New York–based startup accelerator focused on retail technology. Pano Anthos, the head of XRC Labs, told BuzzFeed News that Ton-That interviewed for a spot in an XRC Labs cohort, but Clearview wasn’t the “right fit” for the program because the company was “focused on security.” Ton-That confirmed to BuzzFeed News that the company applied to XRC Labs but did not go through with the program.

Still, Clearview was briefly listed on some materials associated with XRC’s events and presentations. At one event, the company boasted of its “extremely accurate facial identification.” “In under a second it can find a match in our database of millions of photos,” read a now-deleted blurb about the company for a retail tech event. “It can be integrated in security cameras, iPhone/iPad apps, and with an API. Unlike other facial recognition companies, Clearview AI provides a curated database of millions (and soon billion) of faces from the open-web.”

Screenshot via LinkedIn A view of the ad history for Clearview's promotions on LinkedIn.

By the following year, the company left behind whatever aspirations it had for the retail industry and focused on relationships with law enforcement. Ton-That set up fake LinkedIn profiles to run ads about Clearview, boasting that police officers could search over 1 billion faces in less than a second. “It is possible that the company placed a few ads on LinkedIn,” Ton-That said via email.

In January, Ton-That’s name was listed as a speaker for the law enforcement conference ISS World North America, where he was scheduled to speak in September on panels about facial and image recognition, though his name was later removed. When contacted this summer, an event organizer declined to comment to BuzzFeed News about Clearview’s involvement and noted that the event was closed to reporters.

While Clearview operated quietly with a bare-bones website and no social media presence, it tried to raise more than $10 million from venture investors. One potential investor who met with the company said they were introduced by Naval Ravikant, a Clearview backer who previously employed Ton-That at AngelList, the angel investing network that Ravikant cofounded.

The investor who took the pitch told BuzzFeed News that Clearview’s demos were slick, with Ton-That taking photos of people in the room and using his tool to find images of them from around the web. And while he compared the software to a Google search for people’s faces, Clearview’s CEO shied away from explaining how those images had been collected, according to the investor. Ultimately, they did not write a check. “We had clear ethical concerns,” the person said.

Dubious marketing claims

As Clearview has grown, it’s relied on dubious marketing claims in pitches to some of the largest police departments in the nation. Last summer, during a procurement process for facial recognition technology, a law firm representing Clearview AI sent the Atlanta Police Department a flyer touting the startup’s “proprietary image database” and the “world’s best facial-recognition technology.” Claiming that the company’s “mountains” of data were its “secret sauce,” the document, which was obtained through a BuzzFeed News public records request, claimed that Clearview played a crucial role in the capture of a suspect in an alleged assault.

“On September 24, 2018, The Gothamist published a photo of a man who assaulted two individuals outside a bar in Brooklyn, NY,” read the flyer. “Using Clearview, the assailant was instantly identified from a large-scale, curated image database and the tip was delivered to the police, who confirmed his identity.”

While the NYPD did distribute a photo of the suspect, who eventually turned himself in, a spokesperson for the department denied that Clearview played a role in its investigation.

The department similarly denied that Clearview helped solve an alleged groping in December 2018 on the New York City subway. In that incident, a woman took a photo of the alleged assailant, which was published in the city’s newspapers. Clearview claims that it ran the picture, discovered the identity of the suspect, and “sent the tip to the NYPD,” after which the suspect was “soon apprehended.” It also claimed that police were then able “to solve 40 cold cases” within a matter of weeks.

In both cases, Clearview responded that it submitted the photos via a tip line. “We ran the photo as a test of our early system and sent in an official tip program with no mention of Clearview,” Ton-That said in an email. “Clearview no longer conduct [sic] such tests.”

The NYPD, however, said that it “did not use Clearview to identify the suspects in these cases.” And the New York Daily News reported the suspect was arrested after officials received a tip from a community anti-crime group, the Guardian Angels, whose founder, Curtis Sliwa, “hand-delivered the information to police in the Columbus Circle subway station.” In a call with BuzzFeed News, Sliwa said his group received the tip from an acquaintance of the suspect.

It’s unclear if the Atlanta police dug into Clearview’s marketing claims, but the department signed an agreement with the facial recognition startup in September. That $6,000 deal gave the department three one-year licenses to use Clearview’s software, an order of magnitude cheaper than rival bids, which included a $42,000 system from Veritone and a five-year contract for NEC’s NeoFace WideNet that cost $75,000 per year.

A spokesperson for Atlanta police told BuzzFeed News that the department has “been pleased with what we have seen so far.” They did not answer questions about Clearview’s marketing tactics. “We have cautioned our investigators that simply matching a photo through the software does not meet the requirements for the probable cause needed to make an arrest,” the spokesperson said. “Investigators must then do further work to link the suspect to the crime.”

“Clearview AI is neither designed nor intended to be used as a single-source system for establishing the identity of an individual.”

As it signed deals, Clearview continued to misrepresent its relationship with the NYPD. It used images of the suspect from the Brooklyn bar beating in an October email sent through CrimeDex, a crime alert listserv used by police across the nation. In that email, which BuzzFeed News obtained via a public records request to the Bradenton, Florida, police department, a random man whose image was taken from an Argentine LinkedIn page is identified as a “possible match.” His name, however, does not match the name of the person who turned himself in to the NYPD. “Clearview AI is neither designed nor intended to be used as a single-source system for establishing the identity of an individual,” Ton-That said.

Though the company claimed to the Times and in marketing emails that its software is used by more than 600 police departments, it’s not clear how many of those are paying customers. Using the government contract database GovSpend, BuzzFeed News identified 12 police department deals with Clearview, including a $15,000 set of subscriptions by the New York State Police; $15,000 from Broward County, Florida; and $10,000 from Gainesville, Florida. BuzzFeed News also identified other proposed contracts with the cities of Antioch, California; Green Bay, Wisconsin; and Davie, Florida. When asked about these contracts, Ton-That said in an email, “We do not discuss our clients.”

Ton-That declined to say how many of Clearview’s customers were on paid contracts, so it’s possible that most are on free trials, like the 30-day test that police in Tampa, Florida, recently took. That department told the Orlando Sentinel last month that it had no plans to purchase the software. Ton-That declined to comment on specific law enforcement partnerships.

“World-class accuracy”

BuzzFeed News also uncovered several inconsistencies in what Clearview tells police departments about its software. “Clearview’s speed and accuracy is unsurpassed,” claimed marketing material Clearview AI gave the Atlanta Police Department. “Clearview puts the world’s most advanced facial-recognition technology and largest image database into their hands, allowing them to turn a photograph into a solid lead in an instant.”

In those materials, Clearview claimed that it could accurately find a match 98.6% of the time in a test of 1 million faces. A chart compared Clearview’s supposed score to an 83.3% accuracy rate from Tencent and 70.4% from Google.

Atlanta Police Department

But the publicly available results from the University of Washington’s MegaFace test — a widely used but criticized facial recognition benchmark — do not show Clearview, though there are listings for Tencent and Google algorithms. When asked, Ton-That did not say if the company ever submitted its results to a third party, only noting that the company had reached an even higher accuracy rate of 99.6% while testing internally. He did not provide evidence. A MegaFace representative told BuzzFeed News that it’s possible for a company like Clearview to download its dataset to test its software without submitting its results for verification. They added that Clearview AI’s accuracy metric has not been validated by MegaFace.

As of Monday, Clearview’s website had a new FAQ section that stated, “An independent panel of experts rated Clearview 100% accurate across all demographic groups according to the ACLU's facial recognition accuracy methodology.” Ton-That declined to provide details to BuzzFeed News, only noting that it included “a top AI expert” and “a former Democratic NY state judge.”

Clare Garvie, a senior associate at Georgetown Law’s Center on Privacy and Technology, told BuzzFeed News it was unclear whether Clearview could do what it says it could. “We have no idea how good it is,” Garvie said. “The idea that all information, all people’s faces online are currently tagged with their own identity — it’s a bit laughable.”

Garvie told BuzzFeed News that there’s also no single way to measure the so-called accuracy of facial recognition technology. Accuracy, in facial recognition, is generally measured as a combination of the correct-match rate, reject rate, non-match rate, false-match rate, and the ability to detect the face in the first instance. “Whenever a company just lists one accuracy metric, that is necessarily an incomplete view of the accuracy of their system,” Garvie said.
“Depending on what the system is designed to do, that may have little or no bearing on the actual accuracy of the system and operation.”
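Garvie’s point, that a single headline number conceals very different failure modes, can be illustrated with a toy calculation. The figures below are invented purely for illustration and have nothing to do with Clearview’s actual system or its claimed benchmarks:

```python
# Hypothetical illustration: two face-matching systems with the same
# headline "accuracy" can differ sharply on the separate rates experts
# track. All numbers here are made up for the example.

def rates(true_matches, missed_matches, false_matches, correct_rejects):
    """Break one set of outcomes into the distinct rates Garvie describes."""
    total = true_matches + missed_matches + false_matches + correct_rejects
    return {
        # The single number a vendor might advertise.
        "headline_accuracy": (true_matches + correct_rejects) / total,
        # Share of genuine matches the system actually found.
        "correct_match_rate": true_matches / (true_matches + missed_matches),
        # Share of non-matches wrongly flagged as a match.
        "false_match_rate": false_matches / (false_matches + correct_rejects),
    }

# System A: cautious tuning, few false matches but more misses.
a = rates(true_matches=900, missed_matches=100,
          false_matches=14, correct_rejects=986)

# System B: aggressive tuning, identical headline accuracy,
# but five times as many false matches.
b = rates(true_matches=956, missed_matches=44,
          false_matches=70, correct_rejects=930)

print(a)  # headline_accuracy: 0.943, false_match_rate: 0.014
print(b)  # headline_accuracy: 0.943, false_match_rate: 0.07
```

Both hypothetical systems post the same 94.3% headline figure, yet the second produces five times the false matches, the kind of error that, as O’Sullivan noted, can mean someone goes to jail.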

Clearview in the wild

Despite these concerns, Clearview is being deployed in law enforcement investigations. Marketing materials that BuzzFeed News obtained from the Bradenton, Florida, police department show that the software has been used to target sex workers. It’s also been used to identify suspects from group photos, LinkedIn images, and bank security camera footage.

One CrimeDex email said that Clearview was used in the arrest of an alleged pimp, who employed a sex worker advertising “sexual services for prostitution online.” According to the email, Clearview ran an image of the woman’s ad through its software and found her Venmo and Instagram accounts. Through her Instagram handle, police linked her to the alleged procurer on social media, finding his mugshot from a previous arrest.

Obtained by BuzzFeed News Examples from Clearview's marketing materials.

The end of the email included a link for a trial. “Clearview is available to all law enforcement officers to trial for free with no strings attached,” the email said. “Just click the link below.”

Emails sent through CrimeDex have also used Clearview AI to alert law enforcement to possible suspects in ongoing criminal investigations. One email from November shows a photo of a man captured by a security camera at a bank, and a mugshot that CrimeDex claimed was obtained by running the image through Clearview. The email included the images of the suspect, who allegedly cashed a fraudulent check at a bank, as well as his name, address, height, and other personal information. “We ran the images through Clearview and found a possible suspect,” the email read, before encouraging officers to keep an eye out and to contact an investigator at SunTrust Bank for more information.

When asked about this case, Ton-That said in an email, “We do not discuss our cases with the public.” The SunTrust investigator did not return calls from BuzzFeed News requesting comment. When asked about private company relationships, Ton-That, who said the company pays for marketing on CrimeDex, noted that Clearview has “a handful of private companies who use it for security purposes.”

Obtained by BuzzFeed News Marketing materials from Clearview