For one, Clearview was searching nearly 3 billion public photos that included the politicians whose faces were part of the test, not arrest photos like those the ACLU used. The test also didn't account for what would happen if someone wasn't in the database: would the tool generate false positives, and would those false hits skew along demographic lines? On top of this, Clearview had the luxury of the clear, sometimes formal photos that often appear online. Its tool is supposed to be used in the real world, where lighting and image quality are frequently suboptimal, and it's unclear how well the facial recognition fares with grainy security camera footage.

The Surveillance Technology Oversight Project's Liz O'Sullivan also doubted Clearview's insistence that the accuracy applied to "all demographic groups," noting that 834 politicians wouldn't be representative of every ethnicity. Moreover, many of the people on the independent study panel lacked direct expertise in facial recognition, although one was the former head of Samsung's North American AI research.

Not surprisingly, Clearview chief Hoan Ton-That maintained that the results were acceptable. He insisted that Clearview used the same methods as the ACLU, and that its test posed a "higher level of difficulty" because it used faces of politicians from California and Texas. He also argued that the test had looked at "every demographic group." Ton-That didn't really address the ACLU's criticisms, though, and Clearview eventually responded to an ACLU complaint by removing the group's name from the site. The company's accuracy claims haven't been independently re-checked, then, and that's concerning when police across the US are relying on the technology to pinpoint suspects.