During last summer’s Champions League Final in Cardiff, Wales, South Wales Police began a facial recognition pilot program designed to check event-goers against a database of 500,000 images of persons of interest. Almost a year later, The Guardian reports that the pilot yielded 2,470 potential matches, of which 2,297 were found to be “false positives.”

In a records request (via Wired), the South Wales Police revealed that at events such as the 2017 Champions League Final, the Automated Facial Recognition (AFR) ‘Locate’ system flagged 2,470 people — with only 173 positive matches. Figures from the report reveal that of the 2,685 alerts across 15 events, only 234 have been “true positives,” with the other 2,451 being false positives. But in their press release, the SWP note they’ve made 2,000 positive matches and have used that information to make 450 arrests in the past nine months. We’ve reached out to the SWP to ask about the discrepancy in numbers, and will update if we hear back.
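As a quick sanity check on those figures, the false positive rates implied by the records request can be computed directly. This is illustrative arithmetic only; the function name below is ours, not anything from the report:

```python
# Illustrative arithmetic on the figures from the records request.
# Not an official SWP calculation.

def false_positive_rate(true_positives: int, false_positives: int) -> float:
    """Share of all alerts that turned out to be incorrect matches."""
    total_alerts = true_positives + false_positives
    return false_positives / total_alerts

# 2017 Champions League Final: 173 true vs. 2,297 false positives
print(f"{false_positive_rate(173, 2297):.1%}")  # → 93.0%

# All 15 events combined: 234 true vs. 2,451 false positives
print(f"{false_positive_rate(234, 2451):.1%}")  # → 91.3%
```

In other words, roughly nine out of ten alerts raised by the system were wrong, which is the figure the privacy-group criticism below centers on.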

AFR works by taking live feeds from CCTV cameras mounted at fixed locations or on vehicles, and matching faces against a database of 500,000 images. When the system flags someone, an operator will either disregard the alert or dispatch officers to speak with the individual in question. “If an incorrect match has been made,” the SWP explain, “officers will explain to the individual what has happened and invite them to see the equipment along with providing them with a Fair Processing Notice.” The force also says that no arrests have been made on the basis of a false positive.

They explain that no facial recognition program is 100 percent accurate, and that technical issues “will continue to be a common problem for the foreseeable future.” The SWP also note that a number of the false positives were the result of poor-quality images provided by other agencies.

Despite that high number of false positive results, the SWP say the pilot has been a “resounding success,” and that the “overall effectiveness of facial recognition has been high.” But while the pilot has yielded some arrests (the SWP also note that they have been cognizant of the privacy risks), Wired cites privacy groups such as Big Brother Watch, which have criticized the technology as a “dangerously inaccurate policing tool” and say they will launch a campaign against it in parliament next month.