Law enforcement and security forces around the world, along with operators of airports and commercial buildings, are racing to claim bragging rights for deploying facial recognition technology to spot bad guys and confirm IDs in all kinds of situations. From the FBI to ICE to several big-city PDs, agents have been caught trawling through driver's-license files with software designed to identify illegal immigrants, wanted criminals, and other miscreants.

It’s another huge and successful con, perpetrated by marketers of cameras and software on an endlessly gullible audience who want desperately to believe that artificial intelligence is not artificial, that machine learning is real learning, that robots can do everything humans can do, only better. The inconvenient truth is that facial recognition does not work.

London’s Metropolitan Police have been conducting trials of facial recognition software to see if it can identify people on their watch list in shopping-center crowds. In six of the trials studied by the University of Essex, only 20% of the people flagged by the software could be confirmed to be the people being sought. (The researchers also discovered that the watch list itself was seriously flawed by outdated information, but that was an old-fashioned screwup, not involving artificial intelligence.) The researchers’ hair caught fire, and they proclaimed that the 80% failure rate meant that all use of the facial recognition software was illegal and must stop immediately. The Metropolitan Police declared themselves quite satisfied with the trial results and carried on.

South Wales Police admitted that in ten months of use, their facial recognition software had flagged 2,685 individuals as suspects. Of those, 2,451 were false alarms. An advocacy group called Big Brother Watch found the new police toy to be what it called “staggeringly inaccurate.” South Wales Police said they’re getting better at it, and are slogging on.
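The arithmetic behind “staggeringly inaccurate” is worth spelling out. A minimal sketch, using the South Wales counts quoted above (the counts are from the text; the variable names and the script itself are mine):

```python
# Back-of-the-envelope check of the South Wales figures cited above.
flagged = 2685        # individuals flagged as suspects by the software
false_alarms = 2451   # of those, flags that turned out to be wrong

false_discovery_rate = false_alarms / flagged
print(f"{false_discovery_rate:.1%} of flags were false alarms")
```

That works out to roughly nine out of every ten flags pointing at an innocent person, in line with the Essex finding that only 20% of the Met’s matches could be confirmed.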

Despite its sorry record, “facial recognition” has joined “artificial intelligence,” “machine learning,” and “driverless cars” in the pantheon of terms guaranteed to induce hysteria among “wealth managers,” “job creators,” “opinion leaders” and other oxymoronic denizens of the upper one percent.

The technology’s promoters have proved themselves pathetically unable to come up with any products that are useful and that people would want to buy without much persuasion. They’ve been fiddling with smartphones and self-driving cars and the Internet of Things and smartwatches and robot waiters. But the only markets responding to their blandishments — as buyers, now, setting aside the investors and promoters — are governments hard at work extending their military power externally and their police powers internally.

And the thing about the purveyors of stuff to the military, or to the militarized police: It doesn’t have to work. Check out the F-35 fighter jet that can’t fight, the aircraft carrier Gerald R. Ford that can’t carry aircraft, or the ultra-modern destroyer USS Zumwalt, which may not be able to stay afloat in following seas. In financial terms, compared to these gargantuan quagmires, facial recognition is a minor blip, yet it perfectly illustrates how fanciful promises about what a thing might, one day, accomplish can and do totally obscure the fact that the thing does not work. At all.

It’s not hard to find the upside when weapons of war don’t work, because they can’t kill people if they don’t work. But what is the upside of deploying a technology that is already causing hundreds and hundreds of people to be snatched out of shopping centers and off the streets and denied access to airplanes because they vaguely resemble some bad actor in a blurry photo?

This search for an upside is going to be really hard. Sounds like a job for artificial intelligence.