The argument is that facial recognition helps tackle crime. But at what cost? It was too high for San Francisco. Last week it became the first city in the US to ban the use of facial recognition technology by police. Privacy was a factor. If you are on a biometric database, facial recognition operates as a form of ID check, but without you even realising it. The old idea that it is the state that must identify itself to free citizens, not the other way round, is turned on its head.

That is, if the cameras correctly identify the individual at all, because their accuracy is currently woeful. This, it seems, is a particular problem for ethnic minorities and women. Perhaps this wouldn’t be such an issue if our society didn’t have such a slavish belief in the efficacy of machines, and if figures of authority had some scepticism about their shiny new toys. But a “computer says no” mentality pervades our institutions.

And how is this information about our vital characteristics – the contours of our face, the look of our eyes – being stored? The police already hold millions of searchable images. Is the intention for all citizens eventually to be “searchable”? And can we trust the authorities to keep this information secure, or to ensure it is not misused? There is currently no UK law regulating the use of facial recognition cameras. If one is introduced, it should follow San Francisco’s lead and ban them.

Perhaps you imagine that the only people who will suffer under the unrelenting use of this technology will be criminals. But even for innocent people, it will be hard to escape the feeling that we are being constantly watched, our every move scrutinised. Life will be like driving through one of those awful average-speed check zones on motorways, except without the happy knowledge that, at some point, you will return to the freedom of the open road.