Thank goodness for Canada’s privacy commissioners. They seem to be the only public authorities prepared to push back against the attempt to end the right to simply go about one’s business in public without being subject to official surveillance.

Certainly, we can’t trust the police on this front. Thanks to the Star’s Kate Allen and Wendy Gillis, we’ve learned that at least four Toronto-area forces have dipped their toes into this poisoned pool.

They’ve tested controversial facial recognition technology that, on the face of it, violates Canadian law and the right of ordinary people to basic privacy.

The police forces — in Toronto, Peel Region, Halton and Durham — all say they’ve stopped using the technology developed by the American company Clearview AI. As they should.

Nor, it seems, can we rely on our politicians. Neither the federal government nor the provinces have kept up with the new threats posed by technology to your privacy. They have lagged behind while the tech companies come up with increasingly sophisticated ways to make money off personal information.

Instead, it’s been left up to the privacy commissioners to take the lead in this case, and they deserve three cheers for stepping up.

Four commissioners — those in Ottawa, British Columbia, Alberta and Quebec — announced on Friday that they will conduct a joint investigation into Clearview AI, dubbed by the New York Times as “the secretive company that might end privacy as we know it.”

It was the Times that shone a bright light on Clearview in January, reporting that the little-known start-up had developed a revolutionary facial recognition app designed to let police forces and others identify people by matching their photos with billions of images scraped from Facebook, YouTube, and millions of other sites on the internet.

Some 600 police forces are already using the app, according to the company, although some cities, including San Francisco, have banned it.

The potential for misuse is obvious. Anyone with access to such a tool could instantly find out all sorts of personal information about a person simply from an image captured in public. Not even a name would be needed to figure out a person’s movements, friends, employment history, associations, and so on.

Even Google, according to the Times, decided years ago that this was one area it wouldn’t venture into because the technology could be used “in a very bad way.”

Appropriating that kind of personal information is also almost certainly illegal in Canada.

Michael Bryant, executive director of the Canadian Civil Liberties Association, explained in the Star this past week that there must be “due process and appropriate safeguards in place” before any kind of biometric information, such as DNA or fingerprints, is collected or stored. There must be “compliance with laws designed to recognize that biometrics are particularly personal, sensitive information.”

And what could be more personal than your own face? It seems hard to imagine that the privacy commissioners will not conclude that the kind of “facial fingerprinting” system developed by Clearview AI runs afoul of federal and provincial privacy laws, especially in the absence of any guarantees against misuse.

What’s needed is a comprehensive framework for the use of any such technology, and a pause in its use while those rules are developed.


Legislators are also going to have to do much better at keeping up with the constant innovations of the tech world, both for good and for ill. Canadian governments have been particularly slow to meet the challenges of Big Tech, and to update our laws to deal with new threats to privacy.

In that regard, they should make sure that privacy commissioners have broader powers to enforce the law, by levying substantial fines if necessary on companies that violate or disregard it. Without that, our watchdogs may bark loudly, but they can’t bite hard.
