People walk past a CCTV camera at King's Cross on August 16, 2019 in London. CCTV cameras using facial recognition are being investigated by the UK's data protection watchdog.

It almost comes naturally to many smartphone users today. You can just take out your iPhone — or Android equivalent — and hold it up to your face to unlock the device. But the technology behind that has become increasingly controversial of late, with business executives and regulators alike calling for oversight. Microsoft CEO Satya Nadella earlier this year said the technology warranted "any regulation that helps the marketplace not be a race to the bottom."

While people are far more open to the idea of registering their portrait with Apple's Face ID, the idea of being spotted by an artificial intelligence-powered camera on the street has proven much more unnerving. This is the difference, tech executives and experts say, between consensual identity verification and non-consensual surveillance.

The use of facial recognition technology in London's King's Cross area was met with much backlash earlier this month, drawing the attention of the U.K. data protection watchdog. It emerged that Argent, a property developer, had deployed the software in the space without people's knowledge. Argent was not immediately available for comment when contacted by CNBC.

Some are calling for a ban of so-called live facial recognition, where surveillance cameras equipped with the technology scan people in public places.

One of the biggest problems with face identification systems, independent researcher Stephanie Hare said, is that they involve biometric data — in other words, information about people's bodies. She thinks an outright ban on the technology should be one option on the table.

"It needs to be treated in the same way that your DNA would be," Hare told CNBC. "They're in the same category of powerful data. What you could do with face recognition in terms of identifying someone in real time makes it a surveillance technology."

And it's that issue of surveillance that has become a key concern for regulators.
Britain's Information Commissioner Elizabeth Denham said she would launch a probe into how the software was used in London, adding she was "deeply concerned about the growing use of facial recognition technology in public spaces" by both law enforcement and the private sector. The privacy regulator has also been investigating the use of facial recognition by police.

UK police trials

Some police forces in the U.K. have conducted trials of the technology, which is being promoted by the Home Office. London's Metropolitan Police ended its pilot program, which was aimed at identifying criminals, last month.

Researchers from the University of Essex found "significant flaws" with the Met's trial, adding that police deployment of live facial recognition technology "may be held unlawful if challenged before the courts."

South Wales Police, on the other hand, has gone ahead with an app that lets officers run a snapshot of a person through a database of suspects to find potential matches. That's despite a court case against the force brought by the campaign group Liberty.

Privacy campaigners at Big Brother Watch want the British parliament to step in. They think that lawmakers in the country should look to ban the technology from being used for monitoring people, rather than introduce regulation that sees it permitted under certain guidelines. Laws can take years to implement, and even then policies would vary across different regions.

"We're not asking parliament to regulate, we're asking parliament to immediately put a stop to it," Silkie Carlo, director of Big Brother Watch, told CNBC. "If anyone thinks it's feasible that live facial recognition for public surveillance is possible in a rights-respecting democracy, they'd have to make a pretty convincing argument."

Various police forces across the country have pushed back against using the technology. "That's not a positive thing," said Jason Tooley, chief revenue officer of biometric software maker Veridium. The worry for some is whether legislators take too heavy-handed an approach. "In terms of innovative technology, we want the police forces to be able to innovate to deliver better services," Tooley told CNBC. "What we've got to try to avoid here is that innovation is squashed or stopped."

Biometric data is already covered by the European Union's GDPR, or General Data Protection Regulation, a data privacy overhaul that was introduced by the bloc last year. The rules call on companies to obtain explicit consent from consumers on the use of their personal information. In Sweden, a local authority was fined under GDPR for trialing facial recognition on high-school students.

But recently it was reported that the EU is looking to tighten its laws around the use of facial recognition as part of an overhaul of how AI is regulated. Natasha Bertaud, deputy chief spokesperson for the European Commission, declined to comment on that report last week, but pointed to recommendations from a group of experts advising the EU executive body on its approach to AI. That group had suggested the EU consider the need for new regulation of biometric technologies like emotion tracking and facial recognition.

Tech firms 'ride the wave of public opinion'

So where do tech firms like Microsoft and Amazon sit in the regulatory debate swirling around facial recognition? Tech giants make "big claims about being on the side of privacy," but ultimately "ride the wave of where public opinion is," said Mike Beck, global head of threat analysis at cybersecurity firm Darktrace.

Amazon's computer vision platform Rekognition — that's the one that can now apparently detect fear — has in the past been used by police in the U.S. That hasn't always sat well with the company's own shareholders, who earlier this year heaped pressure on the tech giant to stop selling the facial identification software to law enforcement.

But the company has — like Microsoft — said it wants to at least see guidelines established to ensure the technology is used ethically. "New technology should not be banned or condemned because of its potential misuse," Michael Punke, vice president of global public policy for Amazon's cloud business, AWS, said in a blog post earlier this year.

Microsoft has repeatedly called on governments to regulate facial recognition, with the firm's president, Brad Smith, having previously said that 2019 should be the year for regulation. Google, meanwhile, has said it will not sell the technology "before working through important technology and policy questions."

Beck said that a ban on live facial recognition was "not the answer," adding regulation would need to address how biometric data is collected and handled by organizations. "Regulation is only part of the answer," he said. "Securing data when it is collected is as important as regulating the applications of the technology in the first place."

Meanwhile, Gus Tomlinson, head of strategy at identity verification firm GBG, said a clear regulatory framework could help consumers understand the benefits of the technology — one of the benefits cited by Amazon is that Rekognition has been used to prevent human trafficking and find missing children. Tomlinson told CNBC that policymakers should ensure live facial recognition is only used for "purposes where there is a real legitimate interest."

'Perfect tool of oppression'

One big problem with facial recognition is that it relies on machine-learning algorithms that are fed abundant volumes of data on people's faces in order to discriminate between one person and another. But that process can produce discriminatory results in its own right, as demonstrated by MIT researcher Joy Buolamwini, who published a paper showing such systems are less likely to accurately identify ethnic minorities and women than white men.

Combining that with law enforcement is problematic, critics say, as it could result in cases of mistaken identity and people being wrongly arrested. Facial recognition "has a track record of misidentifying people of color, women and kids," Hare said.

And even as the technology improves, it could become a "perfect tool of oppression," Carlo said, adding: "In extremis, you could live in a society where you have no chance of being anonymous."

The Chinese government uses the technology widely. China has millions of surveillance cameras, and almost all of its 1.4 billion citizens are included in a facial recognition database. Those government efforts have been criticized outside China, amid reports from human rights groups and others that the technology is used to track and monitor Uighur Muslims in the west of the country.
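To see why misidentification happens, it helps to know that facial recognition systems typically reduce each face image to a numeric "embedding" vector and declare a match when two vectors are similar enough, judged against a threshold set by the operator. The sketch below is purely illustrative — the tiny vectors, the function names and the threshold value are invented for this example and are not drawn from any system mentioned in this article — but it shows the core mechanic: where the threshold is set trades off missed matches against false matches, and false matches are the failure mode behind wrongful identifications.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(probe, enrolled, threshold=0.6):
    """Declare a match when similarity clears the operator-chosen threshold.

    Lowering the threshold catches more true matches but also raises the
    false-match rate -- the error that can lead to mistaken identity.
    """
    return cosine_similarity(probe, enrolled) >= threshold

# Toy 4-dimensional "embeddings"; real systems use hundreds of dimensions
# learned from large face datasets.
enrolled    = [0.9, 0.1, 0.3, 0.2]
same_person = [0.85, 0.15, 0.25, 0.2]  # same face, slightly different capture
stranger    = [0.1, 0.9, 0.1, 0.8]     # a different face entirely

print(is_match(same_person, enrolled))  # similar vectors -> True
print(is_match(stranger, enrolled))     # dissimilar vectors -> False
```

If the training data under-represents certain groups, their embeddings cluster more tightly, so distinct people land closer together in this vector space — which is one way the accuracy gaps Buolamwini documented translate into real-world false matches.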