A new report on Wednesday (27 November) warns EU institutions and member states over new facial recognition technologies, saying collecting facial images of individuals without their consent or chance to refuse "can have a negative impact on people's dignity".

The report from the European Agency for Fundamental Rights (FRA) points out that the risk of errors and data leakages raises fundamental rights concerns related to privacy, human dignity, personal data protection or non-discrimination.


For example, the use of facial recognition technologies during demonstrations might create "a chilling effect", preventing people from exercising their freedom of assembly, expression or association, said Diego Naranjo, head of policy at NGO European Digital Rights (EDRi).

"Until the necessity and proportionality of the use of these technologies for "security" purposes are proven, governments must refrain from using them," he added.

However, law-enforcement authorities in a growing number of European countries have been testing these technologies in the last few years.

The UK has been a pioneer in using facial-recognition technologies to identify people in real time with street cameras.

Earlier this year, the Swedish data protection authority, Datainspektionen, authorised police to use facial recognition to help identify criminals.

In Hungary, a project called Szitakötő (dragonfly) plans to deploy 35,000 cameras with facial recognition capabilities in Budapest, and across the country.

But, according to the senior legal officer at the European Consumer Organisation (BEUC), David Martin, "consumers should be told when companies or governments use facial recognition" and should have the chance to say no.

Facial recognition in the EU

The international police agency Interpol has been using this technology since 2016.

Yet, facial-recognition systems are expected to be deployed on a large scale in European technical systems for asylum, migration and security purposes - for example, when people apply for a visa, cross a border or request asylum, the FRA report states.

The report identifies six major IT systems used by the EU, five of which are set to process facial images for migration and security purposes, including the Schengen Information System and the Entry/Exit System.

The accuracy of this technology depends mainly on the quality of the data used to create the software, and on the quality of the data processed when it is deployed, the report states.

However, the risk that errors may lead to discrimination against certain minorities raises concerns among civil society.

"It could be the case that facial recognition technology dramatically increases the number of false positives, for example, people arrested although they have not committed a crime," said Nicolas Kayser-Bril from NGO AlgorithmWatch, who believes that there is little evidence that facial recognition helps the fight against crime.

Facial images constitute "biometric data", EU law states, as they can be used to identify individuals.

As a result, video surveillance carried out by law-enforcement authorities for the prevention, investigation, detection and prosecution of criminal offences is governed by EU data protection rules - the Law Enforcement Directive, which sits alongside the General Data Protection Regulation (GDPR).

However, exercising many data protection rights requires that "the person is aware that their personal data are stored there", the FRA report points out.

Call for legislation

In June, the EU's high-level expert group on Artificial Intelligence (AI) put forward a set of recommendations and ethical principles for the use of these technologies, including facial recognition - guidance that is currently "being tested", a commission spokesperson said earlier this year.

One of the critical concerns listed in the document is the "automatic identification [of individuals]" because "it raises strong concerns of both a legal and ethical nature".

"Individuals should not be subject to unjustified personal, physical or mental tracking or identification, profiling and nudging through AI-powered methods of biometric recognition such as: emotional tracking, empathic media, DNA, iris, and behavioural identification, affect recognition, voice, and facial recognition and the recognition of micro-expressions," the expert group said.

The application of facial recognition technologies "must be clearly warranted in existing law", and the legal basis for such activity should be the consent of the data subject, the group added.

Similarly, the FRA report points out that "a clear legal framework must regulate the deployment and use of facial recognition technologies", and that monitoring by independent supervisory bodies is also needed.

The EU commission president-elect, Ursula von der Leyen, has said that she will put forward legislation for a coordinated European approach to the human and ethical implications of artificial intelligence within the first 100 days of her mandate.

However, the NGO AlgorithmWatch believes that the commission's fast-track approach to AI legislation will likely leave many aspects of automated decision-making to one side.