Artificial Intelligence technologies carrying a high risk of abuse that could erode fundamental rights will be subject to a series of new requirements, the European Commission announced on Wednesday (19 February).

As part of the executive’s White Paper on AI, a series of ‘high-risk’ technologies have been earmarked for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’

Critical sectors include healthcare, transport, policing, recruitment, and the legal system, while technologies of critical use are those that carry a risk of death, damage or injury, or that have legal ramifications.

Artificial Intelligence technologies coming under those two categories will be obliged to abide by strict rules, which could include compliance tests and controls, the Commission said on Wednesday.

Sanctions could be imposed should certain technologies fail to meet such requirements. Such ‘high-risk’ technologies should also come “under human control,” according to Commission documents. For areas deemed not to be high-risk, one option could be to introduce a voluntary labelling scheme, which would highlight the trustworthiness of an AI product by virtue of the fact that it meets “certain objective and standardised EU-wide benchmarks.”

However, the Commission stopped short of identifying technology manufactured outside the EU, in certain authoritarian regimes, as necessarily ‘high-risk.’

Pressed on this point by EURACTIV on Wednesday, Thierry Breton, the Commissioner for the Internal Market, said manufacturers could be forced to “retrain algorithms locally in Europe with European data.”

“We could be ready to do this if we believe it is appropriate for our needs and our security,” Breton added.

Another area in which the Commission will seek to provide greater oversight is the use of potentially biased data sets that may negatively impact demographic minorities.

In this field, the executive has outlined plans to ensure that unbiased data sets are used in Artificial Intelligence technologies, avoiding discrimination against under-represented populations in algorithmic processes.

More generally, Commission President Ursula von der Leyen praised Europe’s efforts in the field of Artificial Intelligence thus far, saying that such technologies can be of vital use in a number of sectors, including healthcare, agriculture and energy, and can also help Europe to meet its sustainability goals.

Conformity Assessment

However, she also noted the importance of ensuring that AI technologies meet certain standards in order to be of use to European citizens. “High-risk AI technologies must be tested and certified before they reach the market,” von der Leyen said.

To this end, the Commission will establish an ‘objective, prior conformity assessment’ in order to ensure that AI systems are technically robust, accurate and trustworthy.

“Such systems need to be developed in a responsible manner and with an ex-ante due and proper consideration of the risks that they may generate. Their development and functioning must be such to ensure that AI systems behave reliably as intended,” stated the White Paper, which is open for public consultation until 19 May.

Such a conformity assessment could include procedures for testing, inspection and certification. Importantly, the Commission also states that such an assessment “could include checks of the algorithms and of the data sets used in the development phase.”

The EU’s Vice-President for Digital, Margrethe Vestager, said on Wednesday that the Commission will assess in future whether this approach is effective.

Facial Recognition

Elsewhere in the Artificial Intelligence White Paper, the Commission held back on introducing strict measures against facial recognition technologies. A leaked version of the document had previously floated the idea of putting forward a moratorium on facial recognition software.

However, the executive now plans to “launch an EU-wide debate on the use of remote biometric identification,” of which facial recognition technologies are a part.

On Wednesday, Vestager noted that facial recognition technologies “in some cases are harmless,” but said a wider consultation is required to identify the extent to which remote biometric identification should be permitted as part of AI technologies.

The Commission also highlighted the fact that under current EU data protection rules, the processing of biometric data for the purpose of identifying individuals is prohibited, unless specific conditions regarding national security or the public interest are met.

Article 6 of the EU’s General Data Protection Regulation outlines the conditions under which personal data can be lawfully processed, one such basis being that the data subject has given consent. Article 4(14) of the legislation defines biometric data.

In recent months, EU member states have been charting future plans in the field of facial recognition technologies.

Germany wishes to roll out automatic facial recognition at 134 railway stations and 14 airports. France also has plans to establish a legal framework permitting video surveillance systems to be embedded with facial recognition technologies.

In response to the Commission’s announcement on Wednesday, rights groups called for more stringent measures against facial recognition technologies to be enacted in the future.

“It is of utmost importance and urgency that the EU prevents the deployment of mass surveillance and identification technologies without fully understanding their impact on people and their rights, and without ensuring that these systems are fully compliant with data protection and privacy law as well as all other fundamental rights,” said Diego Naranjo, head of policy at European Digital Rights (EDRi).

[Edited by Zoran Radosavljevic]