Europe plans to strictly regulate high-risk AI technology

The European Commission today unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from more freewheeling approaches to the technology in the United States and China.

The commission will draft new laws—including a ban on “black box” AI systems that humans can’t interpret—to govern high-risk uses of the technology, such as in medical devices and self-driving cars. Although the regulations would be broader and stricter than any previous EU rules, European Commission President Ursula von der Leyen said at a press conference today announcing the plan that the goal is to promote “trust, not fear.” The plan also includes measures to update the European Union’s 2018 AI strategy and pump billions into R&D over the next decade.

The proposals are not final: Over the next 12 weeks, experts, lobby groups, and the public can weigh in on the plan before the work of drafting concrete laws begins in earnest. Any final regulation will need to be approved by the European Parliament and national governments, which is unlikely to happen this year.

Europe is taking a more cautious approach to AI than the United States and China, where policymakers are reluctant to impose restrictions in their race for AI supremacy. But EU officials hope regulation will help Europe compete by winning consumers’ trust, thereby driving wider adoption of AI.

“The EU tries to exercise leadership in what they’re best at, which is a very solid and comprehensive regulatory framework,” says Andrea Renda, a member of the commission’s independent advisory group on AI, and an AI policy researcher at the Centre for European Policy Studies. Eleonore Pauwels, an AI ethics researcher at the Global Center on Cooperative Security, says the regulations are a good idea. She says there could be public “backlash” if policymakers don’t find alternatives to what she calls “surveillance capitalism” in the United States and the “digital dictatorship” being built in China.

The commission wants binding rules for “high-risk” uses of AI in sectors such as health care, transport, and criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

The law would also establish who is responsible for an AI system’s actions—whether the company using it or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The commission says it will also “launch a broad European debate” on facial recognition systems, a form of AI that can identify people in crowds without their consent. Although EU countries such as Germany have announced plans to deploy these systems, officials say they often violate EU privacy laws, including special rules for police work.

Pauwels, a former commission official, says the AI industry has so far demonstrated a “pervasive lack of normative vision.” But Vestager points out that 350 businesses have expressed a willingness to comply with the ethical principles drawn up by the commission’s AI advisory group.

The new AI plan is not only about regulation. The commission will come up with an “action plan” for integrating AI into public services such as transport and health care, and will update its 2018 AI development strategy, which plowed €1.5 billion into research. The commission is calling for more R&D, including AI “excellence and testing centres” and a new industrial partnership for AI that could invest billions. Alongside its AI plan, the commission also outlined a separate strategy to promote data sharing, in part to support the development of AI.