Regulating AI: what to expect from the EU

‘If we don’t change direction soon, we’ll end up where we’re going.’ (quoted in Life 3.0 by Max Tegmark)

Artificial Intelligence (AI) has been hailed as the harbinger of deep societal change around the world. Indeed, it is hard to think of a sector of human activity that AI technology will not permeate, much like the internet before it. It is on the basis of that premise, widely shared among experts, and of the rapid growth of global investment in the technology, that calls for regulation have multiplied across the private and public sectors.

As expected, the European Union (EU) is one of the first movers in regulating AI, though it still lags far behind the US and China in terms of investment. More precisely, it plans, inter alia, to propose an ethical framework for research in the field; incentivize careful investment; tackle the shortage of AI expertise and talent; develop new market-monitoring tools; and even discuss granting the status of ‘electronic personalities’ to ‘self-taught robots’.

The EU, regarded by legal specialists as the regulatory powerhouse of the world, has a long history of setting precedents in regulating new technologies and thereby laying the legal groundwork for other governments to build on in their national approaches. The recent General Data Protection Regulation (GDPR), for example, codified clear expectations for companies operating in the EU and offered mechanisms for citizens to claim their rights vis-à-vis the companies handling the data they generate. As such, this massive body of regulation prompted many governments, China among them, to revisit their own approaches to the internet and data protection.

What follows is a list of nascent initiatives, discussions, and propositions from the EU that might very well be heralding global policies on AI.

Initializing a Regional Strategy

This is a good place to pause and remind everyone that regulating does not always mean impeding. In fact, the European Political Strategy Centre (EPSC), the European Commission’s (EC) in-house think tank, released a report earlier this year on regulating AI that focuses mainly on suggestions for accelerating the development of AI technologies.

The report aims to advise EU policymakers, who seemed caught off guard by the rapid growth of AI-powered technology. Four core themes run through the report:

Theme 1: Support

Feed AI systems by allowing a free flow of non-personal data in the region. Increase high-speed connectivity across the Union. Develop AI hubs in EU member countries and fund diverse research environments (in part to limit all-too-common biased outcomes). Create a platform for private and public input to flow directly into policymakers’ reports.

Theme 2: Educate

Tackle digital illiteracy (37% of the EU workforce lacks basic digital skills). Fund higher education in AI-related fields and create opportunities for work in the EU. Run awareness campaigns on the dangers of optimized data (addiction, online harassment, etc.). Urge platforms to detect ‘vulnerable users’ and limit their self-harming behaviors.

Theme 3: Modernise/Enforce

Create sophisticated tools to monitor the regional evolution of AI technology; policy tools from the ‘analog age’ fail to track all market distortions, algorithmic discrimination, price coordination, and even mergers with potentially negative future consequences. Permit AI-powered decisions taken by ‘augmented public officials’. Recruit AI, machine learning, and data analytics professionals into public offices. Enforce quality by penalizing those that stray into developing risky AI technology.

Theme 4: Steer

Highlight the quality differences, created by regulation, between AI technologies from the EU and those from the rest of the world. Set or suggest universal standards for ethical development and make new technologies ‘lawful by design’. Suggest a new status for ‘self-taught robots’. Keep humans, as much as possible, as the final decision makers in any AI-powered decision process.

During the summer of 2018, the EU set forth its plans. Although the previous report and many members have called for a European Agency for Robotics and Artificial Intelligence, the EU took a slower approach, first assembling a team of 52 world-leading experts from academia, civil society, and industry to work closely with the EC and support the ‘implementation of the European strategy on AI’.

The so-called High-Level Expert Group on AI (AI HLEG) will also draft the first AI ethics guidelines, which will focus on:

“fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection, and non-discrimination.”

The group began its activities with expert workshops on 20 September. Nevertheless, nothing substantive has yet been shared by the closed group.

EU AI Alliance

Finally, the EU has also picked up on the recommendation to create a platform enhancing cooperation between ‘businesses, consumer organizations, trade unions, and civil society bodies’. Dubbed the European AI Alliance (EAIA), the platform serves as a forum for the exchange of ideas and recommendations, and for collective growth. It will also offer access to an ‘open library’ and some ‘official documents on AI’. Much like our very own Decentralised Artificial Intelligence Alliance (DAIA), the EU aims to prioritize a multi-actor approach to the development of AI technology:

“By creating an interconnected system of machines and adopting AI-powered technologies, European companies would obtain an ‘AI-multiplier’ effect.”

Sound familiar? Here is our own approach:

“…bring together companies, foundations, and labs operating at the intersection of AI and blockchain technology. The alliance will provide a medium for member organizations to coordinate standards, protocols, interfaces, and other technical matters, along with organizing community events, providing networking opportunities, and giving legal and management guidance.”

The EAIA, however, remains vague in its objectives, and membership is granted at the discretion of the AI HLEG. What we know about its activity at the moment boils down to this: the Alliance will organize its first public conference in 2019.

The EPSC report thus gives us a clear idea of the direction the EU wishes to take vis-à-vis the flourishing field of AI. As described, certain measures have already been adopted, and the rest of the report, along with the activity of the AI HLEG, may be the best indication we have at the moment of future policies. It is also important to keep in mind that national initiatives from EU members exist, mainly from France and Germany, and provide hints about the upcoming regulatory environment in the European region.

How can you get involved?

SingularityNET has a passionate and talented community, which you can connect with by visiting our Community Forum. Feel free to say hello and introduce yourself here. Exchange ideas, chat with others, and share your knowledge with us and other community members on our forum.

We are proud of our developers and researchers, who are actively publishing their research for the benefit of the community; you can read the research here.

For any additional information, please refer to our roadmaps and subscribe to our newsletter to stay informed about all of our developments.