Soon the Brussels scene will heat up with yet another debate on regulation versus innovation.

This time the subject will be regulation of artificial intelligence.

As the first harbinger, in April the European Commission’s High Level Group on AI published Ethics Guidelines for Trustworthy AI.

Then last month the incoming European Commission president Ursula von der Leyen committed to putting forward AI legislation in the first 100 days of her mandate.

Last week the German Data Ethics Commission published in 240 pages its detailed views on AI regulation.

Two days ago (27 October), in parallel with a panel at the World Health Summit in Berlin, EIT Health published the results of a survey on the High Level Group’s AI & ethics guidelines, in a report, “AI and Ethics in the Health Innovation Community”.

The survey, carried out mainly amongst innovation projects and startups that are part of the European Institute of Innovation and Technology’s health community, reflects the views of those who are at the sharp end of working to deliver the benefits that AI so obviously promises for Europe’s hard-pressed healthcare systems.

What is at stake?

The EIT Health survey, even though small in size, with 82 respondents, has interesting findings. It addresses the whole scope of the AI & ethics guidelines, from human oversight and cybersecurity, to privacy, transparency and accountability.

While for 80 per cent of respondents this was their first exposure to the AI & ethics guidelines, they are clearly well-versed in both AI and ethics.

Some 60 per cent of the innovators expect their AI-based health innovations will be subject to regulation. Most respondents believe that AI will support rather than replace medical professionals and that human oversight will remain essential.

Transparency of AI is seen as a must, yet also a challenge. At the same time many appear to have confidence in current medical accountability practices.

A prominent issue in the World Health Summit debate was how to avoid a potential bias in input data, which might lead to discrimination in healthcare.

Regulation versus innovation

But what of the inevitable tension between AI regulation on the one hand, and innovation on the other? Will AI regulation have a positive, negative or neutral impact on innovation?

Although this question was not posed directly in the survey, comments of respondents and EIT Health case studies provide some insights. The main message is that we need a nuanced and flexible approach. There must be nuance to respond to the diversity of AI in health, and flexibility to respond to evolving reality.

In this respect we can learn from the past debates about the General Data Protection Regulation (GDPR), which also pitted innovation against regulation.

At the time, a number of tech companies were adamant that GDPR would be really bad for innovation. Some of those same companies now embrace the GDPR as a really good thing.

On the opposite side of the argument were promoters of regulation who emphasised fundamental privacy rights, yet said little about innovation. These positions did not always come with solid evidence either.

Responsible policy making needs evidence and science-based analysis, even while recognising that science has its limitations. For example, most of us did not anticipate the threats to state-level sovereignty and strategic autonomy that now influence AI policy across the globe.

I am pleasantly surprised about the way members of the health innovation community are responding to the requirement for ethical AI. Their nuanced views are based on experience. Some have used the AI and ethics guidance as a trustworthy reference that helps them to focus on what they are best at - health innovation.

One case in point is the application of AI to clinical decision-making in the use of antibiotics or the choice of malaria treatment.

We know that providing certainty, through clear guidelines, certification, standardisation and regulation, can be an enabler of innovation.

There are some who view the current guidance and rules as a rigid, blanket protection of the individual while the collective suffers. They fear the rules create a barrier to the sort of AI-based innovation that is urgently needed to address large-scale health challenges: innovation that requires collecting data from very many individuals, and that leads to algorithms which evidently work but cannot readily be made transparent today. They make a plea for more flexible rules in certain areas of health innovation.

Yet others admit being confused about the balance between innovation and regulation. They need more examples.

Looking at these various shades of opinion, it is clear that a nuanced debate makes a lot of sense. If the debate is properly run, we can weigh these different views and reach a deeper understanding. In Europe especially, we can bring policy approaches together to meet the needs of protection and innovation at the same time. This still implies that we should think hard about how regulation can leave room for experimentation, learning and skills development; that is, how to enable innovation in combination with rule-making.

When it comes to AI and ethics, the world of health can be a leader rather than a follower. Health and ageing innovators can contribute much to ensure that policy meets practice in health and care, an area of shared interest for all citizens in Europe - and of the world.

Paul Timmers is Chief Adviser of EIT Health