With the UK free to diverge on regulations, we can forge our own path forward

The European Commission has just unveiled its artificial intelligence (AI) strategy, one that it hopes will turn Europe into the global leader via a regulatory framework “based on excellence and trust”. These are laudable aims; the only problem for Brussels is that its plans risk having the opposite effect.

AI’s use is booming. Businesses are increasingly employing this technology, which allows computers to perform tasks that ordinarily require human intelligence, to automate work and to make more informed decisions. More than ten thousand AI companies have been founded since 2015, with over $37 billion of private investment. A global arms race has already begun as countries battle to lead this technology revolution.

The EU’s proposals encourage many best practices. AI should learn from good data. Decisions should not be made by inexplicable “black boxes” but should be easily understood. There should be human oversight of decisions. Companies should document how they developed their AI. Citizens should know when they are interacting with an AI. These are sensible expectations that help ensure the technology performs effectively and ethically.

The European Commission’s broader plans are more confused. The EU is hinting at much tougher regulations, which risk unintended consequences: hindering its own progress towards AI leadership and limiting people’s ability to benefit from the technology.