As governments around the world plan for their AI-powered futures, the UK is preparing to take on a somewhat scholarly and moral mantle. In a report published today by the House of Lords, which will be used to guide future government policy, a committee recommended that the UK “forge a distinctive role for itself as a pioneer in ethical AI.”

Doing so would allow the UK to play to its “particular blend of national assets,” write the report’s authors, and guide global development in the field. These assets include leading universities, a thriving legal industry, and “world-respected institutions such as the BBC.” The report suggests that the government sponsor more basic research into AI to develop its role, and convene a global summit in London next year to create a “common framework for the ethical development and deployment of artificial intelligence systems.”

The UK can’t outspend AI leaders like China and America

The recommendations are ambitious, but essentially pragmatic. The report’s authors are quick to point out that when it comes to funding research and generating international tech companies, the UK just can’t compete with larger nations. “Given the disparities in available resources, the UK is unlikely to be able to rival the scale of investments made in the United States and China,” write the authors.

According to figures in the report produced by Goldman Sachs, between 2012 and 2016 the UK invested around $850 million in AI, making it the third-highest investor of any country. But this pales in comparison with the $2.6 billion invested by China over that same period and the approximately $18.2 billion invested by the US. The report says the UK should compare itself with nations like Germany and Canada, rather than these superpowers. But a more revealing comparison — not mentioned by the authors — might be France, which last month announced nearly €1.5 billion ($1.8 billion) in new investment for AI by 2022.

Imagining the UK’s future in AI is only one part of the document, though, and the full report (which makes for an accessible and enjoyable read) takes in a number of other important topics. These include threats to employment and the need to fund training schemes for adults who lose their jobs to automation; the need for a new approach to data, which gives individuals greater control; and challenges posed by biased algorithms in society. “The prejudices of the past must not be unwittingly built into automated systems,” says the report’s summary.

The authors of the report also draft what they call an “AI Code,” which they say could be adopted nationally and even internationally. It is one of many such codes created by governments and private institutions, and it includes five basic principles:

Artificial intelligence should be developed for the common good and benefit of humanity

Artificial intelligence should operate on principles of intelligibility and fairness

Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families, or communities

All citizens should have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside artificial intelligence

The autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence

“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these,” said the chairman of the committee that produced the report, Lord Clement-Jones. “An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
