Our teams working on technical safety, ethics, and public engagement aim to address these questions and more. We help anticipate short- and long-term risks, explore ways to prevent them, and find ways to address them if they arise.

We believe this approach also means ruling out the use of AI technology in certain fields. For example, we’ve signed public pledges against using our technologies for lethal autonomous weapons, alongside many others from the AI community.

These issues go well beyond any one organisation. Our ethics team works with many brilliant non-profits, academics, and other companies, and creates forums for the public to explore some of the toughest issues. Our safety team also collaborates with other leading research labs, including our colleagues at Google, OpenAI, and the Alan Turing Institute.

It’s also important that the people building AI reflect the broader society. We’re working with universities on scholarships for people from underrepresented backgrounds, and supporting community efforts such as Women in Machine Learning and the African Deep Learning Indaba.