Today, the advertising and technology sectors presented the world’s first ever Code of Practice on Disinformation. Brokered in Europe, and motivated by the European Commission’s Communication on Tackling Disinformation and the report of the High Level Expert Group on Fake News, the Code represents another step towards countering the spread of disinformation.

This initiative complements the work we’ve been doing at Mozilla to invest in technologies and tools, research and communities, to fight against information pollution and honour our commitment to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.

The Code is the result of intensive work within the advertising and online platform sectors, including Google, Facebook, Twitter, Mozilla, and EDiMA, as well as IAB Europe, the World Federation of Advertisers, and EACA, EASA, and AIM. These organisations comprised the Working Group, which worked on the code within the Multistakeholder Forum on Disinformation, a process established and shepherded by the European Commission.

Building on the approach outlined in the High Level Group’s Report, the Code addresses five key areas and outlines a set of commitments for each. These include:

Scrutiny of ad placements: to deploy policies and processes to disrupt advertising and monetisation incentives for purveyors of disinformation;

Political and issue-based advertising: to enable public disclosure of political ads, and to work towards a common understanding of “issue-based advertising” and how to address it;

Integrity of services: to put in place – and enforce – clear policies related to the misuse of automated bots;

Empowering consumers: to invest in products, technologies, and programs to help people identify information that may be false, to develop and implement trust indicators, and to support efforts to improve critical thinking and digital media literacy; and

Empowering the research community: to strengthen collaboration with the research and fact-checking communities and encourage good-faith independent efforts to understand and track disinformation.

These key commitments are a good baseline for further work, and we’re hopeful this Code will serve to drive change in the platform and advertising sectors, and complement parallel approaches to tackle this issue. Of course, as with any law, policy, or joint initiative, the proof of its effectiveness will be in the implementation.

As we’ve underlined previously, disinformation is often legal content; it is crucial not to put private companies in the role of assessing truthfulness, nor should that role be left to a government entity. The Code strikes that balance by not encroaching on fundamental rights such as free expression and the right to privacy, while still outlining steps that companies should take to thwart disinformation.

The Code process isn’t quite finished: in early October, the Commission plans to host an event where the Working Group members will officially sign the Code and present a roadmap of actions to be carried out over the next year.

We are thankful for the diligence of those involved, and we look forward to finalising this process with the European Commission and our community to apply this Code in practice.

Find the Code and Annex of best practices here, and the statement of the Working Group here.