As we’ve previously blogged, lawmakers in the European Union are reflecting intensively on the problem of illegal and harmful content on the internet, and whether the mechanisms that exist to tackle those phenomena are working well. In that context, we’ve just filed comments with the European Commission, addressing some of the key issues around how to efficiently tackle illegal content online within a rights- and ecosystem-protective framework.

Our filing builds upon our response to the recent European Commission Inception Impact Assessment on illegal content online, and has four key messages:

- There is no one-size-fits-all approach to illegal content regulation. While some solutions can be generalised, each category of online content has nuances that must be appreciated.

- Automated control solutions such as content filtering are not a panacea. Such solutions are of little value when context is required to assess the illegality and harm of a given piece of content (e.g. copyright infringement, ‘hate speech’).

- Trusted flaggers – non-governmental actors which have dedicated training in understanding and identifying potentially illegal or harmful content – offer some promise as a mechanism for enhancing the speed and quality of content removal. However, such entities must never replace courts and judges as authoritative assessors of the legality of content, and as such, their role should be limited to ‘fast-track’ notice procedures.

- Fundamental rights safeguards must be included in illegal content removal frameworks by design, and should not simply be patched on at the end. Transparency and due process should be at the heart of such mechanisms.

Illegal content is symptomatic of an unhealthy internet ecosystem, and addressing it is something that we care deeply about. To combat an online environment in which harmful content and activity continue to persist, we recently adopted an addendum to our Manifesto, where we affirmed our commitment to an internet that promotes civil discourse, human dignity, and individual expression. The issue is also at the heart of our recently published Internet Health Report, through its dedicated section on digital inclusion.

As a mission-driven not-for-profit and the steward of a community of internet builders, we can bring a unique perspective to this debate. Indeed, our filing seeks to firmly root the fight against illegal content online within a framework that is both rights-protective and attuned to the technical realities of the internet ecosystem.

This is a challenging policy space, and we are committed to advancing progressive and sustainable policy solutions within it.

Read more: