Wednesday, October 11th, 2017 (10:44 am) - Score 2,766

The Government has today published their Internet Safety Strategy green paper, which sets out how they intend to tackle online dangers like cyber-bullying, trolling and under-age access to porn. But it also appears to soften earlier plans for a mandatory approach to network-level ISP internet censorship.

Broadly speaking the new paper, which will help to form a foundation for the Government’s forthcoming Digital Charter, doesn’t include much that would concern internet access (broadband) providers. Instead it appears to be predominantly focused upon internet content providers (e.g. social networks like Facebook).

Key Proposals of the Internet Safety Strategy

* A new social media code of practice to see a joined-up approach to remove or address bullying, intimidating or humiliating online content.
* An industry-wide levy so social media companies and communication service providers contribute to raise awareness and counter internet harms.
* An annual internet safety transparency report to show progress on addressing abusive and harmful content and conduct.
* And support for tech and digital startups to think safety first – ensuring that necessary safety features are built into apps and products from the very start.

We should first point out that the proposal for an “industry-wide levy”, which could in the future be underpinned by legislation, isn’t quite the hard-hitting industry tax that it is perhaps made to sound like. Instead the Government explains that they will “look to secure contributions on a voluntary basis through agreements with industry, and we will seek industry involvement in the distribution of the resource.”

On top of that we also found it interesting to note that the Government appears to have changed their stance on earlier demands for broadband ISPs to all adopt mandatory network-level internet filtering (parental controls), which would by default have required providers to censor websites that contain adult content (often this extends beyond porn). This was a common debate during 2015 and 2016, albeit one that seemed to conflict with the EU’s Net Neutrality safeguards.

At present all of the largest broadband ISPs (Sky Broadband, BT, Virgin Media and TalkTalk etc.) already help to block so-called “adult content” from young eyes via a self-regulatory approach (i.e. new and existing customers are given a choice about whether or not to enable the filtering). Initially the fear was that the Digital Charter might attempt to make this mandatory but the language points to support for self-regulation.

Green Paper’s Position on UK ISP Parental Controls

The current industry-led self-regulatory approach on parental control filters works well, as it encourages parents to think about online safety, but applies filters where they are not engaged. Internet Service Providers (ISPs) are best placed to know what their customers want, and to deliver flexible parental control tools that keep up-to-date with rapid changes in technology. A mandatory approach to filters risks replacing current, user-friendly tools (filtering across a variety of categories of content, but built on a common set of core categories) with a more inflexible ‘top down’ regulatory system. The control filters offered by the ‘big four’ ISPs (Sky, Virgin Media, TalkTalk and BT), cover the vast majority of UK subscribers. ISPs have transparent mechanisms in place for anonymous reporting of any ‘over-blocking’, and allow customers to ‘white-list’ sites.

On the other hand it’s worth remembering that the new Digital Economy Act 2017 (summary) has already introduced an age-verification system for websites that contain pornographic content, which is due to be enforced in 2018. Under this approach the British Board of Film Classification (BBFC) will gain the power to force ISPs into blocking porn websites that fail to put “tough age verification measures” in place.

In that sense the Government’s softer tone towards the existing self-regulation approach by ISPs may be a moot point, although it will be interesting to see whether any blocks enforced via the BBFC also impact broadband subscribers that haven’t chosen to enable Parental Controls on their connection.

In fact quite a lot of what has been proposed today is really just a reflection of the measures that were introduced earlier this year through the DEAct.

Karen Bradley, DCMS Secretary of State, said: “The Internet has been an amazing force for good, but it has caused undeniable suffering and can be an especially harmful place for children and vulnerable people. Behaviour that is unacceptable in real life is unacceptable on a computer screen. We need an approach to the Internet that protects everyone without restricting growth and innovation in the digital economy. Our ideas are ambitious – and rightly so. Collaboratively, government, industry, parents and communities can keep citizens safe online, but only by working together.”

The desire to make the United Kingdom the “safest place in the world to be online” will also result in the police treating online abuse in the same way as they do offline, which they will partly achieve by establishing a new national police online hate crime hub to “act as a single point through which all reports of online hate crime are channeled.”

We fear that the sheer volume of racist comments, bullying and hate speech that occurs online may quickly overwhelm the new hub, particularly as it is due to begin operating by the end of 2017 and will only be supported by an initial funding level of just £200,000 per year. In no way is that enough to make any serious dent in the problem.

One other big concern is that the drive to rid the internet of bad content could risk running into a conflict with freedom of speech and the perilously tedious issue of context. Lest we forget, there’s the problem of how you define “hatred” and “terrorism” online in the first place and then separate that from related content that may include criticism of the same subject, as well as satire, the right to cause offence, political free speech and so forth.

We can see a future where big commercial websites all adopt automated filtering systems in order to keep themselves safe from the law, not least because manual validation of every piece of user-submitted content would be impossible. But as a consequence those filters might easily end up removing harmless content and impacting free speech. This certainly appears to be the EU’s current direction of travel (here).

UPDATE 2:06pm

The UK Internet Services Providers’ Association (ISPA) has chimed in with a comment.