Home secretary Sajid Javid has said he will "take action" if major technology firms don't do more to tackle the spread of child sexual abuse imagery online. Carl Court/Getty Images

The tech industry is not doing enough to fight online child sexual abuse. That was the thrust of home secretary Sajid Javid’s speech this week, as he said he was “demanding” that technology companies do more or face legislation.

“If the web giants do not take more measures to remove this type of content from their platforms, then I won’t be afraid to take action,” he said, threatening legislation that would be shaped by “the action and attitude that the industry takes.”


Javid’s call to action comes alongside new figures from the National Crime Agency (NCA), which reveal that referrals of child abuse images have risen by 700 per cent since 2012. There are a number of possible factors behind the rise, from ease of image sharing to more diligent reporting of abusive content. Authorities, though, are pushing the message that the tech industry has an essential role to play, with the NCA saying that technology companies doing more to remove indecent images from circulation would be a “monumental landmark” in child protection.

Problem is: what does ‘doing more’ look like?


“We’ve heard this all before,” says Alex Krasodomski-Jones, a researcher working for the Centre for the Analysis of Social Media at cross-party think-tank Demos. “I’m no big fan of social media giants, but they’ve heard this once every four months.” He also questions whether it is fair to place blame solely on “web giants” such as Google and Facebook.

“Is this going after the same big tech companies Amber Rudd was targeting [around encryption]? If so, that’s disappointing. Those companies are working extremely hard and they have the resources.”


Facebook, Twitter, and Google all use Microsoft’s PhotoDNA image hashing system, which compares images against a vast database of previously flagged material, with the aim of preventing duplicates being uploaded in the first place.
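In outline, hash-based filtering works something like this sketch. It is deliberately simplified: PhotoDNA itself is proprietary and uses perceptual hashes that survive resizing and re-encoding, whereas the cryptographic hash below only matches byte-identical files, and the "database" is a stand-in set of invented entries.

```python
import hashlib

# Stand-in for a database of hashes of previously flagged images.
# (Invented entry; real systems hold millions of perceptual hashes.)
KNOWN_HASHES = {
    hashlib.sha256(b"flagged-image-bytes").hexdigest(),
}

def allow_upload(image_bytes: bytes) -> bool:
    """Return False if the image matches a previously flagged hash,
    blocking the duplicate before it is ever published."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in KNOWN_HASHES

print(allow_upload(b"flagged-image-bytes"))  # False: matches the database
print(allow_upload(b"new-image-bytes"))      # True: no match, upload proceeds
```

The key design point is that matching happens at upload time, so known material is stopped before it circulates rather than taken down afterwards.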

The problem, Krasodomski-Jones says, is the “myriad of sites that don’t have the same resources”. And they're not the government’s much-criticised tech giants.

The latest report from the Internet Watch Foundation (IWF), for example, notes that image hosting sites are abused “significantly more” than other services, hosting 72 per cent of referred child sexual abuse images in 2016, compared to only 1.1 per cent on social networks. These sites let users upload images that are then made available through a unique URL, and are often only run by small staffs — there is a gulf between the mass of images hosted by these sites and the humans able to monitor them.


One crucial area of focus, therefore, is urging large companies to make their capabilities to identify illegal content available to smaller platforms. Alongside Javid’s warning, Google announced it would share an AI toolkit — which is able to identify child abuse content through image processing — with NGOs and industry partners.


If deep neural networks can help organisations flag up potentially abusive material to human moderators, that could well be of importance to tiny sites that host enormous numbers of images but have limited resources to monitor them. Whether this can be rolled out on a global level is another matter.
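The workflow described here can be sketched in a few lines: a classifier assigns each image a score, and anything above a threshold is routed to a human moderator rather than acted on automatically. The scores below are invented; in practice they would come from a trained image model.

```python
# Hypothetical triage step for a small image-hosting site: a model
# scores each upload, and only high-scoring items reach the (limited)
# pool of human moderators.
def triage(scored_images, threshold=0.8):
    """Return the filenames that need human review."""
    return [name for name, score in scored_images if score >= threshold]

# Invented scores standing in for classifier output.
scores = [("a.jpg", 0.95), ("b.jpg", 0.10), ("c.jpg", 0.85)]
print(triage(scores))  # ['a.jpg', 'c.jpg']
```

The point of the threshold is exactly the resource problem the article describes: moderators only see the small fraction of uploads the model flags, not the whole firehose.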

“An example I would use on a global basis is whether a Russian social media company would want to cooperate with a US social media company. Politically, that’s something that might not take place,” explains Fred Langford, deputy CEO of the IWF. And that, right there, is the big problem facing big tech.

Social media giants might host only a small percentage of child sexual abuse material, but that doesn’t mean they are completely guiltless. During his speech, Javid also called on tech firms to address grooming on their platforms.

He said a meeting is planned for November between industry experts, in partnership with Microsoft, to come up with tools that can detect when a predator is grooming a child via online chat.


“We must not forget that the first step towards abuse is seldom through the dark web, but through the popular social media sites that children use every day,” says Peter Wanless, CEO of children’s charity NSPCC.

“The last decade has shown that, left to themselves, social networks will not face up to this. That’s why to tackle these crimes at source — and disrupt abuse before it escalates — effective, enforceable regulation with teeth must be part of [the] government’s response.”

Preventing grooming could, however, prove much tougher than using image recognition to spot abusive images. “The problem is a lot of it is nuanced; whether it’s someone grooming, or two people of the same age discussing sex,” says Langford. “To the best of my knowledge a solution hasn’t been found that would tackle it.”

He mentions the scenario of a predator contacting 100 children in one night. “If they get one ‘bite’ then the other 99 haven’t been victims of grooming,” Langford explains, but the data of those conversations is still relevant to tracking the predator. In that case, where do we draw the lines around what constitutes ‘grooming’, and how can data be leveraged in a way that doesn’t cross privacy lines?

“It needs to be a nuanced response,” says Rob Jones, NCA lead for tackling child sex abuse. “Intervening in communications where there is a concern is something platforms need to be prepared to do. But a nuanced approach is needed. It’s not as black and white as the movement of images.”

Turning to artificial intelligence to sift through millions and millions of online conversations might seem like an obvious solution — but there is no guarantee that technology alone will cut it. “The idea that we should build a magic algorithm is pretty unlikely [given the complexity of the issue]. The technology is good but by no means perfect,” Krasodomski-Jones says.

He adds that railing against big tech also runs the risk of masking deeper, underlying issues around who holds the responsibility for policing the internet. “There’s a crisis in law enforcement as it moves away from the notion of a policeman in every community, to the needs of policing a massive online space. It needs more cooperation and more agreement.”


The relationship between big tech firms, smaller sites, government and law enforcement is nebulous, but a threat as complex as online child sexual exploitation – now perpetrated through a cluster of ever-evolving techniques, including live-streaming and convoluted URL redirects often used by the advertising industry – can only be tackled through more collaboration at all levels.

“There’s a lot of work to be done of [moderation] communities coming together,” says professor Sue Black, a forensic anthropologist who has developed a breakthrough technique for tracking paedophiles by identifying marks on the backs of their hands.

“There’s no question that our enablement of digital technology has eased the access of these images,” says Black, noting not only the development of smartphones and social media but also the economics of the child abuse industry. “If tech has helped to enable the industry, it could hamper the industry as well.”