Facebook has been in the news again as more revelations about its abuse of users’ personal data emerge, but while the focus is on one surveillance corporation it’s easy to forget that Google, the other of the two tech giants which dominate online life, has many problems of its own. Last week brought us the news that Google’s subsidiary YouTube is now the most popular social media site among teens. That popularity makes one of Google’s most pressing responsibilities clear: curbing the insidious spread of extremist content on YouTube.

At the heart of the problem with YouTube is its recommendation system, which has a recognised tendency to push extremist, conspiracist, and white supremacist videos. As a result, Zeynep Tufekci has called it one of the most powerful radicalising instruments of the twenty-first century, leading viewers down “a rabbit hole of extremism”. On YouTube, autoplay (whereby, once a video ends, another algorithmically recommended video plays automatically) is the default setting.

This means that it isn’t safe to leave children or even teenagers alone with YouTube. But even with autoplay turned off, YouTube recommends videos for users to watch next. What viewers end up seeing may bear no relation to where they started. Catherynne Valente observes that while previous generations may have watched cartoons (and may have watched too many cartoons), the Smurfs never turned to the camera to tell kids that feminism is cancer and that nobody will ever love them. Quite.

Many other websites and online services have recommendation systems. But the user WhitePride88 isn’t able to game, say, Spotify’s recommendation system to push neo-Nazi songs to its listeners. This is because Spotify doesn’t allow third parties to upload songs without review; it procures content through labels and distributors. It’s a curated system: Spotify chooses what goes into its recommendation system, and so it has a significant degree of control over what that system recommends.

Of course, those uploading extremist material have learned how to hide it amongst innocuous-seeming content. They couch much of their language in euphemism, and they make arguments which may seem relatively reasonable at first glance but which are predicated on lies and bad-faith interpretations. They also aren’t likely to have a username as overtly racist as WhitePride88. This is why a teenager, perhaps lacking the knowledge and critical skills needed to see through these tactics, might begin watching innocuous videos of people playing games and later emerge immersed in the hate-filled rhetoric of the alt-right. The people pushing that rhetoric are smart, and they’ve learned how to play the recommendation game.

But this isn’t just a problem for teens. For obvious reasons it’s also an issue for vulnerable adults. And many others, raised with TV and film in the pre-internet age, are used to video being a somewhat reliable source of information and a relatively safe source of entertainment. When watching TV, they weren’t likely to encounter white supremacism framed as legitimate opinion or conspiracy theories pushed as fact unless they actively sought out InfoWars (or, less charitably, Fox News). When it comes to video, they might not apply the critical skills that they may bring to other forms of media, and with autoplay they might not realise that the channel has changed.

The fundamental difference between YouTube and Spotify is the one between open recommendation systems and closed ones. Open systems, such as YouTube’s, allow anyone to upload content which feeds automatically into the recommendation engine. Closed systems, like Spotify’s, are managed, with the result that the content they can recommend is curated to some degree.
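To make the distinction concrete, here is a minimal sketch in Python. All class and function names here are hypothetical, invented for illustration; this isn’t how YouTube’s or Spotify’s actual systems are built. The point is where the gate sits: in an open system, nothing stands between upload and the pool of recommendable content, while in a closed one, curation does.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    uploader: str

@dataclass
class OpenRecommender:
    """Open system: anything uploaded becomes recommendable immediately."""
    pool: list[Video] = field(default_factory=list)

    def upload(self, video: Video) -> None:
        # No gate: an upload feeds straight into the recommendation pool.
        self.pool.append(video)

@dataclass
class ClosedRecommender:
    """Closed system: only content that passes curation is recommendable."""
    pool: list[Video] = field(default_factory=list)

    def upload(self, video: Video, approved_by_curator: bool) -> None:
        # A curation gate sits between upload and recommendation.
        if approved_by_curator:
            self.pool.append(video)
```

The code itself is trivial; what matters is that in the closed system a curatorial judgement is a precondition for recommendation, not an afterthought.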

This distinction points to a potential solution. On services which are prone to recommending extremism, content should only be added to the recommendation system after human review. The role of human content curators here is crucial. There is talk from companies like Google and Facebook of using AI systems to weed out fake news and extremist content, but this is foolhardy. AI is not a panacea, and in any case the automated nature of recommendation systems is a large part of the problem in the first place. Just as America’s problem with guns isn’t likely to be solved by more guns, so the problems caused by automation aren’t likely to be solved by more automation.

So how could Google implement this? On YouTube, videos uploaded by vetted accounts could be added to the recommendation system automatically; videos from all other accounts would only be added once they had been reviewed, by humans, for extremist content. While extremist videos would still be available on YouTube if users went looking for them (if Google wants to host that content then that’s its business), these changes would mean that they wouldn’t automatically and by default be recommended to viewers.
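As a rough sketch of that gating rule (again in Python, again with hypothetical names; this illustrates the policy being proposed, not anything Google has built), the eligibility check might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    uploader: str

def eligible_for_recommendation(video: Video,
                                vetted_accounts: set[str],
                                review_verdicts: dict[str, str]) -> bool:
    """Decide whether a video may enter the recommendation system.

    Uploads from vetted accounts are eligible immediately; everything
    else stays out until a human reviewer has approved it. A video with
    no verdict yet is simply not recommended, though it remains
    watchable via search or a direct link.
    """
    if video.uploader in vetted_accounts:
        return True
    return review_verdicts.get(video.video_id) == "approved"

# Only the vetted upload and the human-approved upload qualify.
vetted = {"trusted_channel"}
verdicts = {"v2": "approved", "v3": "rejected"}
assert eligible_for_recommendation(Video("v1", "trusted_channel"), vetted, verdicts)
assert eligible_for_recommendation(Video("v2", "anon_uploader"), vetted, verdicts)
assert not eligible_for_recommendation(Video("v3", "anon_uploader"), vetted, verdicts)
assert not eligible_for_recommendation(Video("v4", "anon_uploader"), vetted, verdicts)  # awaiting review
```

Note the default: a video awaiting review is excluded rather than recommended until proven harmful, which is exactly what would make the system safe by default.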

For this to work, Google would need to invest in reviewers who are provided with the resources, training, and support needed to do the job effectively. They would need to be paid well and given more than a handful of seconds to make a judgement about any given video. The review process itself would need to be transparent, based on consistently applied and publicly available rules, and it would need to be accountable. This would be expensive, but Google can hardly claim that it’s short of money. This might just be the cost of doing business.

Yes, this brings Google into the murky world of content curation (or, I should say, content curation by people — it’s been curating content by algorithm for years). Yes, this could mean the end of Google’s claims that it isn’t ever a publisher (claims which have been on increasingly shaky ground of late anyway). So be it. If Google needs to act and be regarded as a publisher in relation to some of its services in order for it to be able to provide those services responsibly then maybe that is what should happen. And if it turns out that it can’t use recommendation systems in a responsible way then perhaps it shouldn’t use them at all.

Google is not the only company which has had problems with recommendation systems — Amazon has been known to recommend the ingredients needed to make bombs, for instance, and Facebook’s problems with, well, everything are well known. And there are good people at Google who care about their company’s impact on society. But so far, Google itself, like Facebook, has been unwilling to take the big steps needed to confront the challenges it faces. These companies have positions of significant influence, and it’s time that they took their responsibilities seriously. It’s time that Google fixed YouTube’s extremism problem.