Google has insisted that its robots are now superior to humans at identifying and blocking extremist videos, saying its systems flag three in four offensive YouTube videos before any user reports them.

The search giant said it more than doubled the number of illegal videos deleted from its platform last month after it adopted artificial intelligence moderators to help police content.

"With over 400 hours of content uploaded to YouTube every minute, finding and taking action on violent extremist content poses a significant challenge," said Google. "But over the past month, our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we’ve taken this kind of content down."

Google said the AI, which can spot offensive content before human users flag it, identified more than 75 per cent of the videos removed from YouTube last month. The company said the system's ability to spot illegal content has "improved dramatically" and that it is now more accurate than humans at flagging videos.

"While these tools aren't perfect, and aren't right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed," said Google.

Despite this, Google said it will also hire more people to review content and enforce its policies on the video-sharing site, as part of a package of changes designed to placate critics who say YouTube has become a propaganda channel for terrorist groups.