Video-sharing site responds to criticism over objectionable content by publishing report into scale of its moderation process

YouTube says it removed 8.3m videos for breaching its community guidelines between October and December last year as it tries to address criticism of violent and offensive content on its site.

The company’s first quarterly moderation report has been published amid growing complaints about its perceived inability to tackle extremist and abusive content.

YouTube, a subsidiary of Google’s parent company, Alphabet, is one of several internet companies under pressure from national governments and the EU to remove such videos.

It said the report was an important first step in dealing with the problem and would “help show the progress we’re making in removing violative content from our platform”.

In a blogpost, YouTube said it removed more than 8m videos between October and December 2017. “The majority of these 8m videos were spam or people attempting to upload adult content and represent a fraction of a percent of YouTube’s total views during this time period,” the post said.

YouTube said 6.7m were first flagged for review by machines rather than humans; of those, 76% were removed before they received a single view.

YouTube has also been criticised over content it allows. Days after the school shooting at Marjory Stoneman Douglas high school in the US in February, videos were promoted that claimed the survivors were “crisis actors” planted to build fake opposition to gun control.

One clip briefly became the number one trending video on the site before it was removed for violating policies on harassment and bullying. YouTube’s community guidelines do not specifically ban misinformation or hoaxes, although the company has announced plans to link to Wikipedia pages for the most obvious conspiracy theories.

Google has promised to have more than 10,000 people working on enforcing its community guidelines by the end of 2018, up from “thousands” doing the job last year. Most, but not all, will be human reviewers working on YouTube; the total also includes engineers working on systems such as spam detection, machine learning and video hashing.

The current removal process requires suspect content to be flagged, then watched to see whether it breaches community guidelines, before a decision is made on its removal.

The vast majority of videos taken down – more than 80% – were flagged as suspect by one of Google’s automatic systems, the company said, rather than an individual.

Those systems broadly work in one of three ways. Some use an algorithm to fingerprint inappropriate footage and then match it against future uploads; others track suspicious patterns of uploads, which is particularly useful for spam detection.

A third set of systems uses the company’s machine learning technology to identify videos that breach guidelines based on their similarity to previous videos. The machine learning system used to identify violent extremist content, for instance, was trained on 2m hand-reviewed videos.
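The fingerprint-matching approach described above can be sketched in a few lines. This is an illustration only: real systems such as YouTube’s use perceptual hashes that survive re-encoding and cropping, whereas the plain SHA-256 digest below only catches byte-identical re-uploads, and the database and function names are hypothetical.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Compute a stand-in fingerprint for a video's raw bytes.

    A real system would use a perceptual hash robust to re-encoding;
    SHA-256 here only matches exact copies.
    """
    return hashlib.sha256(video_bytes).hexdigest()

# Hypothetical database of fingerprints from previously removed footage.
known_bad = {fingerprint(b"previously-removed-clip")}

def check_upload(video_bytes: bytes) -> bool:
    """Return True if an upload matches known inappropriate footage."""
    return fingerprint(video_bytes) in known_bad

print(check_upload(b"previously-removed-clip"))  # exact re-upload: True
print(check_upload(b"a brand-new video"))        # no match: False
```

The advantage of this design is that once footage has been reviewed by a human, every future copy can be blocked automatically, often before it receives a single view.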

YouTube said that automatic flagging helped the company achieve a goal of removing more videos earlier in their lifespan.

While machine learning catches many videos, YouTube still lets individuals flag videos. Members of the public can mark any video as breaching community guidelines. There is also a group of individuals and 150 organisations who are “trusted flaggers” – experts in various areas of contested content who are given special tools to highlight problematic videos.

Regular users flag 95% of the videos that aren’t caught by the automatic detection, while trusted flaggers provide the other 5%. But the success rates are reversed, with reports from trusted flaggers leading to 14% of the removals on the site and those from regular users just 5%.
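A quick back-of-envelope calculation shows just how stark that reversal is. The flag and removal totals below are hypothetical; only the percentage splits come from the figures reported above.

```python
# Hypothetical volumes; only the percentage splits are from the report.
human_flags = 1_000_000              # assumed total human flags
trusted_flags = 0.05 * human_flags   # trusted flaggers: 5% of human flags
regular_flags = 0.95 * human_flags   # regular users: 95% of human flags

removals = 100_000                   # assumed total removals
trusted_removals = 0.14 * removals   # 14% of removals follow trusted flags
regular_removals = 0.05 * removals   # 5% follow regular-user flags

# Per-flag chance that a report leads to a removal.
trusted_hit_rate = trusted_removals / trusted_flags
regular_hit_rate = regular_removals / regular_flags

print(round(trusted_hit_rate / regular_hit_rate, 1))  # 53.2
```

On these ratios, a single report from a trusted flagger is roughly 50 times more likely to result in a removal than one from an ordinary user, whatever the absolute volumes.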

Human flaggers also spot a very different breakdown of videos to those reported by machines: more than half the reports from humans were for either spam or sexually explicit content.
