YouTube detailed the work it put into removing harmful content last quarter, revealing that over 100,000 individual videos were removed.

That number represents a 5x increase in video removals from Q1 to Q2. In addition, over 17,000 channels and 500 million comments were removed in the same span of time.

The spikes are largely attributable to the removal of older videos, channels, and comments that were allowed before YouTube’s new hate speech policy was introduced in early June.

YouTube says it uses machine learning technology to catch harmful content before it’s ever viewed by users. Over 87% of videos removed in the second quarter of 2019 were first flagged by YouTube’s automated systems.

Further, an update to YouTube’s spam detection systems led to an increase of over 50% in the number of channels removed for violating spam policies.

Noting that it’s determined to continue reducing the visibility of videos that violate its policies, Google has tasked 10,000 people companywide with detecting, reviewing, and removing content that violates YouTube’s guidelines.

This report from YouTube is the first in what will be four installments dedicated to the company’s four principles:

Remove content that violates policies

Raise up authoritative voices

Reward eligible creators

Reduce the spread of borderline content

YouTube will provide more detail on the work it’s done to support these principles over the coming months.