Four months after launching a program to fight violent extremist content, YouTube says it has become far more efficient at identifying and removing such videos, thanks to its machine learning technology.

In June, YouTube detailed four steps it would take to fight the rising tide of such objectionable content on its platform. Its announcement came amid a general backlash against the tech industry for its role in enabling hate-fueled and terrorist-related messages.

YouTube pledged to deploy machine learning to tackle the issue. In addition, it said it would enlist the help of third-party experts, toughen its standards for controversial videos, and support voices that are “speaking out against hate and extremism.”

The company published an update on these efforts on its blog today. Most notably, it said the investment in machine learning seemed to be paying dividends.

The company wrote that it has “always used a mix of human flagging and human review together with technology” to help it spot violent content. The program introduced in June added machine learning to flag violent extremist content, which is then reviewed by human teams.

YouTube said that over the last month, 83 percent of the violent extremist videos it removed were spotted without a human flag, up 8 percentage points from August. The company said its human teams have reviewed more than a million videos since June, adding new context and information to continue improving the machine learning efforts.
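The workflow YouTube describes, machine flagging followed by human review whose decisions feed back into the model as training data, is a standard human-in-the-loop pattern. The sketch below is purely illustrative: the class names, fields, and threshold are hypothetical, and it is not based on YouTube's actual implementation.

```python
# Illustrative sketch only: a generic machine-assisted flagging loop in which a
# model scores uploads, likely violations go to human reviewers, and reviewer
# decisions become new labeled training data. Names and thresholds are hypothetical.
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.7  # hypothetical score above which a video is queued for review

@dataclass
class Video:
    video_id: str
    features: dict  # e.g. signals extracted from the video and its metadata

@dataclass
class ReviewDecision:
    video_id: str
    is_violation: bool  # the human reviewer's final call

@dataclass
class FlaggingPipeline:
    review_queue: list = field(default_factory=list)
    training_data: list = field(default_factory=list)

    def score(self, video: Video) -> float:
        """Stand-in for a trained classifier; returns a violation probability."""
        return video.features.get("model_score", 0.0)

    def ingest(self, video: Video) -> None:
        """Machine flagging step: route likely violations to human review."""
        if self.score(video) >= REVIEW_THRESHOLD:
            self.review_queue.append(video)

    def record_review(self, decision: ReviewDecision) -> None:
        """Human review step: keep the labeled outcome to retrain the model later."""
        self.training_data.append(decision)


if __name__ == "__main__":
    pipeline = FlaggingPipeline()
    pipeline.ingest(Video("abc123", {"model_score": 0.92}))  # flagged for review
    pipeline.ingest(Video("def456", {"model_score": 0.10}))  # not flagged
    for video in pipeline.review_queue:
        # A reviewer confirms or rejects the machine flag; here the flag is confirmed.
        pipeline.record_review(ReviewDecision(video.video_id, is_violation=True))
    print(len(pipeline.review_queue), "queued;", len(pipeline.training_data), "labeled")
```

In this toy version, the labeled decisions accumulate exactly as the post describes: reviewer judgments become the data that improves the automated flagging over time.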

It also acknowledged that “as we have increased the volume of videos for review by our teams, we have made some errors. We know we can get better and we are committed to making sure our teams are taking action on the right content.”

“Terrorist and violent extremist material should not be spread online,” the blog post reads. “We will continue to heavily invest to fight the spread of this content, provide updates to governments, and collaborate with other companies through the Global Internet Forum to Counter Terrorism.”