Facebook has expanded its definition of terror organizations amid mounting pressure to respond to the growing threat of white supremacist “lone wolf” shooters, like the one who carried out the mass shooting in Christchurch, New Zealand.

“While our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify,” Facebook stated in a blog post Tuesday.

The Christchurch shooting, which was livestreamed on the platform, “strongly influenced” Facebook's policy update, the company says, calling it a “terrorist attack.”

In addition, Facebook, which previously focused on removing content from organizations like ISIS and al-Qaeda, announced it has banned more than 200 white supremacist groups. The company used a combination of AI and human moderators to identify the extremist groups based on their behavior rather than their ideology. Over the past two years, the social media giant has removed 26 million pieces of content related to terrorist groups like ISIS and al-Qaeda, 99 percent of which was deleted before being flagged by a user.

As the New York Times points out, Facebook announced the policy changes and removal data just one day before it's scheduled to join Google and Twitter for a hearing on Capitol Hill about how tech companies are dealing with violent content.

Facebook also notes it will expand a program it started in March, which provides users with resources and support on how to leave hate groups, to its users worldwide. The company says it has a team of 350 people “with expertise ranging from law enforcement and national security, to counterterrorism intelligence and academic studies in radicalization.”

Some of these updates have been rolled out over the past few months; others have been company policy for a year but have not been “widely discussed.”