The decision is part of an ongoing effort by the company to better protect its users. In 2018, the company conducted a survey that drew roughly 8,000 responses from people in approximately 30 countries. The most consistent feedback was that the language of its harmful conduct policy needed to be clearer, and that the company needed to enforce its policies more consistently. In response, Twitter says it has developed a more in-depth training process for the employees who review abuse reports, and that it is spending more time testing new rules to determine whether they need clarification.

Moving forward, the company plans to work with a group of outside experts to decide how it should approach hate speech related to topics like race, ethnicity and national origin. "This group will help us understand the tricky nuances, important regional and historical context," the company said.

The policy update follows technological investments the company has made on the safety front. As of late last year, the company said that, thanks to updates to its moderation algorithms, it was able to spot and remove 50 percent of abusive tweets before users flagged them.