After discussions that reportedly took more than a year, Twitter is finally rolling out the ability to mute keywords in your notifications. The move, which comes after a series of high-profile abuse incidents that may have scuttled Twitter’s effort to sell itself, could begin to shield users from some of the worst harassment they face on the platform.

Starting today, you can go into your notification settings and manually add items you would like to mute. These can include words, phrases, usernames, emoji, or hashtags. After they’re added, tweets containing the muted items will not be sent to you as push notifications or appear in your notifications tab. They may still appear in search results, however. Twitter says it will eventually bring muting to unspecified other parts of the platform.

The update also lets you mute any conversations you’ve been dragged into against your will. To leave one of these so-called “canoes,” tap the down arrow on a tweet in the conversation, tap “mute the conversation,” and you’ll stop seeing updates. (You can undo your choice later if you like.)

Twitter has also changed its harassment reporting system so that users can now more easily report “hateful conduct,” which the company defines as behavior “that targets people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.”

Behind the scenes, the company says it has re-trained its support team on the cultural and historical context of the types of abuse directed at harassment victims on Twitter. Most of the team members are based internationally and often did not understand why some reported content was considered offensive, Twitter said.

“We haven’t always moved as quickly as we would have liked to.”

Del Harvey, who leads Twitter’s trust and safety efforts, acknowledged that keyword muting was a long time in coming. (Instagram, which seems to attract far less abuse by comparison, introduced the feature in August.) “We haven’t always moved as quickly as we would have liked to, and we haven’t always done as much as we would have liked to,” Harvey said. “Part of that is we’re trying to be very thoughtful about the decisions we make, and make sure there aren’t unintended and negative consequences.”

In the case of keyword muting, it’s hard to imagine a negative consequence of people blocking slurs from their notifications. And it’s easy to see the negative consequences suffered every day by Twitter users facing harassment. Harvey said that more anti-abuse features are coming — and that they will be coming more quickly than we are used to from Twitter.

“This is something that we are really passionate about, and really passionate about getting right,” she said. “I think the increased focus on it across the whole of the company is going to become more and more obvious over the coming months. We want to get this stuff right.” Here’s hoping.