Jessica Guynn

USA TODAY

SAN FRANCISCO — With public backlash growing, Twitter says it's taking steps to crack down on hate speech, from making it easier to report alleged incidents on the social media service to educating moderators on what kind of conduct violates the rules.

Twitter users will also gain more control over their experience on Twitter with the ability to mute words and phrases, even entire conversations, if they don't want to receive notifications about them, said Del Harvey, Twitter's head of safety.

The effort comes as an uptick in biased graffiti, assaults and other incidents has been reported in the news and on social media since Election Day, prompting President-elect Donald Trump to call for people to "stop it" during a 60 Minutes interview on Sunday night. The FBI reports that hate crimes rose 7% in 2015, led by attacks on Muslim Americans.

It's also in response to escalating concern about abuse and harassment on Twitter that has stalled growth among users and advertisers alike.

Under chief executive Jack Dorsey, Twitter has pledged that "trust and safety" is among its top priorities. Earlier this year, the company improved its abuse reporting system and convened outside advisers on safety issues.

Yet little has changed. Watchdog groups say Twitter has for years failed to provide effective tools for its 317 million users to combat abuse, causing hate speech to spread on the platform.

"Hate speech is out of control on Twitter," said Heidi Beirich, spokeswoman for the Southern Poverty Law Center.

According to Brandwatch and the anti-bullying group Ditch the Label, racist language is the most common form of hate speech on Twitter. Of the 19 million tweets analyzed using specific search terms, more than 7.7 million contained racially insensitive language, the research found.

In one of the highest-profile incidents, Twitter banned Milo Yiannopoulos, a technology editor at the conservative news site Breitbart, in July for engaging in a campaign of abuse in which hundreds of anonymous Twitter accounts bombarded Ghostbusters actress Leslie Jones with racist and sexist taunts.

A report in October from The Anti-Defamation League documented the rise in anti-Semitic tweets targeting journalists who covered Trump, the Republican presidential candidate. Words that appear frequently in the profiles of these Twitter accounts: Trump, nationalist, conservative, white.

After the election, the neo-Nazi website The Daily Stormer published a list of more than 50 Twitter users who had expressed fear about the outcome of the 2016 election, urging its readers to "punish" them with a barrage of tweets that would drive them to suicide.

Jennifer Soto Segundo, a Clinton supporter in Orlando, Fla., says she expressed her dismay at Hillary Clinton's loss on Twitter, only to be greeted the next day by violent tweets — one from a user urging her to step into a gas chamber and another calling for her deportation.

"We are nothing but frustrated with Twitter," Beirich said. "I am glad they're putting out new policies but, given their past history, there's not a lot to make me hopeful about Twitter unfortunately."

Harvey, who has run safety at Twitter for eight years, says she's had a "front row seat" for the evolution of abuse and harassment on the social media service.

With the latest updates to its policies and features, Twitter is attempting to preserve free speech while drawing a hard line at "behavior that is intended to silence others," Harvey said.

"While we have taken steps over the years to try to combat abuse and harassment, we haven't moved as quickly as we would have liked or we haven't always done as much as we would have liked because we have tried to make sure we are not making decisions that have unintended, negative consequences and ramifications," she said.

HOW IT WORKS

First, Twitter users will now be able to report "hateful conduct" as a separate option. "Our hope is by having it as an explicit reporting option, it will make it easier for people to flag it to us and for us to take action on it as we can," Harvey said.

Having a separate option for reporting hateful conduct will also help Twitter better process reports from bystanders, alleviating the burden on the person being subjected to the abuse, Harvey said.

Second, Twitter has retrained support teams, offering sessions that teach the cultural and historical context to help moderators recognize hate speech, and has put in place an ongoing program to refresh employees on abuse and update them on new forms of it, Harvey said.

"As we looked at the instances we were missing...one consistent theme has been: This is something where the person reviewing the report just didn't have the framework or the background to understand what it was referencing," she said. "So in order to address that, we actually developed really tailored resources to provide historical, cultural contextualization for a lot of the common themes or phrases or terms that we were seeing and that we were seeing people not recognize."

In addition, Twitter is making changes to notifications, allowing users to mute words, phrases, emojis, even entire conversations, because, Harvey says, abuse "is acutely felt in notifications." Eventually, Twitter will expand the mute function to everywhere users see tweets. Twitter already allows users to mute accounts they don't want to see tweets from.

"Just to be clear: We are not saying by any means that as soon as this launches, the abuse issue will be solved on Twitter or that we are never going to get anything wrong again," Harvey said. "This is sort of just another step for us on this path but we do think it's a really important one for people to have the ability to control the experience and shape it more."

Below is an edited Q&A with Harvey.

Q: Have you seen the volume of hate speech increase during the Trump candidacy?

A: The thing is that Twitter, there's just a lot of tweets on Twitter. It would be difficult to have any sort of measurement of it. The thing that we keep coming back to is that numbers don't matter, in that if there is the perception that there is an increase or if people feel like they are being subjected to more, that's something we have to fix no matter what.

Q: The Anti-Defamation League documented a rise of anti-Semitism against journalists on Twitter. Are you saying that you don't have the ability to measure that?

A: I am saying that there are a whole bunch of different ways that you can measure these things and there's no one agreed upon way. I don't think we would feel we had succeeded just because we had some number go down. People have to feel safe and people have to feel like they have recourse available to them if they are being abused. That's what we have to try for. What we are trying to look at is how we can make sure that people know what is available to them and how they can make their experience the way they want it to be.

Q: Have you measured it?

A: What would you measure exactly? Because not all uses of a word are negative. We see a tremendous amount of counter speech on these concepts. We certainly look at volumes. We certainly look at trends. The data that's there has not actually been something that has been clear enough for us to use, as compared to, let's look at behavior, let's look at the actual words that you use.

Q: We noticed on election day that accounts people flagged that were trying to keep immigrants and people of color away from the polls were being suspended very quickly. Was there a special effort on election day?

A: We absolutely did do some work around combating some voter suppression type stuff that was happening, sure. And that's certainly an aspect of the work we have been doing. This is an ongoing effort. It's not as though, with the election now over, we're done, or it won't be a continued focus. This is an ongoing challenge and it's one we are going to keep pushing on.

Q: How do you deal with the presence, influence and increased activity of the alt right?

A: Honestly the way we are trying to address these challenges isn't specific to any group. If someone is engaged in behavior that violates our rules, around abuse and harassment or around hateful conduct, we're going to take action on them. That's independent of political affiliation or belief or anything else. Twitter, as you know, is a worldwide platform and there are challenges like this around the world in all sorts of forms. None of it is unique to any one party.