Online harassment and hate speech have long festered on Twitter, but the incidents appeared to rise during the presidential campaign. Exchanges between supporters of President-elect Donald J. Trump and Hillary Clinton grew personal and acrimonious. Many of Mr. Trump’s supporters also relied on a series of images — some anti-Semitic and others quietly coded as racist — to circulate hate speech on Twitter.

Since Mr. Trump’s victory last week, Twitter has been filled with reports of racist and derogatory taunts against minorities. Many users have expressed fear and concern about the escalation of such behavior. When asked about harassment of minorities, Mr. Trump told “60 Minutes” that his supporters should “stop it.”

Twitter has not had a comprehensive response to hate speech, largely because the company did not want to limit freedom of expression on the service. But over time, Twitter has rolled out measures to tackle the problem. It has let people mute the accounts of other users, effectively making their content disappear from view. Last year, it issued an explicit prohibition against hateful conduct.

The company is now taking more action. It is letting people filter more precisely what they do not want to see on the service, including muting specific words, phrases and even entire conversations. Twitter is also making it easier for people to report abusive behavior, even if they are only bystanders to the abuse, and for the company to evaluate those reports. And it has overhauled its approach to training support teams, holding special sessions on cultural and historical context for hateful conduct.

“Someone looking at user complaints in Asia may not recognize something happening in the E.U. or the U.S. as hateful,” said Del Harvey, Twitter’s vice president of trust and safety. “We need to make sure there is a universal familiarity with the most common trends and themes we’re seeing that are abusive, but may not seem so at first glance.”