Twitter claims that new tools that can proactively flag problematic content surfaced 38 percent of abusive tweets for review. The social media platform didn't disclose how many abusive tweets there were in total, or whether that number increased from last year. This is Twitter's first attempt to automatically flag tweets rather than rely solely on user reports, and the company acknowledges that the technology it's using is very much in its early stages. Hicks wrote that the same technology Twitter has been using to target spam, platform manipulation and other rule-breaking is being tested on abuse.

We're working hard to keep you safe. There's more work to do, but we want to share the results from our latest changes. https://t.co/tb8xCfOk1M — Twitter Safety (@TwitterSafety) April 16, 2019

"This time last year, zero percent of potentially abusive content was flagged to our teams for review proactively. Today, by using technology, 38 percent of abusive content that's enforced is surfaced proactively for human review instead of relying on reports from people using Twitter. This encompasses a number of policies, such as abusive behavior, hateful conduct, encouraging self-harm and threats, including those that may be violent," wrote Hicks.

Twitter has also caught on to the popular troll tactic of opening a new account following a suspension. The company reported that a total of 100,000 users were suspended between January and March 2019 after creating a new account, a 45 percent increase from the same period last year. Twitter also appears to be responding to abusive accounts faster. Three times more abusive accounts were suspended within 24 hours this year than the same period in 2018, the platform reported.

Doxxing users by exposing their private information on Twitter has been another popular trolling tactic. Twitter claims that 2.5 times as much private information has been removed since it let users automatically report tweets for sharing personal information. While revealing personal information is technically a violation of Twitter's rules, the social media platform has been accused of turning a blind eye to doxxing in the past.

The new metrics unveiled by Twitter, while promising, still don't give us a sense of how much abuse is left to tackle. More aggressive enforcement and reporting tools are meaningless if we don't know how many trolls are slipping through the cracks. Without a concrete idea of how much abuse is taking place on the platform, we have no way to tell whether these tools are actually working.

Following Twitter CEO Jack Dorsey's promise last September that the platform's objective is to "increase the health of public conversation", the company has made addressing previously unchecked abuse and harassment a top priority. The platform has unveiled features like in-app appeals for suspensions, limits on daily follows, and new reporting tools to flag bots. Twitter users can expect more changes soon as part of the initiative. Updated rules that Twitter believes will be simpler and easier to understand are coming in the next few weeks. An option to hide replies, which still requires users to do some of the work of policing their own accounts, will arrive in June.