Jessica Guynn

USA TODAY

SAN FRANCISCO — Twitter says it's going on the offensive, using algorithms to seek out and clamp down on accounts that engage in abusive behavior, even if no one has reported the behavior.

The social media company made the announcement Wednesday as part of its latest efforts to curb abuse on Twitter.

It's a major shift for Twitter to take on some of the responsibility for policing abuse, rather than relying solely on victims to report it, as it has in the past.

Accounts found to be violating Twitter's rules will be temporarily penalized by having their functions and reach limited, such as only allowing their followers to see their tweets. The restriction is less severe than the current penalty for having an account marked as abusive: suspension or expulsion.

"We aim to only act on accounts when we’re confident, based on our algorithms, that their behavior is abusive. Since these tools are new we will sometimes make mistakes, but know that we are actively working to improve and iterate on them every day," Ed Ho, vice president of engineering, said in a blog post.

Keyword muting, avoiding 'eggs'

Another big and frequently requested change: Twitter users will be able to mute certain keywords, phrases or entire conversations from their timeline for as long as they want: a day, a week, a month or indefinitely. Already people can mute those things from their notifications.

Twitter says it's also giving users more control over the notifications that they see from the types of accounts frequently created to harass fellow users. Those accounts tend not to have a profile picture (defaulting to the picture of an egg) or verified email addresses and phone numbers.

Finally, Twitter users who file reports about abuse or harassment, whether of themselves or someone else, will be notified when Twitter begins looking into the report and if Twitter takes action on the report.


"Twitter has been proceeding carefully and thoughtfully in thinking through and rolling out tools designed to help harassment victims," said University of Maryland law professor Danielle Citron, who advises Twitter on these issues. "Those tools aim to put victims in the driver's seat but also tackle how overwhelming it can be when attacked by a cyber mob. The newest tool helps ensure that a harasser's provocations of others don't fill up victims' notifications."

The strategy is to let people vent as long as they don't violate Twitter's rules while shielding victims "from the emotional heartache that ensues when a harasser gets a crowd to join in on the attacks," said Citron, who is a member of Twitter's Trust and Safety Council.

With the new tools, Twitter is trying to strike a balance between safety and censorship, she said.

Twitter, a service known for its 140-character limit, was founded on the ideals of openness and free speech, but it soon gained a reputation for limiting characters, not bad behavior. Stung by criticism that it allowed harassment and abuse to spread unchecked, the company has been making a flurry of updates to increase safety and well-being.

People don’t have to use their real names on Twitter. And with that anonymity has come racist, sexist and anti-Semitic taunts and even full-fledged campaigns from trolls, prompting the temporary departures of high-profile users such as Ghostbusters actress Leslie Jones after they were targeted by attacks.


CEO Jack Dorsey has pledged "a completely new approach to abuse." Ho has said Twitter will keep working on combating abuse "until we’ve made a significant impact that people can feel."

Twitter's reputation for abuse has taken a sharp toll on the company. Walt Disney Co. decided not to pursue a bid for Twitter, partly out of concern about bullying. As user and revenue growth stagnated and public backlash mounted, Twitter began to address complaints, rolling out more safety updates in the past few months than it had in the previous few years.