Twitter says it has suspended more than 125,000 accounts since mid-2015 "for threatening or promoting terrorist acts".

In a blog post, the US-based firm said the accounts were "primarily related to ISIS" (the so-called Islamic State group).

"We condemn the use of Twitter to promote terrorism," it said, adding that it had increased its report reviewing teams to react faster.

Twitter has more than 500 million users around the world.

"We have already seen results, including an increase in account suspensions and this type of activity shifting off of Twitter," the company said.

It added that it was co-operating with law enforcement bodies "when appropriate" as well as other organisations.

Governments around the world - including the US - have been urging social media companies to take more robust measures to tackle online activity aimed at promoting violence.

Analysis - Dave Lee, BBC North America technology reporter in San Francisco

The negative way of looking at this situation is that Twitter's problem with terrorism-related posts is a lot worse than we thought.

A study towards the tail-end of 2014 estimated that around 46,000 accounts had been used to post extremist material - so in just over a year, that number has rocketed.

But of course, the positive way of looking at it is that Twitter is seemingly on top of the issue and taking it seriously. It's doing what it can to make sure the public knows this, at a time when many in government are hitting Silicon Valley companies with large doses of "surely something can be done" rhetoric.

The big question is what happens next. Terrorists will carry on making more accounts, as well as migrating to other platforms.

And questions will be raised about the removal process. Who decides? Who's keeping watch? The definition and perpetrators of terrorism can change depending on your geography and political views.

Twitter will now be asked: why not fascist tweets? Or anti-Israel? Anti-Palestine? Anti-women? Anti-[insert cause here]?

In December, US politicians put forward a bill that would force such companies - including Twitter and Facebook - to report any apparent terrorist activity they find.

EU officials have also been calling for talks with major social media firms to discuss the issue.

In March, Facebook revamped its "community standards" to include a separate section on "dangerous organisations".

It said it would ban groups promoting "terrorist activity, organised criminal activity or promoting hate."