- Facebook on Thursday released a report detailing the company's efforts to keep its platform clear of fake accounts, abusive material, illegal activity, spam, and other nefarious content.
- It said it banned 2.19 billion fake accounts in the first quarter of 2019, up from 1.2 billion in the fourth quarter of 2018.
- "The amount of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time," the company's vice president of integrity, Guy Rosen, wrote in a blog post.
- Facebook is also touting improvements in its ability to detect content like hate speech, allowing it to act preemptively.

Facebook says it banned a staggering 2.2 billion fake accounts in the first three months of 2019, almost as many as the real people it says use the social network.

On Thursday, the Silicon Valley tech giant released the third edition of its Community Standards Enforcement Report, which details the company's efforts to keep its platform clear of fake accounts, abusive material, illegal activity, spam, and other nefarious content.

It details a striking jump in the number of fake accounts it took action against: 2.19 billion were banned in the first quarter of 2019, up from 1.2 billion in the fourth quarter of 2018. "The amount of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time," the company's vice president of integrity, Guy Rosen, wrote in a blog post.

The data illustrates the sheer volume of malicious activity still present on Facebook's platform. The company reported 2.38 billion genuine monthly active users at the end of March.

The number of posts Facebook identified as hate speech also continued to climb — it removed 4 million such posts in the most recent quarter, up from 3.3 million in the previous three months and from 2.5 million in the first quarter of 2018. Facebook said its ability to proactively detect this content had also improved, with 65.4% of it detected by the company's systems and processes, up from 58.8% the previous quarter.

Facebook is touting its improved detection capabilities as a success, allowing it to take action against problematic or illegal content more quickly, before it spreads across the network and causes harm.

"In six of the policy areas we include in this report, we proactively detected over 95% of the content we took action on before needing someone to report it," Rosen wrote. "For hate speech, we now detect 65% of the content we remove, up from 24% just over a year ago when we first shared our efforts. In the first quarter of 2019, we took down 4 million hate speech posts and we continue to invest in technology to expand our abilities to detect this content across different languages and regions."

Got a tip? Contact this reporter via encrypted messaging app Signal at +1 (650) 636-6268 using a non-work phone, email at rprice@businessinsider.com, Telegram or WeChat at robaeprice, or Twitter DM at @robaeprice. (PR pitches by email only, please.) You can also contact Business Insider securely via SecureDrop.