On one hand, Facebook, the world's biggest social media company, represented that it had just under 2.4 billion monthly active users from across the globe.

On the other hand, Facebook on Thursday said it had removed a record 2.2 billion fake accounts in the first quarter. That figure, while casting doubt on the former number (it is almost as big as Facebook's entire universe of "active" users), also demonstrates how the company is battling an avalanche of what Bloomberg called "bad actors" trying to undermine the authenticity of the world's largest social network. Alternatively, one wonders just how credible the 2.4 billion MAU number is when there is such an onslaught of fake accounts.

Facebook disabled just over 1 billion fake accounts in the last quarter of 2018 and 583 million in the first quarter of that year. According to Facebook, the vast majority of such "fake" accounts are removed within minutes of being created, so they are not counted in Facebook's closely watched monthly and daily active user metrics, although there is, of course, no way to verify any of these claims.

Facebook also shared a new metric in Thursday's report: the number of posts removed for promoting or engaging in drug and firearm sales. In the first quarter of 2019, Facebook pulled more than 1.5 million posts in these categories. According to Bloomberg, the report "is a striking reminder of the scale at which Facebook operates - and the size of its problems."

The disclosures were made as part of the company's third-ever content transparency report, a bi-annual document outlining Facebook's efforts to remove posts and accounts that violate its policies. In the report, the company also said it was getting better at finding and removing other troubling content, such as hate speech.

The report also offered some insights into Facebook's AI algorithms, which apparently work well for certain categories, like graphic and violent content: Facebook says it detects almost 97% of all graphic and violent posts it removes before a user reports them to the company. On the other hand, Facebook is still terrible at detecting the type of graphic or violent content that really matters, such as the "promotional" variety used in live videos, a "blind spot" that allowed a shooter to live-broadcast his killing spree at a New Zealand mosque earlier this year.

The company said its software also hasn't worked as well for more nuanced categories, like hate speech, where context around user relationships and language can be a big factor. That is also why Facebook has cracked down on all aspects of speech and expression, hiring third-party scanners, although it has now been documented that a majority of Facebook's "banned" content tends to come from conservative sources.

Still, Facebook says it’s getting better. Over the past six months, 65% of the posts Facebook removed for pushing hate speech were automatically detected. A year ago, that number was just 38%.

Finally, in a blog post published earlier today by Alex Schultz, VP of Analytics, discussing how Facebook measures fake accounts, the company made the following representations: