Spam and fake accounts were the most prevalent: in the first quarter of this year, Facebook removed 837 million pieces of spam and 583 million fake accounts. Additionally, the company acted on 21 million pieces of nudity and sexual activity, 3.5 million posts that displayed violent content, 2.5 million examples of hate speech and 1.9 million pieces of terrorist content.

In some cases, Facebook's automated systems did a good job finding and flagging content before users could report it. Its systems spotted nearly 100 percent of spam and terrorist propaganda, nearly 99 percent of fake accounts and around 96 percent of posts with adult nudity and sexual activity. For graphic violence, Facebook's technology accounted for 86 percent of the reports. However, when it came to hate speech, the company's technology flagged only around 38 percent of the posts it took action on, and Facebook notes it has more work to do there. "As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse," Facebook's VP of product management, Guy Rosen, said in a post. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important."

Throughout the report, Facebook shares how the most recent quarter's numbers compare to those of the quarter before it, and where there are significant changes, it notes why that might be the case. For example, with terrorist propaganda, Facebook says its increased removal rate is due to improvements in photo detection technology that can spot both old and newly posted content.

You can read the full report here, and Facebook has also provided a guide to the report as well as a Hard Questions post about how it measures the impact of its enforcement. "This is a great first step," the Electronic Frontier Foundation's Jillian York told the Guardian. "However, we don't have a sense of how many incorrect takedowns happen -- how many appeals that result in content being restored. We'd also like to see better messaging to users when an action has been taken on their account, so they know the specific violation."

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too," wrote Rosen. "This is the same data we use to measure our progress internally -- and you can now see it to judge our progress for yourselves. We look forward to your feedback."