Facebook’s latest transparency report shows a big jump in spam and violent content takedowns, some advances in proactively identifying hate speech, and the first numbers for bullying, harassment, and child sexual exploitation takedowns. The report was released as Facebook fends off numerous accusations of incompetent or underhanded behavior. In it, Facebook emphasizes its attempts to remove bad content before users ever see it, while fielding an ever-growing number of requests from governments.

This marks the release of Facebook’s second “community standards” transparency report, alongside the more traditional reports about copyright takedown notices and government data requests. Facebook took down far more pieces of unacceptable content — across every category — in July to September of 2018 than it did in the last quarter of 2017. We don’t have earlier points of comparison for bullying and harassment or for child sexual exploitation and nudity, but Facebook removed 2.1 million and 8.7 million pieces of content in those categories, respectively. (The second category includes legal, non-sexual nudity, which still isn’t allowed on Facebook.)

Spam is by far the biggest category, and takedowns have consistently grown every quarter. But they jumped even more than usual in the past quarter, with Facebook removing 1.23 billion pieces of spam — compared to 957 million, 836 million, and 727 million the quarters before. Fake account takedowns have remained consistently high, with 754 million closed in the past quarter; Facebook says these are mostly spam, although it’s periodically removed accounts (usually in the dozens or hundreds) linked to political propaganda campaigns.

One of the biggest relative jumps was in violent content: Facebook removed 15.4 million pieces of violent content between July and September of 2018, versus 7.9 million between April and June, and a mere 1.2 million between October and December of 2017. Facebook has also gotten better at removing this content before users report it, claiming to proactively find more than 96 percent of that material, compared to around 71 percent last year. It also says it proactively finds more than half the hate speech posts it takes down, compared to less than a quarter in late 2017.

Facebook is still fielding government requests for user data, which increased around 26 percent between the last half of 2017 and the first half of 2018. It’s also started restricting slightly more material in specific countries, for a total of 15,337 pieces of content — a rise of around 7 percent. It’s been blocked by countries at roughly the same rate as last year, but the vast majority of those disruptions this time appear to be concentrated in India.

Earlier this week, a group of human rights organizations asked Facebook to also release data about how often it restores content that it removed in error, part of a larger request for clearer and fairer “due process” for Facebook users who have posts or accounts restricted. Facebook hasn’t provided that data so far, but it’s still offering more information than in previous years — although after yesterday’s news, that may not satisfy many people.