Facebook said it removed 3.2 billion fake accounts from its service from April to September, up slightly from 3 billion in the previous six months. Nearly all of the bogus accounts were caught before they had a chance to become "active" users of the social network, so they are not counted in the user figures the company reports regularly. Facebook estimates that about 5% of its 2.45 billion user accounts are fake.

The company said in a report Wednesday that it also removed 18.5 million instances of child nudity and sexual exploitation from its main platform in the April-September period, up from 13 million in the previous six months. It said the increase was due to improvements in detection.

In addition, Facebook said it removed 11.4 million instances of hate speech during the period, up from 7.5 million in the previous six months. The company said it is beginning to remove hate speech proactively, the way it does with some extremist content, child-exploitation and other material.

Facebook expanded the data it shares on its removal of terrorist propaganda. Its earlier reports included data only on al-Qaida, ISIS and their affiliates. The latest report shows Facebook detects material posted by extremist groups other than ISIS and al-Qaida at a lower rate than material from those two organizations.

The report is Facebook's fourth on standards enforcement and the first to include data from Instagram in areas such as child nudity, illicit firearm and drug sales, and terrorist propaganda. The company said it removed 1.3 million instances of child nudity and child sexual exploitation from Instagram during the reported period, much of it before people saw it.

In addition to fighting fake accounts, Facebook has unveiled new safety and transparency measures intended to better safeguard the U.S. election process and prevent a repeat of the misinformation campaign that rocked the 2016 election.

Facebook was just one of the platforms exploited in the Russian effort to sow discord and dissension in the U.S. during the 2016 race. In the lead-up to the election, Russian troll farms trolled and spread misinformation across Facebook's sites, with various actors using the platform to target vulnerable populations, discourage voting and stir white nationalism. The company drew widespread criticism for failing to better prevent foreign influence in the democratic process.

According to testimony by top intelligence chiefs, U.S. officials believed the Kremlin sought to interfere directly in the U.S. election and pave the way for a Trump presidency, with a wave of misinformation serving as one tactic in the multi-pronged attack.