The newly released third edition of Facebook’s Community Standards Enforcement Report found that five percent of monthly active accounts on the social network between October 2018 and March 2019 were fake.

This represents a one-to-two percentage point increase in fake account “prevalence” since the second edition of the transparency report was published last November. That earlier report found that only three-to-four percent of monthly active accounts were fake during the period from October 2017 through September 2018.

From January through March 2019, Facebook took action on 2.19 billion fake accounts, meaning the site applied a warning screen, removed content, and/or deleted the account. That sets a new high-water mark for this particular metric since Facebook launched its first Community Standards Enforcement Report one year ago, and it is nearly double the 1.2 billion actions taken from October through December 2018.

“We’re focused on… making sure we improve our ability to catch more in the coming quarters,” said Guy Rosen, VP of integrity and product management, in a press call. (Rosen also authored a summary of the transparency report.)

Facebook notes that these high numbers are largely the result of simplistic attacks launched by unsophisticated actors, many of them spammers, who try to create millions of fake accounts all at once. Such attempts are typically caught and stopped before the accounts are ever seen by users, notes Facebook VP of Analytics Alex Schultz in an online post explaining how the company measures fake accounts.

Over the last six months, Facebook took action on 99.7 to 99.8 percent of fake accounts before any user reported them, according to the report.

“We remain confident that the vast majority of people and activity on Facebook are genuine,” said Schultz, noting that not every fake account that slips past Facebook’s AI defenses is abusive or malicious. (For instance, a fake account could be the result of a user setting up a profile for their cat.)

Meanwhile, the company took action on spam 1.76 billion times from January through March 2019, and 1.75 billion times in the three months prior. In 99.9 percent of these cases, Facebook discovered the problem before users could report it.

The Community Standards Enforcement Report also offers statistics for instances of adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation, hate speech, regulated drugs, regulated firearms, terrorist propaganda, and violence and graphic content.

In the press call, Facebook CEO Mark Zuckerberg said that the next report will also include Instagram, and that as of next year the company will begin to publish the reports quarterly “because I think the health of the discourse is just as important as any financial reporting we do, so we should do it just as frequently.”

“Understanding the prevalence of harmful content will help companies and governments design better systems for dealing with it, and I believe that every major internet service should do this,” Zuckerberg continued.

Zuckerberg also acknowledged that plans to bring end-to-end encryption to Facebook Messenger and Instagram could make malicious behavior harder to spot, but added that the trade-off is worth it for the privacy advantages.

“We do believe encryption is an incredibly powerful tool for privacy and we are working to detect bad actors through things like identifying patterns of bad activity or building better tools for people to report bad content to us,” said Zuckerberg. “And we recognize that it’s going to be harder to find all of the different types of harmful content. We’ll be fighting that battle without one of the very important tools which is, of course, being able to look at the content itself… But we think that this trade-off of protecting people’s privacy and giving people world-class tools for privacy and security… is the right path forward.”

Zuckerberg also said the company is moving forward with plans to create an independent Oversight Board that will review appeals filed by users whose content is removed from Facebook, perhaps unjustly.