Mark Zuckerberg, chief executive officer and founder of Facebook Inc., speaks during a joint hearing of the Senate Judiciary and Commerce Committees in Washington, D.C., on Tuesday, April 10, 2018.

Facebook has stepped up its fight against fake accounts. On Thursday, in its third periodic Community Standards Enforcement Report, the company said it took action on nearly twice as many suspected fake accounts in the first quarter of 2019 as it did in the fourth quarter of 2018. The company attributed the uptick to "automated attacks by bad actors who attempt to create large volumes of accounts at one time."

On a call discussing the report, Facebook CEO Mark Zuckerberg responded to calls to break up his company on antitrust grounds, saying a breakup would hurt Facebook's efforts to combat fake news and other content that violates its policies.

"The amount of our budget that goes toward our safety systems is greater than Twitter's whole revenue this year," Zuckerberg said on the call. "We're able to do things that I think are just not possible for other folks to do."

Specifically, Facebook disabled 2.19 billion accounts in the first quarter of 2019, up from 1.2 billion in the fourth quarter of 2018. That is a huge number considering Facebook reported 2.38 billion monthly active users (MAUs) in the first quarter of 2019. A Facebook spokesperson said disabled accounts are not included in the MAU figure, since obvious fakes tend to be removed fairly quickly. Still, Facebook estimated that about 5% of the accounts counted as monthly active users are fake.

The latest report comes after Facebook in March announced a pivot to privacy that will eventually shift more of users' communications to private, encrypted channels via the chat functions of Instagram, Messenger and WhatsApp. Zuckerberg said Thursday that this pivot will make it harder for Facebook to find and remove the type of content covered in the report. "We'll be fighting that battle without one of the very important tools, which is of course being able to look at the content itself," Zuckerberg said. "It's not clear on a lot of these fronts that we're going to be able to do as good of a job on identifying harmful content as we can today."