Facebook will have some of its content moderators focus only on hate speech, a company executive said Thursday.



The company saw a "steep increase" in fake and abusive accounts in the last six months. It removed 3 billion accounts during the October through March period, but many more likely fell through the cracks.

An executive called its hate speech policy "one of the toughest policies to enforce."



Facebook will have some content moderators focus entirely on removing hate speech as the social network struggles to get fake and abusive content on its platforms under control. The company's vice president of global operations, Justin Osofsky, made the announcement on Thursday as Facebook issued its semiannual report on hate speech and fake accounts.

The company's policy on hate speech is "one of the toughest policies to enforce," Osofsky said. A term that would be considered a slur in many contexts could also be "a joke used self-referentially about the bigotry one has experienced," he said. That makes policing hate speech a challenge both for the company's artificial intelligence systems and for its army of human moderators, who try to take down content that violates Facebook's policies.


The company removed 7 million posts, photos and other content that broke its hate speech rules, it said.

Fake accounts double



Facebook said it saw a "steep increase" in the creation of abusive, fake accounts in the past six months. The company took down more than 3 billion fake accounts from October to March, twice as many as it did in the previous six months.

While most of these fake accounts were blocked "within minutes" of their creation, the company said this increase of "automated attacks" by bad actors meant not only that it caught more of the fake accounts, but that more of them slipped through the cracks.

As a result, the company estimates that 5% of its 2.4 billion monthly active users are fake accounts. This is up from an estimated 3% to 4% in the previous six-month report.

The increase shows the challenges Facebook faces in policing spam, fake news and other objectionable material. Even as Facebook's detection tools get better, so do the efforts by the creators of these fake accounts. Facebook attributed the spike in the removed accounts to "automated attacks by bad actors who attempt to create large volumes of accounts at one time." The company declined to say where these attacks originated, only that they were from different parts of the world.

Facebook employs thousands of people to review posts, photos, comments and videos for violations. Some content is also detected automatically, using artificial intelligence. Both humans and AI make mistakes, and Facebook has been accused of political bias as well as ham-fisted removals of posts discussing, rather than promoting, racism.

CEO Mark Zuckerberg has called for government regulation to decide what should be considered harmful content and to rule on other issues. He reiterated this call on Thursday, adding that different countries or regions could have different standards depending on "culture."

No checks, no balances



A thorny issue for Facebook is its lack of procedures for authenticating the identities of those setting up accounts. Only in instances where a user has been booted off the service and won an appeal to be reinstated does it ask to see ID documents.

While some have argued for stricter authentication on social media services, the issue is thorny. People including U.N. free expression rapporteur David Kaye say it's important to allow pseudonymous speech online for human rights activists and others whose lives could otherwise be endangered.