SINGAPORE – Social media company Facebook has stood by its policy of not removing most content deemed to contain false information.

Speaking at the community standards forum here on Tuesday, Facebook’s global head of policy management Monika Bickert said the company does not have a policy of taking down false content, as it would be difficult in many cases to determine what is true or false.

“We don’t have a policy that you have to get your facts right on Facebook. For one thing, that would be extremely hard for us to police because we won’t necessarily know if a specific piece of information is true or not,” she said.

“There’s also a question on whether or not it’s appropriate for a private company to make a determination on whether something is true or false,” Bickert added.

The exceptions to this rule are false information that suppresses voting rights or contributes to imminent violence, said Bickert.

“If there’s false information that is related to that, we will also remove that,” she added.

But the Facebook official said the company has instituted reforms to address the spread of misinformation on the social media platform.

“Generally, our approach is to surface related information and counter their virality because that is what social media brings into the equation – amplification and virality,” said Bickert. “We will generally counter the virality and surface educational content.”

Facebook has launched a third-party fact-checking initiative that taps media organizations to verify claims made in posts shared on the social media platform.

Posts determined to contain false information are “down-ranked,” lessening their visibility in the news feed.

Suggested articles that provide additional context and background also appear next to the questionable article.

While Facebook does not take down false content, the company has suspended some accounts that shared it for violating other policies, such as engaging in coordinated inauthentic behavior and sharing spam.

Its community standards report released last Friday revealed that more than three billion pieces of spam content were removed from the platform between January and September this year.

Over 2.1 billion fake accounts were also removed during that period.

“Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk,” said Facebook vice president for product management Guy Rosen.

“Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at three to four percent of monthly active users,” he added.

Other content removed includes violations of policies on adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation, hate speech, terrorist propaganda, and violent and graphic content.

“Overall, we know we have a lot more work to do when it comes to preventing abuse on Facebook. Machine learning and artificial intelligence will continue to help us detect and remove bad content,” said Rosen.

“Measuring our progress is also crucial because it keeps our teams focused on the challenge and accountable to our work. To help us evaluate our process and data methodologies, we have been working with the Data Transparency Advisory Group, a group of measurement and governance experts. We will continue to improve this data over time, so it’s more accurate and meaningful,” he added.