Facebook has revealed that it’s downgrading content that makes dubious health claims, including posts that try to sell or promote “miracle cures.”

Big technology platforms have faced growing criticism over the spread of fake or misleading content. Reports emerged last year that Facebook had been featuring homemade cancer “cures” more prominently than genuine information from renowned organizations, such as cancer research charities. And a few months back, a separate report found that YouTube videos were promoting bleach as a cure for autism.

Facebook also recently said it would crack down on anti-vaccine content.

Fight against misinformation

The fight against digital misinformation is ongoing, and it isn’t limited to spurious health cures. Back in January, YouTube announced plans to curb conspiracy theory video recommendations, including claims that the moon landings were faked and the Earth is flat. At the time, YouTube also confirmed that it would reduce video recommendations for phony miracle health cures, another indication of the extent of its fake information problem.

Facebook’s latest announcement appears to have been in response to a Wall Street Journal investigation into the spread of bogus cancer treatments on both Facebook and YouTube, with Google’s video-streaming offshoot telling the publication that it had cut off advertising revenue for such videos.

“In order to help people get accurate health information and the support they need, it’s imperative that we minimize health content that is sensational or misleading,” Facebook product manager Travis Yeh wrote in a blog post.

Yeh said the company made “two ranking updates” last month to reduce the visibility of posts that exaggerate or sensationalize particular health-related remedies. As part of this effort, Facebook will specifically target posts that attempt to sell products and services based on such claims — anything from a dubious cancer cure to a pill claiming to help a user lose weight.

“In our ongoing efforts to improve the quality of information in News Feed, we consider ranking changes based on how they affect people, publishers, and our community as a whole,” Yeh added. “We know that people don’t like posts that are sensational or spammy, and misleading health content is particularly bad for our community.”

As with other content-moderation initiatives on Facebook, the company is taking an automated approach to downgrading and has identified common phrases to “predict which posts might include sensational health claims or promotion of products with health-related claims,” Yeh said.
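To make the phrase-based approach Yeh describes concrete, here is a toy sketch of how flagged phrases could feed into feed ranking. Everything here — the phrase list, the scoring, and the ranking function — is invented for illustration; Facebook has not published its actual classifier, which is certainly far more sophisticated than simple substring matching.

```python
# Hypothetical phrase list; Facebook's real signals are not public.
SENSATIONAL_PHRASES = [
    "miracle cure",
    "doctors don't want you to know",
    "cures cancer",
    "lose weight fast",
]

def sensational_score(post_text: str) -> int:
    """Count how many flagged phrases appear in a post (case-insensitive)."""
    text = post_text.lower()
    return sum(1 for phrase in SENSATIONAL_PHRASES if phrase in text)

def rank_feed(posts: list[str]) -> list[str]:
    """Order posts so those matching more flagged phrases sink lower in the feed."""
    return sorted(posts, key=sensational_score)
```

In a system like the one described, a score such as this would be only one input among many to the News Feed ranking model, demoting matching posts rather than removing them outright.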

Of course, if past efforts are anything to go by, this could turn into a game of whack-a-mole — content creators and promoters typically find ways to circumvent such algorithmic filters. Today’s news comes just a day after an independent audit found that Facebook’s policy on white nationalist content does not go far enough, as it only prohibits support for “white nationalism” in which specific terminology is used. “The narrow scope of the policy leaves up content that expressly espouses white nationalist ideology without using the term ‘white nationalist,’” the auditors wrote. “As a result, content that would cause the same harm is permitted to remain on the platform.”