Facebook has begun assigning ‘trust ratings’ to users as part of its attempts to fight ‘fake news’ and help identify ‘malicious actors’ on the platform — but it won’t tell users how well or how poorly they have scored.

The company has revealed that it is tracking the behavior of its users and using that information to assign them individual trust ratings, the Washington Post reported, while admitting that the details of how the new credibility system works are “highly opaque”.


The social media giant said earlier this year that it was rolling out the trust-rating system for media outlets on the platform, which aimed to rank news websites based on the quality and trustworthiness of their output — and in turn to rank the posts of better-rated or more ‘trustworthy’ news sites higher on users’ feeds.

As part of the new rating process, Facebook watches how individual users interact with articles posted on the platform, according to Tessa Lyons, who is heading up the platform’s fight against fake news. If someone reports an article as false and a third-party fact-checker later confirms it is false, that person’s future reports will be taken more seriously than those of someone who “indiscriminately” flags stories as fake news, she said.

But there are obvious pitfalls to this kind of system, with Lyons admitting that it is “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.”

It is not clear what individual user ratings are used for by Facebook beyond that example. It is also not known how exactly the ratings are assigned and whether every Facebook user has one.

Lyons said the ratings are not meant to be “an absolute indicator” of credibility, but are merely “one measurement among thousands of new behavioral clues” used by Facebook to understand individual behavior, which is hardly any more comforting.

“A. This is literally an episode of @blackmirror. B. Facebook rating people on trustworthiness is like Harvey Weinstein running a sexual violence awareness campaign.” — Brianna Wu (@Spacekatgal), August 21, 2018

The Facebook rating system was quickly compared online to the invasive “social credit” system being developed by the Chinese government, which is set to become mandatory in 2020 and will use social media to analyze citizens’ online habits, assigning each a score. Under that system, individual citizens’ scores will even be used to determine whether they can take out a loan or use public transport.

Asked to divulge any of the other indicators Facebook uses to rate users’ credibility beyond how they interact with articles, Lyons declined, saying that it could lead to the system being gamed.

But concerns abound over Facebook’s ability to properly determine what does and does not constitute ‘fake news’ and credibility on its platform. The company has been criticized, for example, for partnering with the Atlantic Council think tank, which is funded by a number of NATO governments and arms manufacturers, as part of its efforts to fight inauthentic content.

The social media giant was forced to restore one of the pages belonging to Latin American news channel Telesur last week, after the page’s unexplained deletion sparked an uproar.