
Facebook now rates trustworthiness of users on a scale from 0 to 1

In the ongoing crusade against fake news and its real-life impact, political or otherwise, Facebook has begun implementing a program that assesses the trustworthiness of users and the content they share on a scale of zero to one.

Per Tessa Lyons, the Facebook product manager charged with curbing misinformation on the platform, the system was put in place to account for users who report content as false irrespective of its factual accuracy, simply because it doesn't align with their beliefs. In an email to The Washington Post, she says:

“[It's] not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.”

In a follow-up email, Lyons further explains why such a system is in place:

"One of the signals we use is how people interact with articles. For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”

Given that Facebook forwards reported posts to third-party fact-checkers, she stresses the importance of building such systems to streamline the process. She goes on to explain that there is no single, unified number assigned to every user to rank them on an absolute scale; rather, the trustworthiness rating is just one among thousands of signals Facebook uses to analyze usage and behavioral patterns.
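To make the weighting idea from Lyons' quote concrete, here is a minimal sketch of how a reputation signal like this might work. To be clear, Facebook's actual implementation is not public: the class, field names, and the simple smoothed-ratio formula below are all hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of reputation-weighted false-news reporting.
# All names and the weighting formula are invented for illustration;
# Facebook's real system is not public.

from dataclasses import dataclass


@dataclass
class Reporter:
    confirmed_reports: int = 0   # reports later upheld by fact-checkers
    total_reports: int = 0       # everything the user has ever flagged

    @property
    def trust_score(self) -> float:
        """Score in [0, 1]: the fraction of past reports that held up.

        Laplace smoothing keeps brand-new users near a neutral 0.5
        instead of swinging to 0 or 1 after a single report.
        """
        return (self.confirmed_reports + 1) / (self.total_reports + 2)


def weighted_report_signal(reporters: list[Reporter]) -> float:
    """Sum of reports against an article, each weighted by reporter trust.

    A few flags from historically accurate reporters can then outweigh
    a pile of flags from users who report indiscriminately.
    """
    return sum(r.trust_score for r in reporters)


# Usage: an accurate reporter vs. an indiscriminate one.
careful = Reporter(confirmed_reports=9, total_reports=10)   # score ~0.83
spammer = Reporter(confirmed_reports=1, total_reports=50)   # score ~0.04
print(weighted_report_signal([careful]))            # ~0.83
print(weighted_report_signal([spammer, spammer]))   # ~0.08
```

The point of the sketch is the asymmetry Lyons describes: one report from a user with a strong track record carries far more weight than repeated reports from someone who flags everything they disagree with.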

There's not much transparency to speak of here, however: it's unclear whether all users are assigned such a score or only a specific subset of them, and it's also unknown whether there are other criteria the company uses to evaluate users on this scale. Lyons declined to comment on these points, unwilling to potentially tip off malicious parties.

The opacity of the entire operation is concerning to some. Per Claire Wardle, director of First Draft, a Harvard Kennedy School research lab and Facebook fact-checking partner that studies the real-life impact of fake news:

"Not knowing how [Facebook is] judging us is what makes us uncomfortable. But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.”

This move comes as several tech companies ramp up efforts to purge political interference and other malpractice from their platforms, not simply by removing proven bots, but by better understanding user intent through increasingly sophisticated, granular algorithms. It also must not be forgotten that algorithms of a similar nature have historically been used to mine user data for sale to advertisers, so the ramifications such platforms are having on us now are more interesting (and morbid) than ever.

Source: The Washington Post
