Here’s a system that might help, and it is based on something Facebook already does to slow the spread of fake news. Currently, Facebook asks independent fact-checking organizations from across the political spectrum to identify false and misleading information. Whenever users try to post something that has been identified as fake news, they are confronted by a pop-up that explains the problems with the story and asks them to confirm whether they’d like to continue. None of these users are prevented from posting stories whose facts are in dispute, but they are made aware that what they are sharing may be false or misleading.

Facebook has been openly using this system since December 2016. Less openly, it has also been keeping tabs on how often its users flag stories as fake news, and, using this feature, it has been calculating the epistemic reliability of those users. The Washington Post reported in August that Facebook secretly computes scores representing how often users’ flags align with the analysis of independent fact-checkers. Facebook uses this data only internally, to identify abuse of the flagging system, and does not release it to users. I can’t find out my own reputation score, or the scores of any of my friends.

This system, and the secrecy around it, may come across as a bit creepy (public trust in Facebook has been seriously and justifiably damaged), but I think Facebook is on to something. Last year, in a paper published in the Kennedy Institute of Ethics Journal, I proposed a somewhat different system. The key difference between my system and the one Facebook has implemented is transparency: Facebook should track and display how often each user decides to share disputed information after being warned that the information might be false or misleading.

Instead of using this data to calculate a secret score, Facebook should display a simple reliability marker on every post and comment. Imagine a little colored dot next to the user’s name, similar to the blue verification badges Facebook and Twitter give to trusted accounts: a green dot could indicate that the user hasn’t chosen to share much disputed news, a yellow dot could indicate that they do it sometimes, and a red dot could indicate that they do it often. These reliability markers would allow anyone to see at a glance how reliable their friends are.
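The marker scheme described above is simple enough to sketch in a few lines of code. The function and thresholds below are my own illustrative assumptions, not values proposed in the essay: it takes a user’s count of disputed stories shared after a warning, divides by their total shares, and maps the rate to a colored dot.

```python
def reliability_marker(warned_shares: int, total_shares: int,
                       yellow_threshold: float = 0.05,
                       red_threshold: float = 0.20) -> str:
    """Map a user's rate of sharing disputed stories (after being
    warned) to a colored reliability marker.

    The threshold values are hypothetical, chosen only to make the
    green/yellow/red idea concrete.
    """
    if total_shares == 0:
        # No sharing history: no evidence of unreliability yet.
        return "green"
    rate = warned_shares / total_shares
    if rate >= red_threshold:
        return "red"      # shares disputed news often
    if rate >= yellow_threshold:
        return "yellow"   # shares disputed news sometimes
    return "green"        # rarely shares disputed news
```

A user who ignored the warning on 3 of their last 10 shares would get a red dot under these example thresholds; one who never ignored it would stay green. The design choice worth noting is that the marker reflects behavior after a warning, not mere exposure to disputed stories.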

There is no censorship in this proposal. Facebook needn’t bend its algorithms to suppress posts from users with poor reliability markers: Every user could still post whatever they want, regardless of whether the facts of the stories they share are in dispute. People could choose to use social media the same way they do today, but now they’d have more information whenever they encounter a new claim. They might glance at the reliability marker before nodding along with a friend’s provocative post, and they might think twice before passing on a weird story from a friend with a red reliability marker. Most important of all, a green reliability marker could become a valuable resource, something to put on the line only in extraordinary cases — just like a real-life reputation.

There’s technology behind this idea, but it’s technology that already exists. It’s aimed at assisting rather than algorithmically replacing the testimonial norms that have been regulating our information-gathering since long before social media came along. In the end, the solution for fake news won’t be just clever programming: it will also involve each of us taking up our responsibilities as digital citizens and putting our epistemic reputations on the line.

Regina Rini (@rinireg) teaches philosophy at York University in Toronto, where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition.

