Computers can help test stories in all three of these ways. Vetting stories solely by the publications they appear in risks oversimplifying the web by boiling it down into just “good” and “bad” sources—but it’s very easily done. New York magazine’s Brian Feldman cooked up a Chrome extension that uses a modified version of a media professor’s list of “fake, false, regularly misleading, and otherwise questionable ‘news’ organizations” to blacklist certain domains on the web. When you visit one of the blacklisted sites with the extension installed, your browser pops up a warning message. (For a taste of why this sledgehammer approach isn’t ideal, scroll through some of the comments on the extension’s download page.)
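That blacklist approach is mechanically simple, which helps explain why it's "very easily done." A minimal sketch, in Python rather than the extension's JavaScript, of what such a domain check amounts to (the domains here are placeholders, not entries from the professor's actual list):

```python
# Minimal sketch of a domain-blacklist check like the one the
# extension performs. The domains below are illustrative placeholders.
from urllib.parse import urlparse

BLACKLIST = {"example-fake-news.com", "questionable-source.net"}

def should_warn(url: str) -> bool:
    """Return True if the page's domain appears on the blacklist."""
    host = urlparse(url).hostname or ""
    # Match the listed domain even when the page sits on a subdomain.
    return any(host == d or host.endswith("." + d) for d in BLACKLIST)
```

The sledgehammer quality is visible in the code itself: every page on a listed domain triggers the warning, no matter what the page actually says.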

Watching a story or a piece of information make its way across a social network is a more nuanced way of evaluating its trustworthiness—but it also demands more resources. This is where computers really come in handy. In 2010, a trio of researchers at Yahoo (including Castillo, who worked there at the time) studied how Twitter users in Chile responded to an enormous earthquake that rattled some of the country’s most populated regions. They found that tweets containing false rumors were far more likely to be shared skeptically—that is, with commentary that denies or questions the rumor—than tweets that contained confirmed truths.

In a follow-up paper published the next year, the researchers broadened their findings to identify characteristics that signal whether a piece of information being shared is true: Factually accurate tweets about a news story are generally retweeted by users who’ve tweeted a lot in the past, for example, and by users who have more followers. Another study, this one from researchers at Indiana University Bloomington, found that examining the relationships between users who tweet a particular story can reveal whether it’s spreading organically or being helped along by bots and offline collusion.

“The task here is to discover anomalies in the way a content is propagating that makes it different from the way in which real contents propagate,” Castillo wrote in an email.

(Facebook is using a version of social-media vetting on its site right now, as I detailed in a story last month. It uses algorithms to look for posts that get a lot of pushback from a person’s Facebook friends—think comments with links to Snopes articles debunking the poster’s claim—and suppresses future posts with the same link. Mark Zuckerberg has written that Facebook is considering implementing “technical systems to detect what people will flag as false before they do it themselves,” but a spokesperson for the company wouldn’t comment on its progress toward that goal.)
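The pushback signal Facebook is described as using can be sketched in a few lines. This is an assumption-laden illustration of the idea—count comments that link to fact-checking sites, then down-rank future posts of the same link—not Facebook's actual system; the domain list and threshold are placeholders:

```python
# Hedged sketch of the "pushback" signal described above. The
# fact-check domain list and the suppression threshold are
# assumptions, not Facebook's real values.
FACT_CHECK_DOMAINS = {"snopes.com", "politifact.com"}

def count_debunk_comments(comments: list[str]) -> int:
    """Count comments that link to a known fact-checking site."""
    return sum(
        any(domain in comment for domain in FACT_CHECK_DOMAINS)
        for comment in comments
    )

def should_suppress(link_stats: dict, min_debunks: int = 10) -> bool:
    """Down-rank future posts of a link that drew heavy pushback."""
    return link_stats["debunk_comments"] >= min_debunks
```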

Rather than waiting for a rumor or a story to spread across the internet to tell whether or not it’s true, some engineers and researchers are trying to analyze news in real time. Full Fact, a UK-based fact-checking outfit, published a report this summer outlining the state of the art in fact-checking. It was optimistic about the future of computer-aided fact-checking, sporting a cover emblazoned with the phrase, “How to make fact-checking dramatically more effective with technology we have now.”