A lie can travel halfway around the world before the truth has got its boots on, or so the saying goes, and new research has sought to measure just how long it takes fact checking to catch up.

On average, it takes more than 12 hours for a false claim to be debunked online, according to two recent projects that compared how falsehoods and truths spread.

One study analyzed rumors on Twitter and found that a rumor that turns out to be true is often resolved within two hours of first emerging. But a rumor that proves false takes closer to 14 hours to be debunked.

Another study looked at how long it took for a fact check or debunking article to be published as a countermeasure to a fake story. It found “a characteristic lag of approximately 13 hours between the production of misinformation and that of fact checking”.

The studies used different methodologies and looked at different elements of the online rumor and misinformation ecosystem. But they both provide evidence that falsehoods spread for hours and take hold online before being debunked.

Both research groups say their findings highlight the need for better — and especially faster — approaches to countering online misinformation.

Project One: Rumorous Tweets

A group at Warwick University gathered true and false rumors related to nine recent news events, including a shooting on Parliament Hill in Ottawa, the Charlie Hebdo shootings, and claims that emerged and spread when Russian President Vladimir Putin was not seen publicly for 10 days. (The BBC reported that “His disappearance from public view had sparked rumours that he might have fallen ill, died, been removed in a coup, or once again become a father.”)

They identified 330 rumors and gathered close to 5,000 tweets that mentioned them. The team analyzed the content of the tweets to determine how each characterized the rumor — did it confirm, debunk or simply repeat the claim? They also examined whether the tweet was sent before or after the rumor’s veracity had been established. Not all 4,842 tweets were examined by hand, however; instead, the team selected a subset of tweets for each rumor case study, with an emphasis on tweets that garnered a high number of retweets.
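As an illustration only — the Warwick team’s actual tooling isn’t described in the article — the bookkeeping behind this annotation scheme can be sketched in a few lines of Python. The stance labels and sample tweets below are invented; the sketch simply shows tweets carrying a stance label, and the most-retweeted tweets being queued for hand annotation first:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RumorTweet:
    text: str
    stance: str    # "confirm", "debunk", or "repeat" (hypothetical labels)
    retweets: int

# Invented sample of annotated tweets for a single rumor.
tweets = [
    RumorTweet("Police confirm suspect is in custody", "confirm", 120),
    RumorTweet("Hearing reports of a second gunman", "repeat", 450),
    RumorTweet("Officials say there was no second gunman", "debunk", 60),
]

def stance_counts(annotated):
    """Tally how many tweets confirm, debunk, or merely repeat the rumor."""
    return Counter(t.stance for t in annotated)

def annotation_queue(annotated, n=2):
    """Pick the n most-retweeted tweets to annotate by hand first."""
    return sorted(annotated, key=lambda t: t.retweets, reverse=True)[:n]
```

Prioritizing by retweet count, as the researchers did, concentrates scarce human annotation effort on the tweets that actually shaped what most users saw.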

The researchers also had a group of journalists evaluate and annotate 2,695 rumorous tweets. The below table shows the number of rumors for different breaking news stories, and the number of threads annotated by journalists:

Here’s an example of an annotated Twitter thread related to the siege of a Sydney coffee shop:

One conclusion from the data is that true rumors are resolved much faster on Twitter than false rumors.

“While the median true rumour is resolved in about 2 hours, the median false rumour takes over 14 hours to be resolved,” they write.

They also found that “tweets reporting unverified rumours are more widely spread”. This aligns with previous research about how online news websites report on rumors: an unverified rumor holds more interest and generates more traffic and social shares. And a new rumor that offers the possibility of being true sparks immediate interest.

“Our analysis shows that tweets reporting unverified rumours spark a sudden burst of retweets within the very first minutes, showing that users tend to share early, unverified reports rather than later confirmations or debunks,” the researchers wrote.

They also found that tweets sent in support of an unverified claim tend to generate the most retweets. This also aligns with the other finding that a (potentially) true rumor is more interesting than a false one.

“We find that social media users generally show a tendency towards supporting rumours whose veracity is yet to be resolved,” the researchers wrote.

Project Two: Spread and Debunking of Fake News

The previous study at Warwick University relied on humans to examine and code data — labor-intensive work that is necessary to ensure each tweet is properly annotated.

At Indiana University, however, researchers are working to build Hoaxy, an automated system to analyze the spread of false narratives and the debunkings of them.

“Right now it’s a tool to give us a way to see how the competing information — misinformation and fact checking — spread online,” says Fil Menczer, one of the leaders of the project, which also runs Truthy.

The system is in its early stages. It gathers and indexes the content published by a list of known fake news websites and other sources of misinformation, and does the same for a set of fact-checking and debunking sites. It then matches each debunk to the originating story and tracks how each piece of content spreads through different networks.
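The article doesn’t spell out how Hoaxy performs that matching step. One plausible minimal approach — sketched below with invented headlines, and not a description of Hoaxy’s actual pipeline — is to pair each fact check with the fake story whose title it most resembles, using Jaccard similarity over word tokens:

```python
import re

def tokens(title: str) -> set:
    """Lowercased word tokens of a headline."""
    return set(re.findall(r"[a-z]+", title.lower()))

def jaccard(a: set, b: set) -> float:
    """Overlap of two token sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented headlines for illustration only.
fake_stories = [
    "Pope endorses candidate in shock announcement",
    "Scientists discover chocolate cures all disease",
]

def match_debunk(fact_check: str, stories, threshold=0.1):
    """Pair a fact check with its most similar fake story, if any clears the bar."""
    best = max(stories, key=lambda s: jaccard(tokens(fact_check), tokens(s)))
    if jaccard(tokens(fact_check), tokens(best)) >= threshold:
        return best
    return None
```

A similarity threshold keeps unrelated fact checks from being force-paired with the least-dissimilar fake story; a production system would likely also use shared links, timestamps and entities rather than titles alone.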

“At first it will just be something where we can look at this network and extract some statistical features,” Menczer says. “The people who talk about a piece of fake news, and those who say it’s fake — are they far away in the network?”

One finding, as previously noted, is that there is a lag of roughly 13 hours between the publication of a false report and the subsequent publication of a debunking. Menczer cautions this is an early determination based on the initial data gathered, and it requires more research.

But along with the temporal figure, their overall finding so far is that there is clearly more misinformation being produced, and at greater scale, than the related debunks. The graph below, for example, shows the volume of tweets about fake news stories versus the volume of fact-checking tweets:

“We find that, in absolute terms, misinformation is produced in much larger quantity than fact-checking content,” they write in a paper about the research.

There is a small industry of fake news websites that publish fake content on a daily basis, aimed at generating and monetizing web traffic. While fact checking is a growing field, it still produces less content on average than the fakers. It can’t keep up.

Menczer says in the research they identified a large number of Twitter accounts that were consistently posting fake news content.

“With fake news there are users who post a very large number of tweets, so in my opinion it could be people associated with the fake news sites, or bots, or accounts controlled by those people,” he said. “They are generating a lot of tweets but doing few RTs.”

By comparison, when they analyzed the accounts that tweet debunkings the researchers found that they tend to act much more like real humans. They retweet and quote reply. They interact, rather than pump out links.

“Taken together, these observations strongly suggest that rumor-mongering is dominated by few very active accounts that bear the brunt of the promotion and spreading of misinformation, whereas the propagation of fact checking is a more distributed, grass-roots activity,” they write.

Both projects recognize that more research is necessary to better understand the issue. But faced with the overwhelming flow of fake news and false information spreading across social media, and the fact that false rumors travel faster and farther than the truth, the small cadre of fact checkers is struggling to keep up.