When it comes to combating misinformation, research shows that it's more effective for authoritative figures to present accurate facts early and routinely alongside misinformation, rather than to try to negate every piece of misinformation after the fact by labeling or calling it out as false.

Why it matters: The research provides a roadmap for more effective and efficient management of the coronavirus "infodemic" by health experts, government officials, internet platforms and news companies.

1. Proactive messaging: According to research from Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania and co-founder of FactCheck.org, gaps in the public's background knowledge about common-sense flu cures, like whether vitamin C prevents viruses, show an "ongoing need for effective communication of needed information long before a crisis."

In an interview with Axios, Hall Jamieson argues that health experts have done a good job in the past of proactively messaging to the population about the benefits of hand-washing to prevent flu-like viruses, like the common cold.

As a result, the public was not susceptible to misinformation around hand-washing the way it was around vitamin C being a cure for coronavirus. (Despite decades-old myths, vitamin C remains an unproven cure even for the common cold.)

2. Pre-bunking: Australian psychologist and professor Stephan Lewandowsky, who chairs the Cognitive Psychology department at the University of Bristol, argues that if people are made aware of the flawed reasoning found in conspiracy theories, they may become less vulnerable to such theories.

To an extent, this is similar to the approach some social media companies have taken by posting warning labels about misinformation alongside content in a news feed, which users encounter before deciding whether to click into the article.

Lewandowsky notes in his new conspiracy theory handbook published in March that when it comes to certain content, like anti-vaccination conspiracy theories, pre-bunkings "have been found to be more effective than debunking" after-the-fact.

3. Label misinformation at the source level: In order to avoid chasing thousands, if not millions, of pieces of misinformation during an "infodemic," Steven Brill and Gordon Crovitz, co-CEOs of NewsGuard, argue it's better to rate the sources of misinformation that are repeat offenders, like certain websites or authors, rather than rating individual pieces of content.

"Any of the websites now promoting COVID-19 hoaxes, like 5G causes, were publishing hoaxes a few months ago about 5G causing cancer," says Crovitz. "It underscores the importance of labeling misinformation at the domain layer. It makes it much harder for those hoax websites to succeed in promoting new hoaxes."

Brill notes that by using humans to manually rate sources, NewsGuard is able to avoid the lack-of-transparency criticisms that platforms often receive for using artificial intelligence and opaque algorithms to identify misinformation.

"That's how you achieve scale is rating the reliability of sites, not individualizing articles," says Crovitz.

4. Go where fake news spreads: According to Hall Jamieson, it's especially important that health care officials provide context in the venues where people generally receive misinformation.

It's for that reason, she says, that Anthony Fauci is smart to appear on Sean Hannity's Fox News opinion show, as well as Chris Matthews' Sunday news program.

"If you don't go into same venue where the misinformation originally spread, than you're not likely to reach audience the audience that heard it originally," she says.

5. The 10% rule: Some experts, including Hall Jamieson, say it's better to wait until a piece of misinformation reaches a 10% penetration level among the population before debunking it. Otherwise, you risk unintentionally spreading a rumor that might never have reached the point of being truly problematic.

Brill and Crovitz push back on this rule, arguing that if it's possible to provide context around misinformation before it reaches that penetration level, you should.

6. Prioritizing misinformation: Hall Jamieson says that in addition to understanding what has crossed the threshold that warrants debunking, health officials, policymakers, news organizations and others need to evaluate how harmful certain forms of misinformation are when determining how much to invest in providing context.

The recent misinformation around using disinfectants to stop COVID-19 is a good example of the type of misinformation that warrants immediate context and resources from health officials to debunk, versus, for example, misinformation around where the virus came from.

Yes, but: Many of these efforts don't acknowledge the fact that people have become increasingly biased toward information that backs their political viewpoint, regardless of its validity.

"The big question regarding misinformation as it pertains to coronavirus would be the degree to which it has been politicized," says Joshua Tucker, a professor of politics and co-director of the Center for Social Media and Politics at New York University.

"We have found in our research that people are much less likely to correctly identify false or misleading news as such if it aligns with your own political preferences."

Be smart: To an extent, tech platforms have taken this into account as well by removing misinformation that they think could cause real-world health harms.

"Facebook are being more aggressive about actually removing certain types of COVID-related misinformation/disinformation, and not just providing correctives, which I think is a welcome development," says Philip Napoli, a professor at Duke University's Sanford School of Public Policy.

The big picture: When society began to seriously reckon with "fake news" and misinformation after the 2016 election, there were many efforts to impose binary solutions by identifying information as true or false, and blocking or removing it accordingly. Experts say this is problematic for two reasons:

The backfire effect: Some experts have found that when presented with a binary label, consumers will be incentivized to click something that's labeled "false" simply out of curiosity. Lewandowsky says he was never able to prove that entirely, but has concluded that "if people are presented with explanations affirming facts or refuting myths, belief in facts may be sustained over time."

The assumption that everything's been evaluated: "When someone sees something labeled as false, they assume everything else is true. The problem is that a lot of stuff that's not true exists and just hasn't been flagged yet," says Hall Jamieson.

Between the lines: Tech companies have struggled to figure out the best way to flag misinformation without incentivizing people to click further into it.

In 2017, Facebook said it would no longer use "Disputed Flags" — red flags next to fake news articles — to identify fake news for users, because they caused more people to click on the debunked posts. Instead, the company now uses "warning labels," which appear to be working much better. According to the company, only 5% of people who were exposed to those labels went on to view the original content.

In January, Twitter began guiding users to authoritative sources using a search prompt to make it easier for users to encounter facts while browsing tweets in their timelines. The company has also expanded its verification policies to make it easier to identify when information is coming from credible sources.

YouTube has developed fact-check information panels that offer users context about misinformation as they encounter it in videos that don't violate the platform's policies for removal.

The bottom line: There's no silver bullet for solving the misinformation crisis surrounding the coronavirus pandemic, but more conclusive research on the topic, specifically as it pertains to the internet age, can serve as a helpful roadmap moving forward.