This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the ways technology is influencing how we think about speech.

“REMINDER, BIDEN AND BLOOMBERG VOTERS, #SUPERTUESDAY PRIMARIES HAVE BEEN DELAYED TO WEDNESDAY DUE TO CORONAVIRUS CONCERNS. PLEASE RETWEET TO SPREAD THE WORD. #TEAMBIDEN #TEAMBLOOMBERG” read one of the misleading (and since deleted) tweets screenshotted and reported on by the Daily Beast on Tuesday. It was just one example out of several in which Twitter users, whether seriously or jokingly, tried to take advantage of people’s fears about the spread of the novel coronavirus to influence voter behavior with references to voters’ age (“As the virus disproportionately impacts the elderly, Biden supporters are urged to remain indoors”) and rescheduled primaries.

Even before Super Tuesday, social media was awash in false and misleading information about the spread of COVID-19, the appropriate precautionary measures, and the virus’s economic impacts. At times, it’s almost (almost!) possible to forget we’re in the middle of a presidential election.

But there are certain points of overlap between the election and the virus, as in the cases of purported Bernie Sanders supporters tweeting out fake updates about the primaries, and coordinated Russian efforts to interfere with the elections. While Russia leverages COVID-19 for online disinformation, China has also been using the internet to control the story to its own advantage, through aggressive censorship of keywords related to the virus on Chinese social media and messaging platforms. These different approaches to manipulating information about the coronavirus are typical of the two countries’ past uses of online infrastructure to control content both within and beyond their own borders. They speak to the myriad, interconnected risks associated with the virus beyond the immediate medical ones.

In late February, State Department officials accused Russia of using fake profiles on Twitter, Facebook, and Instagram to spread false rumors about the coronavirus—for instance, that the CIA had developed the virus as a biological weapon. “By spreading disinformation about coronavirus, Russian malign actors are once again choosing to threaten public safety by distracting from the global health response,” Philip Reeker, acting assistant secretary of state for Europe and Eurasia, told the Guardian.

Meanwhile, Kevin Collier at NBC reported that the U.S. government and Facebook were preparing for the possibility that coronavirus conspiracy theories or other false information about the disease’s spread might be leveraged to dissuade voters from voting in the 2020 elections. Facebook has already taken steps to try to monitor inaccurate content and exploitative ads related to the virus on its platform, including showing “educational pop-ups” to everyone who searches for information about the coronavirus on Facebook and blocking ads that take advantage of people, noting that “ads for face masks that imply they are the only ones still available or claim that they are guaranteed to prevent the virus from spreading will not be allowed to run on our platforms.”

While Facebook is trying to root out and remove misinformation about false cures or treatment options (for instance, that “drinking bleach cures the coronavirus”), China’s social media and messaging platforms have been censoring huge swaths of coronavirus-related content for months, according to a report released this week by researchers at the University of Toronto Citizen Lab titled “Censored Contagion: How Information on the Coronavirus Is Managed on Chinese Social Media.”

The researchers analyzed two Chinese platforms to assess the extent of coronavirus-related censorship in China: YY, a livestreaming platform, and the Tencent-owned messaging service WeChat. Both platforms censored content related to the virus through a set of keywords. In the case of YY, those keywords were stored within the app itself, so the researchers were able to reverse engineer the app and find the complete list, which included Chinese characters for keywords such as “Unknown Wuhan Pneumonia” and “Wuhan Seafood Market” beginning as early as Dec. 31—weeks before the Chinese government disclosed the full extent of its knowledge of the disease to the public, as the report points out.

WeChat, by contrast, conducts censorship remotely, through servers owned by the company, which makes its censorship keywords much trickier to track. To figure them out, the researchers pulled content from articles listed on the front pages of a set of 31 news sources based in China, Taiwan, and Hong Kong, including People’s Daily, the South China Morning Post, China National Radio, and China Central Television. The researchers then sent the headlines and text extracted from those articles to a WeChat group chat made up of three different test accounts: one registered to a phone number in mainland China and two registered to Canadian phone numbers.

It’s old news by now that China engages in extensive online censorship, yet it’s still rather remarkable to see the Citizen Lab results laid out side by side: the messages received by the Canadian accounts and blocked for the Chinese account in the very same group chat, about, for instance, CDC recommendations that people wash their hands. By repeating these scripted chats over and over, the researchers were able to identify hundreds of keyword combinations blocked by WeChat that they organized into seven different categories. The first category consists of keywords related to Chinese central leadership, including Chinese phrases for “Xi Jinping goes to Wuhan” and the combination of such sets of words as “Epidemic + Pneumonia + Xi Jinping + Central” or “Epidemic + Chairman Xi + Unity.” The keyword combinations trigger censorship when every word in a set is used at some point in a message.
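The all-words-present rule the researchers describe can be sketched in a few lines of code. This is an illustrative reconstruction, not the actual WeChat implementation, and the keyword sets below are stand-ins drawn from the combinations the report documents, transliterated into English:

```python
# Sketch of the combination-matching rule described in the Citizen Lab
# report: a message is blocked only if EVERY keyword in at least one
# blocked combination appears somewhere in it. These keyword sets are
# illustrative stand-ins, not the real (Chinese-language) blocklist.

BLOCKED_COMBINATIONS = [
    {"epidemic", "pneumonia", "xi jinping", "central"},
    {"wuhan", "ccp", "crisis", "beijing"},
]

def is_censored(message: str) -> bool:
    """Return True if the message matches any blocked keyword combination."""
    text = message.lower()
    return any(
        all(keyword in text for keyword in combination)
        for combination in BLOCKED_COMBINATIONS
    )
```

Note that under this rule a message mentioning only some of the words in a combination—say, “epidemic” and “pneumonia” without “Xi Jinping”—would pass through, which is consistent with the researchers’ observation that censorship was triggered only when a full set of keywords co-occurred in a single message.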

Another group of keyword combinations was associated with Chinese government actors and policies. These included “Muckraking Wuhan Virus Lab + Successful history of lab director,” “Wuhan + CCP + Crisis + Beijing,” “Wuhan + Obviously + Virus + Human-to-human transmission,” “US Centers for Disease Control + Coronavirus,” and “Online teaching + Strongly + Promote.” These were some of the most striking combinations because of how strongly they seem to imply that even accurate information about the transmission and prevention of the coronavirus was being aggressively blocked by Chinese platforms. This was even more true of the category of keywords that the researchers designated as being linked to “factual information and discussions of COVID-19,” which included such combinations as “Pneumonia + Disease control and prevention + Virus + Medical journal” and “Relevant + Disease control + Travel ban + Virus.”

The report authors noted, “Because of social media’s integral role in Chinese society and its uptake by the Chinese medical community, systematic blocking of general communication on social media related to disease information and prevention risks substantially harming the ability of the public to share information that may be essential to their health and safety.”

Other keyword combinations pertained to the spread of the virus in Hong Kong, Macau, and Taiwan (e.g., “Masks + Taiwan + Export + Country”), “speculative content” about the disease (such as “Death case + Pneumonia + Death toll” or “Poisonous City + Wuhan”), the deceased Dr. Li Wenliang who raised early concerns about the virus (e.g., “Epidemic + Virus + Li Wenliang + Central government”), and collective action “calls for petitions and public mobilization” (such as “Wuhan + Liberate”).

Both the Russian and Chinese approaches to online coronavirus content are worrying in their own ways. True to form, Russia’s approach to online manipulation seems more focused on influencing behavior and causing panic outside its own borders, while China’s strict censorship of coronavirus-related content appears to be largely internally directed and aimed at preventing panic. That’s not to say China is not interested in influencing outside opinions—it has even taken a page from Russia’s playbook in the past year, deliberately seeking to spread disinformation about the Hong Kong protests. But when it comes to the coronavirus, China’s online efforts have been primarily inward-facing, perhaps because of how many of the cases have occurred in China and how significant its own domestic crisis is at the moment. Russia and China have long had very different approaches to controlling and creating online content in order to manipulate particular audiences, even as Russia seems to be tightening its domestic internet controls and China may be expanding its capabilities to include disinformation.

For the moment, however, each country still has very different strengths when it comes to manipulating information flows, as their efforts around the coronavirus demonstrate. Unfortunately for people trying to combat those efforts, each requires a very different set of technical tools and mitigations to counter effectively. But keeping track of what each country is doing is an important first step in thinking through how best to respond and ensure the free flow of accurate and helpful information about the virus worldwide.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.