China has framed Hong Kong’s pro-democracy protests as a foreign-backed movement that uses “thugs” to threaten the mainland’s sovereignty. The narrative may be working within the Great Firewall. Outside China? Not so much, notes analyst Echo Huang.

Beijing’s approach has been quite clumsy, experts say, in sharp contrast to Russia’s success in spreading disinformation on Western social media. Rather than blending seamlessly into the online sphere, as Moscow has skillfully done, China’s efforts are easy to identify as bizarre or even downright false, she writes for Quartz.

“China is much more primitive in terms of techniques,” said Haifeng Huang, assistant professor of politics at the University of California’s Merced campus, whose research focuses on opinion shaping in authoritarian settings. “…The goal is different: China is more about self-defense, Russia is more about actively going out, targeting foreign events. China’s goal is to influence Western discourses about Chinese events.”

China will likely continue using foreign social networks for issues beyond Hong Kong. With its growing expertise in artificial intelligence, visible in recent popular apps, “deep fakes” could become part of China’s disinformation arsenal, Huang adds for Quartz. The rise of globally popular apps such as TikTok, one of China’s first big hits overseas, could also play a role: the Hong Kong protests have an unusually light presence on TikTok, spurring suspicion of censorship and manipulation.

“China’s investments in AI may lift its capacity to target and manipulate international social media audiences,” the Australian Strategic Policy Institute (ASPI) noted in a report this month analyzing Twitter activity aimed at discrediting the protests.

Members of Congress, former heads of intelligence agencies, and big tech leaders gathered this week at a half-day symposium (above) dedicated to finding ways to stop the burgeoning threat to democratic norms and to bridging the gap between policymakers and experts, notes one account. The event was hosted by FEC Chairwoman Ellen Weintraub, PEN America and the Global Digital Policy Incubator (GDPI) at Stanford’s Cyber Policy Center.

As technological manipulation of information advances and disinformation becomes an increasingly common tactic, the prospect that public trust in democratic processes will further disintegrate looms large, PEN America has observed in the 2019 report Truth on the Ballot: Fraudulent News, the Midterm Elections, and Prospects for 2020 and its 2017 report Faking News: Fraudulent News and the Fight for Truth.

There is insufficient outrage about the threat, especially of disinformation emanating from foreign governments, said GDPI’s executive director Eileen Donahoe, a National Endowment for Democracy board member. Preventing the spread of disinformation must also include media literacy and public awareness, she said.

While participants in the event explored a range of potential solutions to the disinformation threat, underneath the policy options and proposed government actions was an almost plaintive concern that people aren’t involved enough, Poynter’s Daniel Funke and Susan Benkelman observe.

“The animating energy for us today was our shared sense that there has been an inadequate level of public outrage or official response to the foreign disinformation threat,” said Donahoe. She and other participants said a “society-wide” approach is needed.

It is imperative that artificial intelligence evolve in ways that respect human rights, Donahoe and Megan MacDuffee Metzger wrote recently for the NED’s Journal of Democracy. Standards found in landmark UN documents can help with the task of making AI serve rather than subjugate human beings, they contend.

While panelists agreed there is no “silver bullet,” PEN adds, there was broad consensus on the need to protect free expression while also ensuring voters are equipped with the tools they need to differentiate between what’s real and what’s fake; training journalists and other stakeholders in how to prevent misinformation from spreading; and exploring new ways for policymakers to work with social media companies, currently one of the largest elements of the U.S. economy without a primary regulator.