1. What is disinformation?

It’s often defined as false content spread with the specific intent to deceive, mislead or manipulate. (That’s different from misinformation, which is erroneous but spread unintentionally.) Disinformation can take the form of legitimate-looking news stories, tweets, Facebook or Instagram posts, advertisements and edited recordings distributed on social media or by messaging app. A new worry is what are called deepfakes: video or audio clips in which computers can literally put words in someone’s mouth.

2. What’s different in the internet age?

Barriers to mass communication are lower. With platforms such as Facebook and Twitter, modern-day purveyors of disinformation need only a computer or smartphone and an internet connection to reach a potentially huge audience -- openly, anonymously or disguised as someone or something else, such as a genuine grassroots movement. In addition, armies of people, known as trolls, and so-called internet bots -- software that performs automated tasks quickly -- can be deployed to drive large-scale disinformation campaigns. A quarter of all tweets about climate on an average day are produced by bots that promote “denialism” of the science behind global warming, according to research by a Brown University professor cited by The Guardian.

3. What’s the harm?

If the global reach of social media were being used merely to spread messages of peace and harmony -- or just to make money -- maybe there wouldn’t be any. But the purposes are often darker. In what’s known as state-sponsored trolling, for instance, governments create digital hate mobs to smear critical activists or journalists, suppress dissent, undermine political opponents, spread lies and control public opinion.

4. Who produces disinformation?

Researchers at the University of Oxford this year found evidence of “social media manipulation campaigns” by governments or political parties in 70 countries, up from 28 countries in 2017, with Facebook being the top venue where the disinformation is disseminated. Discussion of government-directed campaigns usually starts with Russia. But the Oxford report singles out China as having become “a major player in the global disinformation order.” Along with those two countries, five others -- India, Iran, Pakistan, Saudi Arabia and Venezuela -- have used Facebook and Twitter “to influence global audiences,” according to the Oxford report.

5. What has China done?

In August 2019, Twitter and Facebook revealed a Chinese state-backed information operation launched globally to de-legitimize the pro-democracy movement in Hong Kong. Twitter said it had taken down 936 accounts that were “deliberately and specifically attempting to sow political discord in Hong Kong.” Facebook said it had found a similar Chinese government-backed operation and deleted fake accounts. It said it doesn’t want its services “to be used to manipulate people.”

6. What has Russia done?

A Rand Corp. study of the conflict in eastern Ukraine, which has claimed some 13,000 lives since 2014, found the Russian government under President Vladimir Putin ran a sophisticated social media campaign that included fake news, Twitter bots, unattributed comments on web pages and made-up hashtag campaigns to “mobilize support, spread disinformation and hatred and try to destabilize the situation.” Another Russian effort targeted the 2016 U.S. presidential election, reaching millions of American voters with phony posts and ads that sought to exploit divisions on hot-button issues.

7. Where else is this a problem?

Some examples:

• Before India’s 2019 elections, shadowy marketing groups connected to politicians used the WhatsApp messaging service to spread doctored stories and videos to denigrate opponents. The country also has been plagued with deadly violence spurred by rumors that spread via WhatsApp groups.

• A study of 100,000 political images shared on WhatsApp in Brazil in the run-up to its 2018 election found that more than half contained misleading or flatly false information. It’s unclear who was behind them.

• In countries such as Sri Lanka and Malaysia, fake news on Facebook has become a battleground between Buddhists and Muslims. In one instance in Sri Lanka, posts falsely alleging that Muslim shopkeepers were putting sterilization pills in food served to Buddhist customers led to a violent outburst in which a man was burned to death.

• In Myanmar, a study commissioned by Facebook blamed military officials for using fake news to whip up popular sentiment against the Rohingya minority, helping to set the stage for what UN officials have described as genocide.

8. How does digital disinformation work?

A blatant falsehood might spring up on something that resembles a legitimate news website -- with names such as newsexaminer.net or WorldPoliticus.com -- and go viral when it’s tweeted by someone with lots of followers or turned into a “trending” YouTube video. The most sophisticated disinformation operations use troll farms, artificial intelligence and internet bots -- what the Oxford researchers call “cyber troops” -- to flood the zone with social-media posts or messages to make a fake or doctored story appear authentic and consequential. Fake news can be a complete fabrication (the pope didn’t really endorse Donald Trump), but often there’s a kernel of truth that’s taken out of context or edited to change its meaning.

9. How are social-media companies responding?

Under pressure from lawmakers and regulators, Facebook and Google (a unit of Alphabet Inc.) have started requiring political ads in the U.S. and Europe to disclose who is behind them. Google also said it would no longer allow election ads to be targeted based on political affiliation on platforms including Google Search and its YouTube division. Twitter announced a ban on all political advertising, days after Mark Zuckerberg, Facebook’s chief executive officer, defended his company’s policy of not fact-checking ads from politicians. YouTube adjusted its “up next” algorithms to limit recommendations for suspected fake or inflammatory videos, a move it had resisted for years. WhatsApp now limits message forwarding to five people or groups at a time. Its parent company, Facebook, said it spent 18 months preparing for India’s 2019 election, blocking and removing fake accounts and partnering with outside fact-checkers (albeit relatively few). Facebook says it has developed artificial intelligence tools to help identify content that’s abusive or otherwise violates the site’s policies. In the wake of the March 15 shooting massacre in Christchurch, New Zealand, Facebook, Google and Twitter signed a voluntary agreement with world leaders pledging to fight hate speech online.

10. What are governments doing?

A Singapore law that took effect Oct. 2 allows for criminal penalties of up to 10 years in prison and a fine of up to S$1 million ($720,000) for anyone convicted of disseminating “falsehoods that affect the public interest,” which the government itself identifies. In November Facebook added a disclaimer to a post for the first time, after receiving an official “correction direction.” Malaysia enacted a similar law in 2018 that critics said was aimed at curtailing dissent. A new government repealed it in October. France has a new law that allows judges to determine what is fake news and order its removal during election campaigns. Indonesia set up a 24-hour “war room” ahead of its 2019 elections to fight hoaxes and fake news. In the U.S., the Defense Department has launched a research project to unearth and repel “large-scale, automated disinformation attacks.” But legislative efforts to crack down on disinformation can run up against the guarantee of free speech in the Constitution. Then there’s the Philippines, where the government of President Rodrigo Duterte encourages “patriotic trolling” to undermine his critics.

--With assistance from Marie Mawad and Yudith Ho.

To contact the reporter on this story: Shelly Banjo in Hong Kong at sbanjo@bloomberg.net

To contact the editors responsible for this story: Peter Elstrom at pelstrom@bloomberg.net, Paul Geitner, Laurence Arnold