Corrections also cannot tackle deep-seated biases and beliefs that enabled the falsehood to thrive in the first place, says NTU Wee Kim Wee School of Communication and Information’s Associate Professor Edson C Tandoc Jr.

SINGAPORE: The bill proposed on Monday (Apr 1) to tackle the spread of online falsehoods in Singapore includes a comprehensive set of measures that emphasises the importance of correcting a “false statement of fact”.



It seeks to compel publishers, internet service providers, social media platforms and individuals who have shared a false statement, among others, to put up a correction rather than remove the post, though that option is also available.





The bill also provides for the possibility of requiring the publication of a correction not only online but also “in a specified newspaper or other printed publication of Singapore”.

This is a clear recognition of the importance of debunking disinformation, and it assumes that corrections are the most effective way of combating falsehoods.



However, research shows this is not always the case.





CORRECTIONS MIGHT REINFORCE DEEP-SEATED BIASES



First, studies have found that corrections only work on some groups of people, such as those with higher cognitive abilities. For others, corrections might backfire.



In some cases, individuals exposed to corrections dismiss them outright and hold onto their pre-existing misperceptions. In other cases, individuals might believe the correction at first, but ultimately revert to their prior beliefs and attitudes.



For example, news organisations and fact-checking organisations in the United States have repeatedly debunked fake news linking vaccines to autism, and yet some individuals still object to vaccinations, which has contributed to the ongoing measles outbreaks in the United States and other countries.



It is possible some misperceptions fit some individuals’ deep-seated biases, so they believe in falsehoods consistent with their belief systems despite corrections.



Confirmation bias also leads individuals to select information that supports, rather than contradicts, their existing beliefs. For example, those who strongly believe that climate change is not real would point to freezing temperatures in winter as evidence, but keep quiet about severe droughts or heat waves.



Worse, to protect their own biases, some individuals label the sources of corrections as biased and question the corrective information, a risk that might be heightened if platforms and intermediaries are compelled to run corrections.

Dealing with falsehoods that prey on a certain psyche is an uphill climb when corrections run the risk of repeating the false claim they try to debunk.



A 2018 US study found that the adverse effects of message repetition can override corrections. Repetition shapes people's beliefs so strongly that individuals repeatedly exposed to a piece of fake news are less likely to believe a third-party fact-check of that fake news story.



Compelling the publication of a correction is therefore no silver bullet.




What certainly helps is an intervention that can tackle a falsehood at scale. The bill proposes asking website owners and service providers to ensure corrections reach all those exposed to a piece of falsehood and are included in copies where the false statement is carried. This is a useful intervention, as studies have found that fact checks published in a fresh post or news report rarely reach their target audiences.



But the bill also refers to publishing corrections even in traditional news outlets, which will increase the reach of the correction — as well as the falsehood it tries to correct.



Traditional outlets have amplification effects. Corrections, then, might also become another way for the original disinformation to reach a wider audience, which brings me to my second point: how effective a correction is depends on how it is crafted.



CORRECTIONS MUST BE CRAFTED CAREFULLY



Second, the format and language of the correction matter. A US study found that video-based corrections are more effective than print-based corrections.



A final year project by a group of students I supervise at the Wee Kim Wee School of Communication and Information at Nanyang Technological University Singapore also found that less ambiguous corrections work better.



Ambiguity in corrections is reduced by providing detailed but easy-to-understand accounts, such as infographics. Corrections should not just tag something as false, as many fact checks shared on Twitter do. Instead, they should refute the false claims by explaining why these claims are false.



Another study involving over 1,000 participants published in the British Journal of Psychology in 2019 found that detailed refutations which provide the alternative, factual account have a more sustained effect on reducing belief in false claims than retractions or merely labelling a claim as false.



So it is worth rethinking the approach of a simple one-line correction stating that a claim is false.



CORRECTIONS CANNOT UNSEAT DEEP-SEATED BIAS



Third, issuing a correction is a reactive measure that might work only in the short term.

Online falsehoods appeal to deep-seated biases and fears. A correction to a false account that the HPV vaccine causes autism, for example, might quickly reduce belief in this specific fake news article, but another false account might surface weeks after that makes a similar claim on the same broad subject, for instance, in claiming that the measles vaccine causes autism.




So the impact of corrections still largely depends on the ability of media users to distinguish between credible and questionable information on a routine basis, as and when falsehoods arise in different contexts.



Equipping media users with the literacy and motivation to be critical of the information they see online is a better long-term solution.



LOGISTICAL CHALLENGE



Fourth, issuing correction and stop communication directions also presents a logistical challenge, with the potential to expose authorities to criticism and questions.



The bill targets operations of online bots, which have played a significant role in the spread of falsehoods online. This is an important measure, and social media companies have started to crack down on these inauthentic accounts, which exhibit behaviours that can be automatically distinguished from human users.



But compelling social media companies to issue corrections to online falsehoods has limitations, because falsehoods can also travel over private messages exchanged within their platforms.



There is no stopping users from cutting and pasting a false statement and transmitting it through private messages. This presents a challenge for social media services with private messaging features, like Facebook, as well as dedicated messaging platforms such as WhatsApp, whose end-to-end encryption ensures message privacy.



With the volume and range of falsehoods originating within and outside Singapore, from conspiracy theories and sensational health-related disinformation to political propaganda, selecting which ones require correction based on public interest will also need a clear and consistent set of guidelines.




While having a system to mandate corrections when needed will be helpful, what happens to falsehoods that are left uncorrected, especially when so much news and information comes from foreign sources?



This is an important question to tackle to set expectations. If citizens mistakenly think all falsehoods will be corrected for them, they might become more vulnerable to falsehoods that stand uncorrected.



Still, the last thing we want is for Singaporeans to exclusively rely on the Government to tell them what is true or not. While we want social media users to be more cautious in sharing information online and limit the risk of sharing falsehoods, we also do not want to limit the range of discourses online.



EMPOWERMENT OF A THINKING SOCIETY IS STILL KEY



Thus, while corrections can help in tackling the spread of falsehoods, more research is required to determine the best format, style, and dissemination strategies to make them effective and avoid any backfire effects.



Indeed, governments around the world have the responsibility to protect their citizens from the harms of online falsehoods.



This should be carried out, however, not by taking the responsibility away from citizens, but by empowering them with information and media literacy skills to deal with online falsehoods.



Associate Professor Edson C Tandoc Jr is the PhD and Masters by Research programme director at the Wee Kim Wee School of Communication and Information at NTU. His research focuses on online news production and consumption. He is also involved in several projects analysing the problem of fake news.