A new analysis of 6.5 million tweets from the days before and after U.S. President Donald Trump announced his intention to ditch the Paris agreement in June 2017 suggests that automated Twitter bots are substantially contributing to the spread of online misinformation about the climate crisis.

Brown University researchers "found that bots tended to applaud the president for his actions and spread misinformation about the science," according to the Guardian, which first reported on the draft study Friday. "Bots are a type of software that can be directed to autonomously tweet, retweet, like, or direct message on Twitter, under the guise of a human-fronted account."

As the Guardian summarized:

On an average day during the period studied, 25% of all tweets about the climate crisis came from bots. This proportion was higher in certain topics—bots were responsible for 38% of tweets about "fake science" and 28% of all tweets about the petroleum giant Exxon. Conversely, tweets that could be categorized as online activism to support action on the climate crisis featured very few bots, at about 5% prevalence. The findings "suggest that bots are not just prevalent, but disproportionately so in topics that were supportive of Trump's announcement or skeptical of climate science and action," the analysis states.

More broadly, the study adds, "these findings suggest a substantial impact of mechanized bots in amplifying denialist messages about climate change, including support for Trump's withdrawal from the Paris agreement."

"Pro-Trump, anti-environment bots are dominating a significant part of the climate discussion on Twitter. (A better approach to anonymity would help change this.)" — Andrew Stroehlein (@astroehlein) February 21, 2020

Thomas Marlow, the Brown Ph.D. candidate who led the study, told the Guardian that his team decided to conduct the research because they were "always kind of wondering why there's persistent levels of denial about something that the science is more or less settled on." Marlow expressed surprise that a full quarter of climate-related tweets were from bots. "I was like, 'Wow that seems really high,'" he said.

In response to the Guardian report, some climate action advocacy groups reassured followers that their tweets are written by humans:

"This tweet has been written by a human, but a QUARTER of all tweets about the #ClimateCrisis are produced by bots, according to a new study. The result? A distortion of the online conversation to 'include far more climate science denialism...'" — Friends of the Earth (@friends_earth) February 21, 2020

"25% of tweets about #climatechange are made by bots - study by @BrownUniversity. This can distort the public debate on this issue, especially as bots often spread #climatedenialism. @EWGnetwork can proudly promise all our tweets are human-made." — Energy Watch Group (@EWGnetwork) February 21, 2020

Other climate organizations that shared the Guardian's report on Twitter weren't surprised by the results of the new research:

"imagine our shock #ExxonKnew" — Greenpeace EU (@GreenpeaceEU) February 21, 2020

"#ClimateCrisis denial is largely BOTS! In news that won't surprise anyone with Twitter, fake accounts are driving #climatedenial. Another tool used by the mega rich to stop us coming together to drive change for a fair, safe future. #ClimateChange" — Extinction Rebellion Guildford (@XRGuildford) February 21, 2020

"The Brown University study wasn't able to identify any individuals or groups behind the battalion of Twitter bots, nor ascertain the level of influence they have had around the often fraught climate debate," the Guardian noted. "However, a number of suspected bots that have consistently disparaged climate science and activists have large numbers of followers on Twitter."

Cognitive scientist John Cook, who has studied online climate misinformation, told the Guardian that bots are "dangerous and potentially influential" because previous research has shown "not just that misinformation is convincing to people but that just the mere existence of misinformation in social networks can cause people to trust accurate information less or disengage from the facts."

As Cook, a research assistant professor at the Center for Climate Change Communication at George Mason University, put it: "This is one of the most insidious and dangerous elements of misinformation spread by bots."

Naomi Oreskes is a Harvard University professor and science historian who also has studied climate misinformation, including an October 2019 report (pdf) co-authored by Cook about the fossil fuel industry's decades of efforts to mislead the American public. In a tweet Friday, Oreskes called the new research "important work" but added "I wish they'd published it before going to the media."

"This is a draft, unpublished study so take with [a grain of salt] but it is an amazing finding on the role of automation to amplify messages on climate on social media. Bots are used mostly in tweets critical of action or supportive of Trump." — Ketan Joshi (@KetanJ0) February 21, 2020

The Guardian report on Marlow and his colleagues' analysis came just a few months after the Trump administration formally began the one-year process of withdrawing from the Paris accord, which critics said sent "a signal to the world that there will be no leadership from the U.S. federal government on the climate crisis—a catastrophic message in a moment of great urgency."

The findings also came about a month after the Bulletin of the Atomic Scientists issued a historic warning about the risk of global catastrophe by setting the Doomsday Clock at 100 seconds to midnight. The Bulletin warned in its statement announcing the clock's new time that "humanity continues to face two simultaneous existential dangers—nuclear war and climate change—that are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society's ability to respond."

"Focused attention is needed to prevent information technology from undermining public trust in political institutions, in the media, and in the existence of objective reality itself," the bulletin added. "Cyber-enabled information warfare is a threat to the common good. Deception campaigns—and leaders intent on blurring the line between fact and politically motivated fantasy—are a profound threat to effective democracies, reducing their ability to address nuclear weapons, climate change, and other existential dangers."