In just over three weeks, Donald Trump will be sworn in as the next president. According to the Sierra Club, Trump will be the only world leader who still denies the science behind climate change. Following his election, Donald Trump has nominated a number of climate change deniers for top posts, including Exxon CEO Rex Tillerson for secretary of state, Oklahoma Attorney General Scott Pruitt to head the Environmental Protection Agency, former Texas Governor Rick Perry to head the Energy Department and Congressmember Ryan Zinke to become interior secretary. Now scientists at federal agencies are expressing growing concern that the new administration may attempt to destroy or bury decades of scientific studies on climate change. Senior Trump adviser Bob Walker has already proposed stripping funding from NASA’s climate research, describing it as "politically correct environmental monitoring." In a scramble to protect existing government climate data, campaigns have been launched to copy and preserve decades of government-sponsored climate research. We speak to Laurie Allen, assistant director for digital scholarship at the University of Pennsylvania Libraries and a member of the Data Refuge Project.

This is a rush transcript. Copy may not be in its final form.

DONALD TRUMP: All of this with the global warming and the—that—a lot of it’s a hoax. It’s a hoax. I mean, it’s a money-making industry, OK?

BILL O’REILLY: They said that you called climate change a hoax. Is that true?

DONALD TRUMP: Well, I might have. … I believe that climate change is not man-made. … We’re going to cancel the Paris climate agreement. … Our president is worried about global warming. What a ridiculous situation!

NERMEEN SHAIKH: Following his election, Donald Trump has nominated a number of climate change deniers for top posts, including Exxon CEO Rex Tillerson for secretary of state, Oklahoma Attorney General Scott Pruitt to head the Environmental Protection Agency, former Texas Governor Rick Perry to head the Energy Department and Congressmember Ryan Zinke to become interior secretary. Now scientists at federal agencies are expressing growing concern that the new administration may attempt to destroy or bury decades of scientific studies on climate change. Senior Trump adviser Bob Walker has already proposed stripping funding from NASA’s climate research, describing it as, quote, "politically correct environmental monitoring."

AMY GOODMAN: In a scramble to protect existing government climate data, campaigns have been launched to copy and preserve decades of government-sponsored climate research. A guerrilla archiving event was just held at the University of Toronto in an attempt to save the climate studies on servers outside the United States. Organizers in the U.S. are planning additional events in the coming weeks to archive vulnerable government websites and databases that contain climate research. This comes as the End of Term Web Archive, a project administered by the Internet Archive, gets underway. The project captures and saves U.S. government websites at risk of changing or disappearing altogether at the end of presidential administrations. In the wake of Trump’s election, the Internet Archive has announced it will be moving a copy of its archive to Canada. For more, we’re joined by two guests. Laurie Allen is with us, assistant director for digital scholarship at the University of Pennsylvania Libraries and a member of the Data Refuge Project to rescue climate and environmental data. And Brewster Kahle joins us. He is a computer engineer, internet entrepreneur, activist and digital librarian, the founder of the Internet Archive. We welcome you both to Democracy Now! Laurie Allen, why don’t you start out by explaining what you’re doing to preserve climate change research and why you’re so concerned it might be erased from government websites?

LAURIE ALLEN: Thank you. So, what we’re doing in the Data Refuge effort, it’s a really large collaborative effort, with—including the Internet Archive and, as you mentioned, the folks in Toronto, as well as researchers, scholars, librarians, citizen scientists from many different places, basically creating safe channels for data that is currently stored and made accessible through federal websites and through the federal government to move to new locations so that it can—we can continue to ensure access to these facts for research. It’s also an effort to raise awareness of the value of this data and of how data is preserved and shared today. So, basically, what we’re doing is holding events. There will be one in Philadelphia January 13th and 14th, where we’ll use protocols that are appropriate to the data that we’re trying to save. So, for as much as possible, we’ll move to the Internet Archive through the End of Term harvest project. And then, for other kinds of data, we’ll move to trusted repositories here in the U.S. and around the world to make sure that we continue to provide access to them.

NERMEEN SHAIKH: Laurie Allen, could you say a little bit more about the kind of data you’re looking to archive? The main concern is not documents, but rather the analytic software that may become obsolete with disuse. Could you explain the significance of such software to climate research?

LAURIE ALLEN: Absolutely. I think, as you—as we’ve talked to scientists and researchers who rely on federal climate data, so many of them use a variety of sources. So, there are—one of the big challenges is in figuring out ways to preserve—continue to provide access to the data themselves, but also the software used to analyze those data. The scientists rely on data in multiple forms, and so we’re—we’re identifying protocols that we hope will be appropriate to each—you know, whether it’s the software used to create derivative versions of the data that are more useful for various purposes or to, if possible, save the software themselves—or, the software itself. You know, this is—it’s a really sticky problem. And to the extent, I think, that we can work with the Internet Archive, we are. But as you mentioned, there are data that just can’t be scraped, that can’t be copied, using web archiving. And so, for those, it’s basically going to have to be a case-by-case basis, which is why we’re—so many of the events are engaging with developers and software engineers, in collaboration with scientists and librarians and archivists, to sort of, you know, identify those materials that are most vulnerable, that are most valuable, and take them one at a time and figure out how we can continue to provide access to them.