Did Jon Stewart’s departure from the Daily Show help elect Donald Trump? According to a paper published in Electoral Studies in April: maybe!

That one of the largest upsets in political history could be blamed on the retirement of a late-night host—or that anyone believed it could be—is an alluring story. A few places, including Fast Company and the New York Post, wrote up the study. It made for a fun headline. Too bad it was wrong.

The background: For their study, the authors collected Daily Show ratings across the U.S. as well as county-level data on voter turnout in 2012 and 2016. They estimated that the change in hosts from Stewart to Trevor Noah led to a ratings dip, and to Trump earning an average of 1 percent more of the vote in each county than he would have had Stewart stayed on, possibly because Stewart motivated Democrats to show up to the polls.

But shortly after the paper went up online, a few academics began tweeting their skepticism of the authors’ findings. One emailed them some technical concerns. The authors checked their work and found that they’d made a computational error, one that altered the conclusion of the paper. Stewart’s departure, and the corresponding Daily Show ratings dip, didn’t depress turnout among would-be Clinton voters after all. The authors have asked the journal to withdraw the paper, reported Adam Marcus at Retraction Watch (disclosure: my former employer).

“When you see a claim that seems too big to be true,” Andrew Gelman, a statistician at Columbia, wrote of the paper, “maybe it’s just mistaken in some way.”

But this isn’t a case of bad science so much as it’s a reality of the scientific process itself. The paper had been through peer review, a process that science journalist Ryan F. Mandelbaum described Friday as “like a restaurant telling you that the food is cooked—it might still be awful or give you food poisoning.” (Mandelbaum was reporting on another tall claim that seems to have been walked back: that a researcher in the U.K. had decoded the infamously tricky Voynich manuscript.) In this metaphor, researchers rushing to double-check, and then correct, their own work in the face of peer criticism—which is what happened in this case—are like cooks rushing out to your table midmeal, apologizing, and handing you some Pepto-Bismol themselves. (A paper that’s faulty due to fraud might involve a cook sprinkling in a laxative, then fleeing the scene.)

The authors, who say they plan to publish a corrected version of the Stewart paper, got a lot of kudos and hand-clap emoji after they tweeted their decision to retract. “*Everyone* has made errors like this but the ones that survive past publication usually stay uncorrected,” wrote Alex Coppock, a political scientist at Yale. In a world where people make mistakes, retractions like this one are healthy and probably too rare. Indeed, as Jeffrey Brainard and Jia You wrote in Science magazine last year, a recent rise in retracted papers seems “to reflect not so much an epidemic of fraud as a community trying to police itself.”

So does this mean we can’t trust … peer-reviewed research papers? Not quite. It’s just a reminder that any given paper is an early draft of an idea, something that should still be examined, questioned, and built upon (and also a good reminder to plug papers into Google—publishing can move at a glacial pace, and the journal has yet to append a notice to the paper). That’s especially important to remember when it comes to studies that appear to suggest a neat cause for a freak event. Even if the paper were correct, the findings would represent only one small electoral force among many—a factor, not a smoking gun. The authors warned as much in their original paper: “Our results should be approached with considerable caution.” Let that be our moment of Zen.