“Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”

—Samuel Beckett



I wrote this after being reminded, by English novelist Marina Lewycka, of this quote from one of Samuel Beckett’s lesser known, later short stories. Since starting it I have learned that the quote has become a staple of self-help and business books, headlined by one of the ubiquitous Timothy Ferriss manuals on how to be fabulous in no time at all with little or no effort. Then I found that, thanks to an article in Slate magazine, it has become the darling phrase of Silicon Valley and the so-called entrepreneurial set. My first thought was to accept having been scooped and jettison the chapter. But then I read the other pieces, mostly essays, out there that use this quote and realized that it was actually the perfect opportunity to illustrate how what virtually everyone else means by failure is different from what it means in science. And what better co-conspirator than Samuel Beckett.

The terse lines are generally taken to be a literary version of another of those platitudes on failure. The old “try, try again ...” trope. But of course Beckett was rarely so simple. In one of my favorite literary descriptions, Brooks Atkinson, in a New York Times review of Waiting for Godot, famously called the play “a mystery wrapped in an enigma.” Not a bad overall description of Beckett.


Being unable to top that I will forgo, to your relief I’m sure, a critical interpretation of Beckett here. But there is something especially penetrating about that quote that is worth a few moments of our time to explore. Beckett offers an idea about failure that is not at all common, but that is very close to what I think is the scientific sense of the word.


The statement is typically succinct (12 words in 6 sentences!), but seemingly trivial. Perhaps an autobiographical life lesson about failing. It could be, except for its terseness, the first lines of a self-help book. Yes, I’ve tried, and yes, I’ve failed, but that will not stop me! I’ll try again even if I fail again.



But then, suddenly there is that last two-word sentence. Fail better. Fail ... better? Now what could that mean? How do you improve on failing? Is there a better way to fail? Is there a worse way to fail? Isn’t failure just failure, and what’s important is how you treat it, bounce back from it, overcome it? Beckett is trying again, not in order to succeed but to fail better.

Failing to write a popular novel—which he certainly had the ability to do—failing to repeat what had made him famous, failing just to try again without trying to fail: these options were not for Beckett. Failing better meant eschewing success when, or because, he already knew how to achieve it. Failing better meant leaving the circle of what he knew. Failing better meant discovering his ignorance, where his mysteries still resided. Try again, of course. But not to succeed. Try again to fail better.


It is this unordinary meaning of failure that I suggest scientists should embrace. One must try to fail because it is the only strategy to avoid repeating the obvious. Failing better means looking beyond the obvious, beyond what you know and beyond what you know how to do. Failing better happens when we ask questions, when we doubt results, when we allow ourselves to be immersed in uncertainty.

Too often you fail until you succeed, and then you are expected to stop failing. Once you have succeeded you supposedly know something that helps you to avoid further failure. But that is not the way of science. Success can lead only to more failure. The success, when it comes, has to be tested rigorously and then it has to be considered for what it doesn’t tell us, not just what it does tell us. It has to be used to get to the next stop in our ignorance—it has to be challenged until it fails, challenged so that it fails. This is a different kind of failure from that of business or even technology. There, it’s “Make a mistake or two, sure (especially if it’s on someone else’s dime), because you can learn from those mistakes—but then that’s enough of failure.” Fail big and fail fast, the tech guys say. As if it were just something to get out of the way as quickly as possible. Movie executive Michael Eisner said in a 1996 speech, “Failing is good as long as it doesn’t become a habit.” Once successful, there should be no backsliding. But failure is not backsliding in science—it moves things forward as surely as success does. And it should never be done with. It should become a habit.


By trying to fail better, Beckett enlarges his sphere rather than shrinks it. It’s nearly, but not exactly, the opposite of the process of trying to succeed, which is not necessarily succeeding, as trying to fail is not necessarily failing. Trying to succeed entails sharpening a technique, honing a strategy, narrowing in on the problem, focusing your attention on the solution. At times, of course, none of these is a bad thing. Indeed, in the day-to-day work of science this is the recipe for accomplishment—if by that you mean publishing papers and getting grants. There are many scientists who would say that is what science is about: that our job is to put pieces in a puzzle, and the more pieces you add, the more successful you are. It’s hard to argue against this very pragmatic approach, which seems to be “successful” in the sense we discussed earlier.



Except to note that this process is driving science into a corner, separating it from the wider culture, failing to engage generations of students, turning it into a giant maw for facts and balkanizing the effort into smaller and smaller specialties, none of which have any idea what the others are about. We all recognize that there is something wrong with this. We can’t keep up with the exponentially expanding literature of ever narrower details, we can’t agree on what the right spending priorities are, we can’t seem to affect public policy with our knowledge. We—scientists—are more and more a secret society of oddballs and geeks, tolerated because now and again some gadget or cure drops out of the otherwise impenetrable machinery we are supposed to be controlling. And as long as the rate at which that happens is sufficient to satisfy the tax-paying public, then they’ll keep supporting “whatever it is you guys do.” This process may be successful in some narrow sense of the word, but it is doomed to run out of steam, or at least to bore us all to death.

The alternative? Fail better. But how does one do this? Not easily, as Beckett reminds us. Try writing a grant proposal in which you promise to “fail better.” Try getting a job with a research strategy that lays out your program for failing better. Try attracting students to come to your lab where you promise them every opportunity to fail better.

I know how crazy that sounds, but it is of course exactly the right way to proceed. If you are reviewing a grant, you should be interested in how it will fail—usefully or just by not succeeding. Not succeeding is not the same as failing. Not in science. Thomas Edison’s labeling of his failed attempts to perfect the light bulb as 10,000 ways of not succeeding is the right thinking for technology and invention. And it’s not a bad mantra for the Silicon Valley folks, because at least it tells them to be patient and put up with not succeeding for a while. But it’s not the same as failing better.

The right question to ask a candidate for a faculty position who has just presented his or her five-year research plan is, what is the percentage of this that is likely to fail? It should be more than half—way more than half, in my opinion. Because otherwise it is too pat, too simplistic, not adventurous enough—especially for a young scientist. And, really, a five-year plan that anyone should believe, especially the person presenting it? Who among us could predict anything five years into the future? What kind of science would science be if it could make reliable predictions about stuff five years out? Science is about what we don’t know yet and how we’re going to get to know it. And no one knows what that is. We often don’t yet know what we don’t know. And that deep ignorance, the unknown unknowns, will only be revealed by failures. Experiments that were meant to resolve this or that question and failed to do so show us that we needed a better question. So what I want to know from a young scientist is, how are you going to set up your failures?

Failure is not something to tolerate while focusing on the bright side. Failure is not a temporary condition. It must be embraced and worked on with all the diligence that one is accustomed to putting into succeeding. Failing can be done poorly or it can be done well. You can improve your failing! Fail better.

How do you do this? Of course it would be silly if I had a prescription for failing, any more than if I had a surefire prescription for succeeding, since the whole idea is that there is no single way. That said, I can make a few personal recommendations, just to think about. First, I recognize that failing better is not easy in the present culture. Perhaps for this moment in history the opportunities failure creates will be best realized as a personal choice, as a stratagem that you adopt to make decisions about what outlier to investigate, about which crazy project you’ll keep going a little longer than might be advisable. It is a momentary kind of subterfuge, the secret drawer where you keep your unfundable, but not unloved, ideas. You know, that drawer that can be opened only by nudging out one side first and then the other, and back and forth. Pay attention to failures, not for the purpose of correcting them, but because of the interesting things they have to say, because they are humbling and make you go back and reconsider your long-held views. No failure is too small to deserve attention.

A key breakthrough in the discovery of a family of enzymes known as G-proteins was made by eventually realizing that the dishwashing soap used on the experimental glassware was adding trace amounts of aluminum, and that this was a crucial cofactor in the G-protein’s activation. No one would have suspected any such thing. It caused years of frustrating failures of many experiments, but it finally led to one of the most important discoveries in pharmacology—and a Nobel Prize. This is only one of hundreds, if not thousands, of such stories, big and small, about productive failures that led to an otherwise unconsidered finding.


Of course the problem is that we have stories only about the failures that eventually led to a success. That’s not because they’re necessarily a better kind of failure but rather because those are the kinds of stories we tell, so that’s the data we have. Yes, there are cases where a failure simply says, oops, wrong tree (to bark up); let’s move on. And they are not without value. They can be just as elegant and creative and thoughtful as things that worked out. They deserve a place of honor—and we’ll get to that issue shortly.



It may be harder to recognize the intrinsic value of failure when it eventually results in a success, in the sense of a positive finding, such as the identification of the G-protein. But there are two ways failures have an intrinsic value beyond the correction they provide. The first, and perhaps obvious one, is that there is no way to predict which way they will turn out. They may lead to a success, or they may hurtle down a cul-de-sac. Or, more often, they may lead to a partial success that will fail again a little further down the road, leading to another correction. This iterative process—weaving from failure to failure, each one a little better than the last—is how science often progresses.

Failures also don’t just lead to a discovery by providing a correction (e.g., control for aluminum in glassware by using plastic); they lead to a fundamental change in the way we think about future experiments as well—and, in this case, the way we think about enzymes and how they work and how to discover them. So now we know that trace metals (the list, to date, includes copper, iron, magnesium, zinc, and others) in vanishingly small quantities are important in proper enzyme function. And they may come from unexpected places, like glassware. This failure, then, is data. That the experiments eventually worked because the aluminum was controlled for actually serves to confirm the failure. We don’t think of confirming failures, but that’s what we often do. Then we selectively remember the success, since it’s such a relief, and the failure goes unsung.

It’s not just young scientists who have become failure-averse, although it is most painful to see it happen there. As your career moves on and you have to obtain grant support you naturally highlight the successes and propose experiments that will continue this successful line of work with its high likelihood of producing results. The experiments in the drawer get trotted out less frequently and eventually the drawer just sticks shut. The lab becomes a kind of machine, a hopper—money in, papers out.

My hope of course is that things won’t be this way for long. It wasn’t this way in the past, and there is nothing at all about science and its proper pursuit that requires a high success rate or the likelihood of success, or the promise of any result. Indeed, in my view these things are an impediment to the best science, although I admit that they will get you along day to day. It seems to me we have simply switched the priorities. We have made the easy stuff—running experiments to fill in bits of the puzzle—the standard for judgment and relegated the creative, new ideas to that stuck drawer. But there is a cost to this. I mean a real monetary cost because it is wasteful to have everyone hunting in the same ever-shrinking territory. Yes, we have that old saw about looking under the lamppost because the light is better there, but now and again you have to venture out into the darkness, beyond the pool of light, where things are shadowy and the likelihood of failure is high. But that’s the only way the circle of light can expand.

I attended a seminar on research trends in Alzheimer’s disease where neurologist David Teplow of UCLA showed a graph of the numbers of papers published, beginning just before 2000, on something known as “Aβ protein.” This was when a couple of labs published some work suggesting that Aβ (said as A-beta) was an important contributor to Alzheimer’s disease. Indeed, they claimed that it was the causative factor. Within months, and continuing still, there has been an exponential increase in papers about Aβ. From a few citations a year, the Aβ protein now appears in over 5,000 papers per year! It has turned out, in all likelihood, to be a chase after a phantom—if the idea was that ridding a patient of Aβ would cure Alzheimer’s. It’s as in the Chinese proverb where the first dog barks at something and a hundred bark at his sound. But this bandwagon effect, in which some finding gets published in a high-profile journal and everybody goes chasing after it, can be seen in virtually every field of science.

And it is a chase. Not a thoughtful exploration. Not an attempt to unravel a mystery. Not even the “new” and promising line of research it is advertised as being. Mostly it is a direction that is suddenly on the funding map and therefore one to pursue. By the way, to keep the record straight, Aβ is surely involved in Alzheimer’s disease, but it is no longer considered the causative factor by most researchers and has even been found to serve a beneficial purpose in normal brains. Generally it is considered unlikely to be a good candidate as a drug or treatment target. Don’t even ask what this all cost.

How will this change? It will happen when we cease, or at least reduce, our devotion to facts and collections of them, when we decide that science education is not a memorization marathon, when we—scientists and nonscientists—recognize that science is not a body of infallible work, of immutable laws and facts. When we once again recognize that science is a dynamic and difficult process and that most of what there is to know is still unknown. When we take the advice of Samuel Beckett and try to fail better.

How long will this take? I think it will require a change in our thinking about science comparable to what Thomas Kuhn famously, if perhaps a bit inaccurately, identified as a paradigm shift: a revolutionary change in perspective. However, it is my opinion that revolutionary changes often happen faster than “organic” ones. They may seem improbable or even impossible, but once the first shot is fired, change occurs rapidly. I think of various human rights movements—from civil rights for black Americans to universal suffrage for women. Unthinkable at first, and then unfathomable as to what took so long. The sudden fall of the supposedly implacable Soviet Union is another example of how things can happen quickly when they involve a deep change in the way we think. I’m not sure what the trigger will be in the case of science, but I suspect it will have to do with education—an area that is ripe for a paradigmatic shift on many levels, and where the teaching of science and math is virtually a poster child for wrongheaded policies and practices.

Max Planck, when asked how often science changes and adopts new ideas, said, “with every funeral.” And for better or worse, they happen pretty regularly.





Stuart Firestein is a professor of neuroscience in the Department of Biological Sciences at Columbia University. He is a fellow of the American Association for the Advancement of Science, a Guggenheim Fellow, and serves as an advisor to the Alfred P. Sloan Foundation.





Excerpt from Failure: Why Science is So Successful, by Stuart Firestein. Oxford University Press, 2015. Copyright © 2015. Reprinted with permission.






