I’ve been having a fascinating debate on Twitter with my friend Michael Nielsen, a quantum physicist who’s currently doing research at Y Combinator. It all started when I argued that, on the margin, people having more accurate beliefs would be useful, and Michael objected:

I think you’re over-rating accuracy / truth as a primary goal… I think there’s a tension between behaviours which maximize accuracy & which maximize creativity. Can’t always have both. A lot of important truths come from v. irrational ppl.

We continued the debate in this thread, where Michael basically argues that there isn’t enough experimentation with “crazy” ideas in science and tech, and says,

Insofar as long-held suspension of skepticism helps overcome this, I’m in favour of it. Indeed, that long-held suspension of skepticism seems to be one of the main mechanisms we currently have for this.

Here’s my reaction, written as a direct response to Michael.

I totally agree that we need more experimentation with “crazy ideas”! I’m just skeptical that rationality is, on the margin, in tension with that goal. For two main reasons:

1. In general, I think overconfidence stifles experimentation.

Most people look at a “crazy idea” — like seasteading — and say: “That’s obviously dumb and not worth trying, lol, you morons.”

In my experience, rationalists* are far more likely to look at that crazy idea and say: “Well, my inside view says that’s dumb. But my outside view says that brilliant ideas often look dumb at first, so the fact that it seems dumb isn’t great evidence about whether it will pan out. And when I think about the EV here [expected value] it seems clearly worth the cost of someone trying it, even if the probability of success is low.”

I think the first group — the vast majority of society — is being very overconfident. Remember, “overconfidence” doesn’t just mean being too confident that something’s going to succeed, it also means being too confident that something’s going to fail!

And I think it’s their overconfidence (plus a lack of thinking in terms of EV or marginal value) that punishes experimentation with low-probability but high-EV ideas. People mock the ideas, won’t fund them, etc.
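The expected-value reasoning above can be made concrete with a toy calculation. The numbers here are entirely made up for illustration (they don’t come from the seasteading example or anywhere else): the point is just that a small probability of success times a large payoff can easily exceed the cost of trying.

```python
# Toy expected-value calculation with hypothetical, made-up numbers,
# illustrating why a "probably dumb" idea can still be worth funding.

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Expected net value of funding a long-shot idea."""
    return p_success * payoff - cost

# Hypothetical crazy idea: 2% chance of a $500M payoff, $1M to try it.
ev = expected_value(p_success=0.02, payoff=500_000_000, cost=1_000_000)
print(ev)  # prints 9000000.0 -- positive EV despite a 98% chance of failure
```

A forecaster who rounds “2% chance” down to “obviously going to fail” never gets to this calculation, which is the overconfidence-about-failure trap described above.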

2. I’m not sure (long-term) overconfidence is the best way to motivate innovators.

You might object that, okay, yes, maybe we want funders to think like rationalists, but we still need the innovators themselves to be overconfident so that they’re motivated to pursue their low-probability but high-EV ideas.

Possibly! I wouldn’t be shocked if this turned out to be true. But long-term overconfidence does come with costs, and my (weak) suspicion is that there are less costly ways to motivate oneself to pursue crazy ideas.

For example, you can get better at tying your motivation to EV, not to probability. I know a bunch of rationalists who are worried about some global catastrophic risk (e.g., a pandemic) and who are working on strategies for guarding against that risk — even though they think that their chance of success is low! They just think it’s high EV and therefore totally worth trying.

I also like the strategy of temporarily suspending your disbelief and throwing yourself headlong into something for a while, allowing your emotional state to be as if you were 100% confident. My friend Spencer Greenberg, a mathematician running a startup incubator, is very pro-calibration and rationality in general, but finds this temporary “overconfidence” very useful.

…I use quotes around overconfidence there because I don’t know if that tactic really counts as epistemic overconfidence — it reminds me more of the state of suspended disbelief we slip into during movies, where we allow our emotions to respond as if the movie were real. But that doesn’t really mean we “believe” the movie is real in the same way that we believe we’re sitting in a chair watching a movie.

Anyway, whether or not you want to refer to that as temporary irrationality doesn’t matter too much to me. The crucial point is that Spencer pops out of it occasionally to determine if it’s worth continuing down his current path.

But, sure, I’ll acknowledge that I don’t actually know whether these two strategies are feasible for most innovators. Maybe they only work well for a small subset of people, and for most innovators, the actual choice they face is between overconfidence and inaction. If that’s the world we live in, then I’d agree with the claim you seemed to be making on Twitter, that irrationality (at least on the part of the innovators themselves) is essential to innovation.

One last point: Even if it turned out to be true that irrationality is necessary for innovators, that’s only a weak defense of your original claim, which was that I’m significantly overrating the value of rationality in general. Remember, “coming up with brilliant new ideas” is just one domain in which we could evaluate the potential value-add of increased rationality. There are lots of other domains to consider, such as designing policy, allocating philanthropic funds, military strategy, etc. We could certainly talk about those separately; for now, I’m just noting that you made this original claim about the dubious value of rationality in general, but then your argument focused on this one particular domain, innovation.

*Yeah, yeah, the label “rationalist” isn’t a great one, but no one’s yet found a better alternative! “People who substantially agree about certain principles of epistemology including…” is more accurate, but not exactly sticky.