I.

Writing a review of The Black Swan is a nerve-wracking experience.

First, because it forces me to reveal I am about ten years behind the times in my reading habits.

But second, because its author Nassim Nicholas Taleb is infamous for angry Twitter rants against people who misunderstand his work. Much better men than I have read and reviewed Black Swan, messed it up, and ended up the victims of Taleb’s acerbic tongue.

One might ask: what’s the worst that could happen? A famous intellectual yells at me on Twitter for a few minutes? Isn’t that normal these days? Sure, occasionally Taleb will go further and write an entire enraged Medium article about some particularly egregious flub, but only occasionally. And even that isn’t so bad, is it?

But such an argument betrays the following underlying view:

It assumes that events can always be mapped onto a bell curve, with a peak at the average and dropping off quickly as one moves towards extremes. Most reviews of Black Swan will get an angry Twitter rant. A few will get only a snarky Facebook post or an entire enraged Medium article. By the time we get to real extremes in either direction – a mere passive-aggressive Reddit comment, or a dramatic violent assault – the probabilities are so low that they can safely be ignored.

Some distributions really do follow a bell curve. The classic example is height. The average person is about 5’7. The likelihood of anyone being a different height drops off dramatically with distance from the mean. Only about one in a million people should be taller than 7 feet; only one in a billion should be as tall as 7’5. Nobody is an order of magnitude taller than anyone else. Taleb calls the world of bell curves and minor differences Mediocristan. If Taleb’s reaction to bad reviews dwells alongside height in Mediocristan, I am safe; nothing an order of magnitude worse than an angry Twitter rant is likely to happen in entire lifetimes of misinterpreting his work.
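These height numbers are easy to sanity-check under a normal model. A quick sketch, assuming a hypothetical mean of 67 inches (5’7) and a standard deviation of about 3.6 inches (a figure chosen to match the one-in-a-million claim, roughly in line with real population values):

```python
from math import erfc, sqrt

# Assumed parameters: mean 67 inches (5'7"), standard deviation 3.6 inches.
MEAN, SD = 67.0, 3.6

def tail_probability(height_inches):
    """P(a random person is at least this tall) under the normal model."""
    z = (height_inches - MEAN) / SD
    return 0.5 * erfc(z / sqrt(2))  # Gaussian upper-tail probability

print(f"P(taller than 7 feet): {tail_probability(84):.1e}")  # order of one in a million
print(f"P(taller than 7'5):    {tail_probability(89):.1e}")  # order of one in a billion
```

Move the threshold out by just five inches and the probability drops by roughly three orders of magnitude; that collapse in the tail is what makes Mediocristan safe.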

But other distributions are nothing like a bell curve. Taleb cites power-law distributions as an example, and calls their world Extremistan. Wealth inequality lives in Extremistan. If wealth followed a bell curve around the median household income of $57,000, and a standard deviation scaled the same way as height, then a rich person earning $70,000 would be as remarkable as a tall person hitting 7 feet. Someone who earned $76,000 would be the same kind of prodigy of nature as the 7’6 Yao Ming. Instead, people earning $70,000 are dirt-common, some people earn millions, and the occasional tycoon can make hundreds of millions of dollars per year. In Mediocristan, the extremes don’t matter; in Extremistan, sometimes only the extremes matter. If you have a room full of 99 average-height people plus Yao Ming, Yao only has 1.3% of the total height in the room. If you have a room full of 99 average-income people plus Jeff Bezos, Bezos has 99.99% of the total wealth.
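The room arithmetic above is a one-liner in each case. A sketch using 67 inches for average height, 90 inches (7’6) for Yao Ming, the $57,000 median income from the text, and a hypothetical round figure of $150 billion for Bezos:

```python
# Yao Ming in a room with 99 average-height people: Mediocristan
avg_height, yao = 67, 90  # inches
yao_share = yao / (99 * avg_height + yao)
print(f"Yao's share of the room's total height: {yao_share:.1%}")  # ~1.3%

# Jeff Bezos in a room with 99 median earners: Extremistan
median_income, bezos_wealth = 57_000, 150e9  # $150B is an assumed round figure
bezos_share = bezos_wealth / (99 * median_income + bezos_wealth)
print(f"Bezos's share of the room's total wealth: {bezos_share:.4%}")  # ~99.996%
```

In the first room the extreme observation barely moves the total; in the second room the total *is* the extreme observation.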

Here are Taleb’s potential reactions graphed onto a power-law distribution. Although the likelihood of any given reaction continues to decline the further it is from average, it declines much less quickly than on the bell curve. Violent assault is no longer such a remote possibility; maybe my considerations should even be dominated by it.

So: are book reviews in Mediocristan or Extremistan?

I notice this BBC article about an author who hunted down a bad reviewer of his book and knocked her unconscious with a wine bottle. And Lord Byron wrote such a scathing meta-review of book reviewers that multiple reviewers challenged him to a duel, though the duels seem never to have taken place, and I’m not sure Lord Byron is a good person to generalize from.

19th century intellectuals believed a bad review gave John Keats tuberculosis; they were so upset about this that they used his gravestone to complain:

This Grave contains all that was Mortal of a Young English Poet, Who, on his Death Bed, in the Bitterness of his Heart, at the Malicious Power of his Enemies, Desired these Words to be engraven on his Tomb Stone: “Here lies One Whose Name was writ in Water.”

Keats’ friend Shelley wrote the poem Adonais to memorialize the event, in which he said of the reviewer:

Our Adonais has drunk poison—oh!
What deaf and viperous murderer could crown
Life’s early cup with such a draught of woe?
The nameless worm would now itself disown:
It felt, yet could escape, the magic tone
Whose prelude held all envy, hate and wrong,
But what was howling in one breast alone,
Silent with expectation of the song,
Whose master’s hand is cold, whose silver lyre unstrung.

So are book reviews in Mediocristan or Extremistan? Well, every so often your review causes one of history’s greatest poets to die of tuberculosis, plus another great poet writes a five-hundred-line poem condemning you and calling you a “nameless worm”, and it becomes a classic that gets read by millions of schoolchildren each year for centuries after your death. And that’s just the worst thing that’s happened because of a book review so far. The next one could be even worse!

II.

This sounds like maybe an argument for inaction, but Taleb is more optimistic. He points out that black swans are often good. For example, pharma companies usually just sit around churning out new antidepressants that totally aren’t just SSRI clones they swear. If you invest in one of these companies, you may win a bit if their SSRI clone succeeds, and lose a bit if it fails. But drug sales fall on a power law; every so often companies get a blockbuster that lets them double, triple, or decuple their money. Tomorrow a pharma company might discover the cure for cancer, or the cure for aging, and get to sell it to everyone forever. So when you invest in a pharma company, you have randomness on your side: the worst that can happen is you lose your money, but the best that can happen is multiple-order-of-magnitude profits.

Taleb proposes a “barbell” strategy of combining some low-risk investments with some that expose you to positive black swans:

If you know that you are vulnerable to prediction errors, and if you accept that most “risk measures” are flawed, because of the Black Swan, then your strategy is to be as hyperconservative and hyperaggressive as you can be instead of being mildly aggressive or conservative. Instead of putting your money in “medium risk” investments (how do you know it is medium risk? by listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills—as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital-style portfolios. That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your “floor,” the nest egg that you have in maximally safe investments. Or, equivalently, you can have a speculative portfolio and insure it (if possible) against losses of more than, say, 15 percent. You are “clipping” your incomputable risk, the one that is harmful to you. Instead of having medium risk, you have high risk on one side and no risk on the other. The average will be medium risk but constitutes a positive exposure to the Black Swan […] The “barbell” strategy [is] taking maximum exposure to the positive Black Swans while remaining paranoid about the negative ones. For your exposure to the positive Black Swan, you do not need to have any precise understanding of the structure of uncertainty. I find it hard to explain that when you have a very limited loss you need to get as aggressive, as speculative, and sometimes as “unreasonable” as you can be.
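The barbell is easy to illustrate numerically. A minimal one-period sketch, with all the numbers assumed for illustration (90% in Treasury bills at 2%, 10% in a speculative bet that either goes to zero or pays off 20x):

```python
def barbell(wealth, safe_frac=0.90, safe_return=0.02, spec_payoff=0.0):
    """One-period barbell: safe_frac in near-riskless instruments, the rest
    in a speculative bet returning spec_payoff times the stake (possibly 0)."""
    safe = wealth * safe_frac * (1 + safe_return)
    speculative = wealth * (1 - safe_frac) * spec_payoff
    return safe + speculative

w = 100.0
print(f"every speculative bet fails: ${barbell(w, spec_payoff=0):.2f}")   # the floor: $91.80
print(f"one bet pays off 20x:        ${barbell(w, spec_payoff=20):.2f}")  # $291.80
```

The downside is capped at the floor no matter what the speculative side does, while the upside is open-ended. That asymmetry, not any estimate of the odds, is the point of the strategy.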

So: how good can a book review get?

Here’s a graph of all the book reviews I’ve ever done by hit count (in thousands). I’m not going to calculate it out, but it looks like a power law distribution! Some of my book reviews have been pretty successful – my review of Twelve Rules got mentioned in The Atlantic. Can things get even better than that? I met my first serious girlfriend through a blog post. Can things get even better than that? I had someone tell me a blog post on effective altruism convinced them to pledge to donate 10% of their salary to efficient charities forever; given some conservative assumptions, that probably saves twenty or thirty lives. So a book review has a small chance of giving a great poet tuberculosis, but also a small chance of saving dozens of lives. Overall it seems worth it.

III.

The Black Swan uses discussions of power laws and risk as a jumping-off point to explore a wider variety of topics about human fallibility. This places it in the context of similar books about rationality and bias that came out around the same time. I’m especially thinking of Philip Tetlock’s Superforecasting, Nate Silver’s The Signal And The Noise, Daniel Kahneman’s Thinking Fast And Slow, and of course The Sequences. The Black Swan shares much of its material with these – in fact, it often cites Kahneman and Tetlock approvingly. But aside from the more in-depth discussion of risk, I notice two important points in this book that Taleb keeps coming back to again and again, which as far as I know are unique to him.

The first is “the ludic fallacy”, the false belief that life works like a game or a probability textbook thought experiment. Taleb cautions against the (to me tempting) mistake of comparing black swans to lottery tickets – ie “investing in pharma companies is like having a lottery ticket to win big if they invent a blockbuster”. The lottery is a game where you know the rules and probabilities beforehand. The chance of winning is whatever it is. The prize is whatever it is. You know both beforehand; all you have to do is crunch the numbers to see if it’s a good deal.

Pharma – and most other real-life things – are totally different. Nobody hands you the chance of a pharma company inventing a blockbuster drug, and nobody hands you the amount of money you’ll win if it does. There is Knightian uncertainty – uncertainty about how much uncertainty there is, uncertainty that doesn’t come pre-quantified.

Taleb gives cautionary examples of what happens if you ignore this. You make some kind of beautiful model that tells you there’s only a 0.01% chance of the stock market doing some particular bad thing. Then you invest based on that data, and the stock market does that bad thing, and you lose all your money. You were taking account of the quantified risk in your model, but not of the unquantifiable risk that your model was incorrect.

In retrospect, this is an obvious point. But it’s also obvious in retrospect that everything classes teach about probability falls victim to it, to the point where it’s hard to even think about probability in non-ludic terms. I keep having to catch myself writing some kind of “Okay, assume the risk of a Black Swan is 10%…” example in this review, because then I know Taleb will hunt me down and violently assault me. But it’s hard to resist.

I would like to excuse myself by saying it’s impossible to discuss probability without these terms, or at least that you have to start by teaching these terms and then branch into the real-world unquantifiable stuff, except that Taleb managed to write his book without doing either of those things. Granted, the book is a little bit weird. You could go through several chapters on the Lebanese Civil War or whether the French Third Republic had the best intellectuals, without noticing it was a book on probability. Nevertheless, it sets itself the task of discussing risk without starting with the ludic fallacy, and it succeeds.

I don’t know to what degree the project of “becoming well-calibrated with probabilities” is a solution to the ludic fallacy, or a case of stupidly falling victim to the ludic fallacy.

The second key concept of this book – obviously not completely original to Taleb, but I think Taleb gives it a new meaning and emphasis – is “Platonicity”, the anti-empirical desire to cram the messy real world into elegant theoretical buckets. Taleb treats the bell curve as one of the clearest examples; it’s a mathematically beautiful example of what certain risks should look like, so incompetent statisticians and economists assume that risks in a certain domain do fit the model.

He ties this into Tetlock’s “fox vs. hedgehog” dichotomy. The prognosticators who tried to fit everything to their theory usually did badly; the ones who accepted the complexity of reality and maintained a toolbox of possibilities usually did better.

He also mentions – and somehow I didn’t know this already – that modern empiricism descends from Sextus Empiricus, a classical doctor who popularized skeptical and empirical ideas as the proper way to do medicine. Sextus seems like a pretty fun guy; his surviving works include Against The Grammarians, Against The Rhetoricians, Against The Geometers, Against The Arithmeticians, Against The Astrologers, Against The Musicians, Against The Logicians, Against The Physicists, and Against The Ethicists. Medicine is certainly a great example of empiricism vs. Platonicity, with Hippocrates and his followers cramming everything into their preconceived Four Humors model – to the detriment of medical science – for thousands of years.

But Empiricus’ solution – to not hold any beliefs, and to act entirely out of habit – falls short. And I am not sure I understood what Taleb is arguing for here. There’s certainly a true platitude in this area (wait, does “platitude” share a root with “Platonic”? It looks like both go back to a Greek word meaning “broad”, but it’s not on purpose. Whatever.) of “try to go where the evidence guides you instead of having prejudices”. But there’s also a point on the other side – unless you have some paradigm to guide you, you exist in a world of chaotic noise. I am less sanguine than Taleb that “be empiricist, not theoretical” is sufficient advice, as opposed to “find the Golden Mean between empiricism and theory” – which is of course a much harder and more annoying adage, since finding a Golden Mean isn’t trivial.

That is – what would it mean for a doctor to try to do medicine without the “theory” that the heart pumped the blood? She’d find a patient with all the signs of cardiogenic shock, and say “Eh, I dunno. Maybe I should x-ray his feet, or something?” What if she had no preconceived ideas at all? Would she start reciting Sanskrit poetry, on the grounds that there’s no reason to think that would help more or less than anything else? Whereas a doctor who had read a lot of medical school textbooks – Taleb hates textbooks! – would immediately recognize the signs of cardiogenic shock, do the tests that the textbooks recommend, and give the appropriate treatment.

Yes, eventually an empiricist doctor would notice empirical facts that made her believe the heart pumped blood (and all the other true things). But then she would…write it down in a textbook. That’s what theories are – crystallized, compressed empiricism.

I think maybe Empiricus and Taleb would retort that some people form theories with only a smidgeon of evidence – I don’t know what evidence Hippocrates had for the Four Humors, but it clearly wasn’t enough. And then they stick to them dogmatically even when the evidence contradicts them. I agree with both criticisms. But then it seems like the problem is bad theories, rather than ever having theories at all. Four Humors Theory and Germ Theory are both theories – it’s just that one is wrong and the other is right. If nobody had ever been willing to accept the germ theory of disease, we’d be in a much worse place. And you can’t just say “Well, you could atheoretically notice antibiotics work and use them empirically” – much of the research into antibiotics, and the ways we use antibiotics, are in place because we more or less understand what they’re doing.

I would argue that Empiricus and Taleb are arguing not for experience over theory, but for the adjustment of certain parameters of inference – how much fudge factor we accept in compressing our data, how much we weight prior probabilities versus new evidence, how surprised to be at evidence that doesn’t fit our theories. I expect Empiricus, Taleb, and I are in agreement about which direction we want those parameters shifted. I know this sounds like a boring intellectual semantic point, but I think it’s important and occasionally saves your life if you’re practicing some craft like medicine that has a corpus of theory built up around it which you ignore at your peril.

(I also think they fail to understand the degree to which common sense is just under-the-hood inference in the same way that abstract theorizing is above-the-hood inference, and so doesn’t rescue us from these concerns).

Charitably, The Black Swan isn’t making the silly error of denying a Golden Mean of parameter position. It’s just arguing that most people today are on the too-Platonic side of things, and so society as a whole needs to shift the parameters toward the more-empirical side. Certainly this is true of most people in the world Nassim Nicholas Taleb inhabits. In Taleb’s world famous people walk around all day asserting “Everything is on a bell curve! Anyone who thinks risk is unpredictable is a dangerous heretic!” Then Taleb breaks in past their security cordon and shouts “But what if things aren’t on a bell curve? What if there are black swans?!” Then the famous person has a rage-induced seizure, as their bodyguards try to drag Taleb away. Honestly it sounds like an exciting life.

Lest you think I am exaggerating:

The psychologist Philip Tetlock (the expert buster in Chapter 10), after listening to one of my talks, reported that he was struck by the presence of an acute state of cognitive dissonance in the audience. But how people resolve this cognitive tension, as it strikes at the core of everything they have been taught and at the methods they practice, and realize that they will continue to practice, can vary a lot. It was symptomatic that almost all people who attacked my thinking attacked a deformed version of it, like “it is all random and unpredictable” rather than “it is largely random,” or got mixed up by showing me how the bell curve works in some physical domains. Some even had to change my biography. At a panel in Lugano, Myron Scholes once got into a state of rage, and went after a transformed version of my ideas. I could see pain in his face. Once, in Paris, a prominent member of the mathematical establishment, who invested part of his life on some minute sub-sub-property of the Gaussian, blew a fuse—right when I showed empirical evidence of the role of Black Swans in markets. He turned red with anger, had difficulty breathing, and started hurling insults at me for having desecrated the institution, lacking pudeur (modesty); he shouted “I am a member of the Academy of Science!” to give more strength to his insults.

One hazard of reviewing books long after they come out is that, if the book was truly great, it starts sounding banal. If its points were so devastating and irrefutable that they became universally accepted, then it sounds like the author is just spouting cliches. I think The Black Swan might have reached that level of influence. I haven’t even bothered explaining the term “black swan” because I assume every educated reader now knows what it means. So it seems very possible that pre-book society was so egregiously biased toward the Platonic theoretical side that it needed someone to tell it to shift in the direction of empiricism, Taleb did that, and now he sounds silly because everyone knows that you can’t just declare everything a bell curve and call it a day. Maybe this book should be read backwards. But the nature of all mental processes as a necessary balance between theory and evidence is my personal hobby-horse, just as evidence being good and theory being bad is Taleb’s personal hobby-horse, so I can’t let this pass without at least one hobby-cavalry-duel.

I have a more specific worry about skeptical empiricism, which is that it seems like an especially dangerous way to handle Extremistan and black swans.

Taleb memorably compares much of the financial world to “picking up pennies in front of a steamroller” – ie, it is very easy to get small positive returns most of the time as long as you expose yourself to horrendous risk.

EG: imagine living in sunny California and making a bet with your friend about the weather. Each day it doesn’t rain, he gives you $1. Each day it rains, you give him $1000. Your friend will certainly take this bet, since long-term it pays off in his favor. But for the first few months, you will look pretty smart as you pump him for a constant stream of free dollars. Your stupidity will only become apparent way down the line, when one of the state’s rare rainstorms arrives and you’re on the hook for much more than you won.
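The bet is simple enough to simulate. A sketch assuming a hypothetical 5% chance of rain on any given day:

```python
import random

random.seed(0)
P_RAIN = 0.05            # assumed daily chance of rain in "sunny California"
DAYS = 365 * 10          # ten years of the bet

balance = 0
for _ in range(DAYS):
    if random.random() < P_RAIN:
        balance -= 1000  # one rainy day wipes out years of dry-day winnings
    else:
        balance += 1     # the steady trickle of "free" dollars

ev_per_day = (1 - P_RAIN) * 1 - P_RAIN * 1000
print(f"expected value per day: ${ev_per_day:.2f}")  # about -$49: ruinous
print(f"balance after ten years: ${balance}")
```

For the first few months the realized balance climbs by a dollar a day, even though the expected value per day is deeply negative the whole time. That gap between what the data shows and what the structure implies is the steamroller.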

Here the theorist will calculate the probability of rain, calculate everybody’s expected utility, and predict that your friend will eventually come out ahead.

But the good empiricist will just watch you getting a steady stream of free dollars, and your friend losing money every day, and say that you did the right thing and your friend is the moron!

More generally, as long as Black Swans are rare enough not to show up in your dataset, empiricists are likely to fall for picking-up-pennies-in-front-of-the-steamroller bets, whereas (sufficiently smart) theorists will reject them.

For example, Banker 1 follows a strategy that exposes herself terribly to black swan risk, and ensures she will go bankrupt as soon as the market goes down, but which makes her 10% per year while the market is going up. Banker 2 follows a strategy that protects herself against black swan risk, but only makes 8% per year while the market is going up. A naive empiricist will judge them by their results, see that Banker 1 has done better each of the past five years, and give all his money to Banker 1, with disastrous results. Somebody who has a deep theoretical understanding of the underlying territory might be able to avoid that mistake.
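The two bankers make a tidy worked example. With assumed returns – Banker 1 earns 10% in a bull year but loses everything in a crash, Banker 2 earns 8% in a bull year and breaks even in a crash:

```python
def grow(balance, yearly_returns):
    """Compound a balance through a sequence of yearly returns."""
    for r in yearly_returns:
        balance *= 1 + r
    return balance

start = 100.0
b1 = grow(start, [0.10] * 5)   # Banker 1 beats Banker 2 five years running
b2 = grow(start, [0.08] * 5)
print(f"after five bull years: Banker 1 ${b1:.0f}, Banker 2 ${b2:.0f}")

b1 = grow(b1, [-1.00])         # the black swan: total loss
b2 = grow(b2, [0.00])          # the hedged strategy survives
print(f"after the crash:       Banker 1 ${b1:.0f}, Banker 2 ${b2:.0f}")
```

Five years of data unanimously favor Banker 1; the dataset simply hasn’t seen a crash yet.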

This problem also comes up in medicine. Imagine two different drugs. Both cure the same disease and do it equally well. Drug 1 has a side effect of mild headache in 50% of patients. Drug 2 has a side effect of death in 0.01% of patients. I think a lot of doctors test both drugs, find that Drug 2 always results in less hassle and happier patients, and stick with it. But this is plausibly the wrong move and a good understanding of the theory would make them much more cautious.
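A little arithmetic shows why the doctors’ experience misleads them. Suppose, hypothetically, a doctor prescribes Drug 2 to 300 patients over a career:

```python
p_death = 0.0001   # Drug 2's fatal side-effect rate, from the example above
patients = 300     # assumed number of patients one doctor prescribes it to

p_never_sees_death = (1 - p_death) ** patients
print(f"chance this doctor never sees the fatal side effect: {p_never_sees_death:.1%}")  # ~97%
```

Meanwhile Drug 1 hands that same doctor about 150 headache complaints. Judged purely on accumulated experience, the occasionally deadly drug looks like the pleasant one.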

(yes, both of these examples are also examples of the ludic fallacy. I fail, sorry.)

Overall this seems like a form of Goodhart’s Law, where any attempt to measure something empirically risks having people optimize for your measurement in a way that makes all the unmeasurable things worse. Black swan risks are one example of an unmeasurable thing; you can’t really measure how common or how bad they are until they happen. If you focus entirely on empirical measurement, you’ll incentivize people to take any trade that improves ordinary results at the cost of increasing black swan risk later. If you want to prevent that, you need a model that includes the possibility of black swan risk – which is going to involve some theory.

Nassim Taleb has been thinking about this kind of thing his whole life and I’m sure he hasn’t missed this point. Probably we are just using terms differently. But I do think the way he uses terms minimizes concern about this type of error, and I do worry the damage can sometimes be pretty large.

IV.

I previously mentioned that The Black Swan seems to stand in the tradition of other rationality books like Thinking Fast And Slow and The Signal And The Noise. Is this a fair analysis? If so, what do we make of this tradition?

While Taleb has nothing but praise for eg Kahneman, his book also takes a very different tone. For one thing, it’s part-autobiography / diary / vague-thought-log of Taleb, who is a very interesting person. I read some reviews saying he “needed an editor”, and I understand the sentiment, but – does he? Yes, his book is weird and disconnected. It’s also really fun to read, and sold three million copies. If people who “need an editor” often sell more copies than people who don’t, and are more enjoyable, are we sure we’re not just arbitrarily demanding people conform to a certain standard of book-writing that isn’t really better than alternative standards? Are we sure it’s really true that you can’t just stick several chapters about the biography of a fake Russian author into the middle of your book for no reason, without admitting that it’s fake? Are you sure you can’t insert a thinly-disguised version of yourself into the story about the Russian author, have yourself be such a suave and attractive individual that she falls for you and you start a torrid love affair, and then make fun of her cuckolded husband, who is suspiciously similar to the academics you despise? Are you sure this is an inappropriate thing to do in the middle of a book on probability? Maybe Nate Silver would have done it too if he had thought of it first.

Also sort of surprising: Taleb hates nerds. He explains:

To set the terminology straight, what I call “a nerd” here doesn’t have to look sloppy, unaesthetic, and sallow, and wear glasses and a portable computer on his belt as if it were an ostensible weapon. A nerd is simply someone who thinks exceedingly inside the box. Have you ever wondered why so many of these straight-A students end up going nowhere in life while someone who lagged behind is now getting the shekels, buying the diamonds, and getting his phone calls returned? Or even getting the Nobel Prize in a real discipline (say, medicine)? Some of this may have something to do with luck in outcomes, but there is this sterile and obscurantist quality that is often associated with classroom knowledge that may get in the way of understanding what’s going on in real life. In an IQ test, as well as in any academic setting (including sports), Dr. John would vastly outperform Fat Tony. But Fat Tony would outperform Dr. John in any other possible ecological, real-life situation. In fact, Tony, in spite of his lack of culture, has an enormous curiosity about the texture of reality, and his own erudition—to me, he is more scientific in the literal, though not in the social, sense than Dr. John.

Going after nerds in your book contrasting Gaussian to power law distributions, with references to the works of Poincaré and Popper, is a bold choice. It also separates Taleb from the rest of the rationality tradition. I interpret eg The Signal And The Noise as pro-nerd. Its overall thesis is “Ordinary people are going around being woefully biased about all sorts of things. Good thing that bright people like Nate Silver can use the latest advances in statistics to figure out where they are going wrong, do the hard work of processing the statistical signal correctly, and create a brighter future for all of us.” Taleb turns that on its head. For him, ordinary people – taxi drivers, barbers, vibrant salt-of-the-earth heavily-accented New Yorkers – are the heroes, who know what’s up and are too sensible to go around saying that everything must be a bell curve, or that they have a clever theory which proves the market can never crash. It’s only the egghead intellectuals who could make such an error.

I am not sure this is true – my last New York taxi driver spent the ride explaining to me that he was the Messiah, which seems like an error on some important axis of reasoning that most intellectuals get right. But I understand that some of Taleb’s later works – Antifragile and Skin In The Game – may address more of what he means by this. It looks like Kahneman, Silver, et al are basically trying to figure out what doing things optimally would look like – which is a very nerdy project. Taleb is trying to figure out how to run systems without an assumption that you will necessarily be right very often.

I am reminded of the example of doctors being asked probability questions, about whether a certain finding on a mammogram implies X probability of breast cancer. The doctors all get this horribly wrong, because none of them ever learned anything about probability. But after getting every question on the test wrong, they will go and perform actions which are basically optimized for correctly diagnosing and treating breast cancer, even though their probability-related answers imply they should do totally different things.
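The mammogram problem is a two-line Bayes calculation. With the figures commonly used in these studies (all assumed here: 1% prevalence, 80% sensitivity, 9.6% false-positive rate):

```python
prevalence = 0.01       # P(cancer) among women screened
sensitivity = 0.80      # P(positive test | cancer)
false_positive = 0.096  # P(positive test | no cancer)

# Bayes' theorem: P(cancer | positive) = P(positive | cancer) P(cancer) / P(positive)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_cancer_given_positive = prevalence * sensitivity / p_positive
print(f"P(cancer | positive mammogram): {p_cancer_given_positive:.1%}")  # ~7.8%
```

Doctors in these studies tend to guess somewhere around 70-80%; the counterintuitive answer is under 8%, because false positives from the healthy 99% swamp the true positives from the sick 1%.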

I see Kahneman, Tetlock, Silver, and Yudkowsky as all being in the tradition of finding optimal laws of probability that point out why the doctors are wrong, and figuring out how to train doctors to answer probability questions right. I see Taleb as being on the side of the doctors – trying to figure out a system where the right decisions get made whether anyone has a deep mathematical understanding of the situation or not. Taleb appreciates the others’ work – you have to know something about probability before you can discuss why some systems tend towards getting it right vs. getting it wrong – but overall he agrees that “rationality is about winning” – the doctor who eventually gives the right treatment is better than a statistician who answers all relevant math questions correctly but has no idea what to do.

Relatedly, I think Taleb’s critique of nerds works because he’s trying to resurrect a Greco-Roman concept of the intellectual – arete and mens sana in corpore sano and all that – and clearly uses “nerd” to mean everything about modern faux-intellectuals that falls short of his vision. Thales cornering the market on olive presses is his kind of guy, and he doesn’t think that all of the people who have rage-induced seizures when he whispers the phrase “power law distribution” in their ears really cut it. His book is both a discussion of his own area of study (risk), and a celebration of and guide to what he thinks intellectualism should be. I might have missed the section of Marcus Aurelius where he talks about how angry Twitter rants are a good use of your time, but aside from that I think the autobiographical parts of the book make a convincing aesthetic argument that Taleb is living the dream and we should try to live it too.

Perhaps relating to this, of Taleb, Silver, Tetlock, Yudkowsky, and Kahneman, Taleb seems to have stuck around longest. All of them continue to do great object-level work in their respective fields, but it seems like the “moment” for books about rationality came and passed around 2010. Maybe it’s because the relevant science has slowed down – who is doing Kahneman-level work anymore? Maybe it’s because people spent about eight years seeing if knowing about cognitive biases made them more successful at anything, noticed it didn’t, and stopped caring. But reading The Black Swan really does feel like looking back to another era when the public briefly became enraptured by human rationality, and then, after learning a few cool principles, said “whatever” and moved on.

Except for Taleb. I’m excited to see he’s still working in this field and writing more books expanding on these principles. I look forward to reading the other books in this series.