
David Aldous, January 2009.

Preamble. On a first reading I was unable to organize my thoughts well enough to compose a review. Eighteen months later, motivated slightly by Taleb's rise to celebrity guru status but more by hearing the young man behind the counter of my local cheese shop give his opinions of the book, I was inspired to actually finish a review. There are of course other reviews by academics -- see e.g. the three American Statistician articles by Lund, by Westfall and Hilbe, and by Brown, and Andrew Gelman's blog -- but mine is aimed more at conceptual than technical issues.

Taleb, Nassim Nicholas. The Black Swan: The impact of the highly improbable. Random House, 2007.

A "Black Swan" is defined as an event characterized [p. xviii] by rarity, extreme impact, and retrospective (though not prospective) predictability, and Taleb's thesis is that such events have much greater effect, in financial markets and the broader world of human affairs, than we usually suppose. The book is challenging to review because it requires considerable effort to separate the content from the style. The style is rambling and pugnacious -- well described by one reviewer as "with few exceptions, the writers and professionals Taleb describes are knaves or fools, mostly fools. His writing is full of irrelevances, asides and colloquialisms, reading like the conversation of a raconteur rather than a tightly argued thesis." And clearly this is perfectly deliberate. My own overall reaction is that Taleb is sensible (going on prescient) in his discussion of financial markets and in some of his general philosophical thought, but tends toward irrelevance or ridiculous exaggeration otherwise. Let me run through some discussion topics, first 6 where I broadly agree with Taleb, then 6 where I broadly disagree, then 4 final thoughts.

(1) [p. 286] The sterilized randomness of games does not resemble randomness in real life; thinking it does constitutes the Ludic Fallacy (his neologism). This is exactly right, and mathematicians should pay attention. In my own list of 100 instances of chance in the real world, exactly 1 item is "Explicit games of chance based on artifacts with physical symmetry -- exemplified by dice, roulette, lotteries, playing cards, etc".

(2) Taleb is dismissive of prediction and models (explicitly in finance and econometrics, and implicitly almost everywhere). For instance, [p. 138] Why on earth do we predict so much? Worse, even, and more interesting: why don't we talk about our record in predicting? Why don't we see how we (almost) always miss the big events? I call this the scandal of prediction. And [p. 267] In the absence of a feedback mechanism [not making decisions on the basis of data] you look at models and think they confirm reality. He's right; people want forecasts in economics, and so economists give forecasts, even knowing they're not particularly accurate. Also, the culture of academic research in numerous disciplines encourages theoretical modeling which is never seriously compared with data.

(3) Taleb is scathing about stock prediction models based on Brownian motion (Black-Scholes and variants) and about the whole idea of measuring risk by standard deviation: [p. 232] You cannot use one single measure for randomness called standard deviation (and call it "risk"); you cannot expect a simple answer to characterize uncertainty. And [p. 278] if you read a mutual fund prospectus, or a description of a hedge fund's exposure, odds are that it will supply you ... with some quantitative summary claiming to measure "risk". That measure will be based on one of the above buzzwords [sigma, variance, standard deviation, correlation, R square, Sharpe ratio] derived from the bell curve and its kin ... If there is a problem, they can claim that they relied on standard scientific method.
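His point about standard deviation can be made concrete with a toy simulation of my own (the numbers and the two "strategies" are invented for illustration, not taken from the book): two daily-return streams with essentially the same standard deviation, one benignly Gaussian, the other mostly calm but carrying a rare large crash.

```python
import random
import statistics

rng = random.Random(2)
n = 1000  # roughly four years of daily returns

# Strategy A: mild Gaussian daily returns (Mediocristan).
a = [rng.gauss(0.0005, 0.01) for _ in range(n)]

# Strategy B: small steady gains punctuated by a rare crash of -7%,
# with the crash probability tuned so that B's standard deviation
# roughly matches A's.
b = [0.001 if rng.random() > 0.02 else -0.07 for _ in range(n)]

print("std A:", round(statistics.pstdev(a), 4))
print("std B:", round(statistics.pstdev(b), 4))
print("worst day A:", round(min(a), 4))
print("worst day B:", round(min(b), 4))
```

Strategy B reports almost the same "risk" number as strategy A, yet its worst day is far worse -- exactly the kind of exposure that a single sigma hides.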

(4) Ask someone what happened in a movie they've just watched; their answer will not be just a list (this happened; then this happened; then this happened ...) but will also give reasons (he left town because he thought she didn't love him, ...). We habitually think about the past in this way -- events linked by causal explanations -- partly to make it easier to remember. As Taleb writes [p. 73] narrativity causes us to see past events as more predictable, more expected, and less random than they actually were ... . This Narrative Fallacy observation is well worth keeping in mind when seeking to use the past to predict the future.

(5) Chapter 3 introduces neologisms Mediocristan and Extremistan for settings where (in technical statistics terms) outcomes do [do not] have finite variance. His writing is lively and memorable, and his examples are apposite, so that it would make a useful reading accompaniment to a technical statistics course (though as indicated below I disagree with his interpretation of the relative significance of the two categories).
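For readers who want to see the distinction rather than take it on faith, here is a small simulation of my own (the particular distributions and parameters are illustrative choices, not Taleb's): in Mediocristan no single observation contributes much to a total, while in Extremistan a single observation can dominate.

```python
import random

def max_share(sample):
    """Share of the sample's total contributed by its single largest value."""
    return max(sample) / sum(sample)

rng = random.Random(0)
n = 10_000

# Mediocristan: human heights in cm -- roughly Gaussian, finite variance.
heights = [rng.gauss(170, 10) for _ in range(n)]

# Extremistan: wealth -- Pareto with alpha = 1.1 < 2, hence finite mean
# but infinite variance.
wealth = [rng.paretovariate(1.1) for _ in range(n)]

print("tallest person's share of total height:", max_share(heights))
print("richest person's share of total wealth:", max_share(wealth))
```

The tallest of ten thousand people contributes a negligible sliver of the total height; the richest of ten thousand Pareto "fortunes" routinely contributes a visible chunk of all the wealth.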

(6) Given that Taleb's thesis is already well expressed by the bumpersticker "Expect the unexpected", what more is there to say? Well, actually he makes several memorable points, such as his summary [p. 50] of themes related to Black Swans:

(a) We focus on preselected segments of the seen and generalize from it to the unseen: the error of confirmation.

(b) We fool ourselves with stories that cater to our Platonic thirst for distinct patterns: the narrative fallacy.

(c) We behave as if the Black Swan does not exist; human nature is not programmed for Black Swans.

(d) What we see is not necessarily all that is there. History hides Black Swans from us [if they didn't happen] and gives a mistaken idea about the odds of these events: this is the distortion of silent evidence.

(e) We "tunnel": that is, we focus on a few well-defined sources of uncertainty, on too specific a list of Black Swans (at the expense of others that do not come so readily to mind).

And here is his investment strategy [p. 295-6].

Half the time I am hyperconservative in the conduct of my own [financial] affairs; the other half I am hyperaggressive. This may not seem exceptional, except that my conservatism applies to what others call risk-taking, and my aggressiveness to areas where others recommend caution. I worry less about small failures, more about large, potentially terminal ones. I worry far more about the "promising" stock market, particularly the "safe" blue chip stocks, than I do about speculative ventures -- the former present invisible risks, the latter offer no surprises since you know how volatile they are and can limit your downside by investing smaller amounts ... In the end this is a trivial decision making rule: I am very aggressive when I can gain exposure to positive Black Swans -- when a failure would be of small moment -- and very conservative when I am under threat from a negative Black Swan. I am very aggressive when an error in a model can benefit me, and paranoid when an error can hurt. This may not be too interesting except that it is exactly what other people do not do. In finance, for instance, people use flimsy theories to manage their risks and put wild ideas under "rational" scrutiny.

Maybe not easy for you or me to emulate, but surely conceptually useful for us to keep in mind.

Criticisms

(7) Extremistan is sometimes dramatic; Mediocristan is never dramatic. But this has no necessary connection with quantitative impact.

Our minds focus on variability. Extremistan is, by definition, more variable than Mediocristan, so it attracts relatively more of our attention. But this has no necessary connection with quantitative impact.

(8) In other words the whole Extremistan metaphor, suggesting a country in which everything is ruled by power laws, is misleading. A better metaphor is an agora, a marketplace, which is a useful component of a city but is surrounded by other components with different roles. This provides a segue to a quotable proclamation of my own.

Financial markets differ from casinos in many ways, but they are almost equally unrepresentative of the operation of chance in other aspects of the real world. Thinking otherwise is the Agoran fallacy.

Most of the differences in life experience from one generation to the next are the cumulative results of slow changes which do not have much impact on a typical individual and which therefore we don't pay much attention to. Of course in the long term the nature, time of origination and duration of slow trends are unpredictable -- but it is this, not Black Swans, which actually constitutes long-term unpredictability.

(9) The word prediction has a range of meanings. Stating "Microsoft stock will rise about 20% next year" is a deterministic prediction, whereas stating your opinion about the stock's performance as a probability distribution is a statistical prediction. Any attempt by a reader to make more precise sense of Taleb's rhetoric about prediction requires keeping firmly in mind which meaning is under discussion, since Taleb isn't careful to do so. For instance, Taleb discusses [p. 150] data showing that security analysts' predictions are useless, as if this were a novel insight of his. But in this setting he is talking about deterministic prediction, and he is just repeating a central tenet of 30 years of academic theory (the efficient market hypothesis and all that), not to mention the classic best-seller "A Random Walk Down Wall Street". On the other hand, the standard mathematical theory of finance starts with some statistical assumption -- that prices will move like Brownian motion or some variant. Taleb's criticisms of this theory -- that it ignores Black Swans, and that future probabilities are intrinsically impossible to assess well -- have considerable validity, but he doesn't make sufficiently clear the distinction between this and traditional stockbroker advice.

(10) A book on (say) the impact of Empires on human history might be expected to contain an explicit list of entities the author considered as Empires; that way, a reader could assess any asserted generality about empires by pondering whether it applied to at least most empires on the list. Similarly, one might expect this book to contain some explicit list of past events the author considered Black Swans (here I am thinking of unique Black Swans, not Gray Swans). But it doesn't; various instances are certainly mentioned, but mostly via asides and anecdotes. If you read the book, extracted the mentioned instances, and then read it again to see how much of the material was directly relevant to most of the listed Black Swans, you would find it a very small proportion. In other words, the summary (6) of Taleb's views is interesting, but instead of expanding the summary into more concrete and detailed analysis, the book rambles around scattered philosophical thoughts.

(11) The style of Taleb's philosophizing can be seen in the table [p. 284] "Skeptical Empiricism vs Platonism", where he writes a column of ideas that he explicitly identifies with, and contrasts it with another column that no-one would explicitly identify with. This is straw man rhetoric. Indeed much of the book is rhetoric about empiricism, with a remarkable lack of actual empiricism, i.e. rational argument from data.

(12) This love of rhetoric causes Taleb to largely ignore what I would consider interesting philosophical questions related to Black Swans. Here are two such. First, there are a gazillion things we might think about during a day, but (unlike a computer rebooting) we don't wake up, run through the gazillion, and consciously choose which to actually think about. For obvious reasons, in everyday life we never confront the underlying issue:

what comes to one's conscious attention as matters one might want to think about?

Second, it is easy to cite, say, [p. xviii] the precipitous demise of the Soviet bloc as having been unpredictable, but what does this mean? If you had asked an expert in 1985 what might happen to the USSR over the next 10 years -- "give me a range of possibilities and a probability for each" -- then they would surely have included something like "peaceful breakup into constituent republics" and assigned it some small probability. What does it mean to say such a prediction is right or wrong? In 2008, the day before John McCain was scheduled to announce his VP choice, the Intrade prediction market gave Sarah Palin a 4% chance. Was this right or wrong? Unlikely events will sometimes happen just by chance. Taleb's whole thesis is that experts and markets do not assess small probabilities correctly, but he supports it with anecdote and rhetoric, not with data and analysis.

Four final thoughts

(14) Taleb often seems to imagine that the views he disagrees with come from some hypothetical FINANCIAL MATH 101 course, though in one case it was an actual course: [p. 278] It seemed better to teach [MBA students at Wharton] a theory based on the Gaussian than to teach them no theory at all. It is easy to criticise introductory courses in any subject as concentrating on some oversimplified but easy-to-explain theory which is not so relevant to reality (e.g. many introductory statistics courses exaggerate the relevance and scope of tests of significance; physics courses say more about gravity than about friction). It is much harder to rewrite such a course to make it more realistic without degenerating into vague qualitative assertions or scattered facts.

(15) I am always puzzled that writers on financial mathematics (Taleb included) tend to ignore what strikes me as the most important insight that mathematics provides. Common sense and standard advice correctly emphasize a trade-off between short term risk and long term reward, but implicitly suggest this spectrum goes on forever. But it doesn't. At least, if one could predict probabilities accurately, there is a "Kelly strategy" which optimizes long-term return while carrying a very specific (high but not infinite) level of short term risk, given by the remarkable formula

with chance 50% [or 25% or 10%] your portfolio value will sometime drop below 50% [or 25% or 10%] of its initial value.
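That formula can be checked numerically. Here is a sketch of my own (the function name and parameters are illustrative choices, not from any source): model log-wealth as a random walk whose drift per unit time equals half its variance -- the defining property of continuous-time Kelly betting -- and estimate the chance that the portfolio ever falls below a given fraction of its starting value.

```python
import math
import random

def kelly_drawdown_prob(frac, sigma=1.0, n_paths=2000, n_steps=1500, dt=0.02, seed=1):
    """Monte Carlo estimate of the chance a Kelly-optimal portfolio
    ever drops below `frac` of its initial value.

    Log-wealth is simulated as a random walk with drift sigma^2/2 per
    unit time and volatility sigma -- the dynamics of continuous-time
    Kelly betting.  Theory says the answer is exactly `frac`.
    """
    rng = random.Random(seed)
    barrier = math.log(frac)          # log-wealth level to breach
    drift = 0.5 * sigma * sigma * dt  # Kelly growth rate per time step
    vol = sigma * math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        logw = 0.0
        for _ in range(n_steps):
            logw += drift + vol * rng.gauss(0.0, 1.0)
            if logw <= barrier:       # drawdown below frac: stop this path
                hits += 1
                break
    return hits / n_paths

estimates = {frac: kelly_drawdown_prob(frac) for frac in (0.5, 0.25, 0.1)}
print(estimates)
```

Each estimate should come out close to the fraction itself (slightly below, since discrete time steps can jump over the barrier), matching the stated 50%/25%/10% rule.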

(16) My own investment philosophy, as someone who devotes 3 hours a year to his investments, is