The Stone is a forum for contemporary philosophers and other thinkers on issues both timely and timeless.

Here is something that might be worth selling your soul for: a magic black box that accurately answers any question you have about things to come. Who will be president in four years’ time? At what prices will the stock market close next Friday? What will be the precise position of the planets at midnight on that same day? When will I die?

You might think that with such a black box, life would be complete. Used intelligently, it would surely get you as much material and social success as you could bear to enjoy. But something would be missing.

The nature of that something is revealed by a certain temptation, a niggling urge to make a move that from a practical perspective seems idiotic: to take apart the amazing black box to see how it works. Dismantling it, you might destroy it; or at least, you might not be able to put it back together again. Still, the prick of curiosity pushes you on as you gingerly unscrew the top of the box to peer inside — just as a child might take apart a remote-controlled toy to see what internal wires were connected to what — a toy that never quite worked again.

The disappointed child and the deconstructed black box show that there is something we value over and above predictive power: understanding. Science gives you the practical and the profound goods at the same time. For the more than $10 billion it took to discover the Higgs boson using the 17-mile-long Large Hadron Collider in Switzerland, we got both insight into the nature of things — why particles have mass — and the promise, if somewhat distant, of many new technologies to come. Just how much of the ten billion was spent on pure understanding, and how much on more practically useful discoveries? It’s a crazy question. But sometimes, as in the case of the black box and the broken toy, understanding and usefulness split: you get one without the other. Which raises the question: Why do we want something beyond predictive power? Why is understanding the way the world works valuable in itself, above and beyond the engineering know-how that it brings?

To sort out these matters, begin with a question that we philosophers have been asking for eons: What is understanding? What are you after as you poke around inside the magic black prediction box? Or as you fire up the Large Hadron Collider, or sequence the chimpanzee genome, or question college students about fake biological scenarios to inquire into the nature of concepts?

A simple but plausible answer given by contemporary philosophers of science is as follows: to understand a phenomenon is to grasp how the phenomenon is caused.

To understand the northern lights, for example, is to understand how charged particles in the solar wind are guided to the earth’s magnetic poles, where, colliding energetically with oxygen and nitrogen molecules, they cause ionization that results in the emission of light. The guiding, the colliding, the ionizing, the emission are all causal processes; to see how these processes unfold is to understand the aurora.


Another example: to understand the black box’s magic is to know how it generates its predictions — to be familiar with the complex machinery that transforms the questions it accepts as inputs into the astoundingly accurate answers it produces as outputs. It is to gain such knowledge that you feel the urge to look inside.

We might learn the practical function of the quest for understanding, then, by learning the practical use of information about causal structure. On the face of things, it seems clear: if you know causal structure, you can (when conditions are right) make correct predictions. The knowledge that confers understanding also confers predictive power. That illuminating, exhilarating feeling of seeing into the universe’s secrets, then, might be Mother Nature’s way of enticing us to persist with the often tedious examination of predictively useful causal detail. Or that is the suggestion made by the developmental psychologist Alison Gopnik, who compares the role of the thrill of understanding in enlivening causal inquiry with that of the pleasures of orgasm, presumably put in place by evolution to work up an ongoing enthusiasm for the messy and dangerous yet biologically essential business of sex.

Suppose, for example, you know that combustion causes smoke. From this causal information, you can infer that where there is fire, there is smoke (and if other causes of smoke are rare, that where there’s smoke there’s fire). That gives you the ability to make useful predictions. So far, so good.

Not so fast, say some philosophers. Do you really need to think about causality to gain such an ability? Wouldn’t it be enough to know that fire is usually accompanied by smoke? From this purely statistical belief, this non-causal regularity, it seems that you can make just the same forecasts. In 1930s Vienna, the logical positivists — a group of philosophers led by Rudolf Carnap and inspired by Einstein’s theory of relativity and other advances in 20th-century physics — developed this point and pursued a program of eliminating talk of causality and other deep explanatory relations from science, on the grounds that they make no real contribution to a scientific theory’s predictive power.

If the positivists are right, there is no particular practical reason for us to get excited about causality, let alone to promote causal understanding as the highest of intellectual goods. It would be enough for Mother Nature, intent on building master predictors, to inculcate in us all a deep and fulfilling pleasure in the contemplation of statistics.

Anecdotal evidence suggests that we do not naturally find statistics in the least pleasurable. It is explanation by way of causation, rather than correlation, that gives us a mental rush. Perhaps the positivists themselves moved too fast.

What could they have missed? Is there some more subtle practical edge that the understander, the investigator of causal relations, has over the statistician? Perhaps causal knowledge gives you information useful in controlling nature. If you know that fire causes smoke, then you know that you can get rid of the smoke by quenching the fire, but you can’t get rid of the fire by gently fanning away the smoke. This useful item of knowledge does not follow from the purely statistical regularity that fire and smoke tend to appear together.
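The asymmetry between prediction and control can be made concrete in a toy simulation (a sketch of my own, using a hypothetical two-variable causal model; nothing here comes from the essay itself). Observationally, fire and smoke move in lockstep, so a pure statistician sees a perfect correlation. But the interventions behave differently: forcing the fire out removes the smoke, while forcing the smoke away leaves the fire burning.

```python
import random

# Illustrative causal model (an assumption for this sketch):
# fire is an exogenous coin flip; smoke is caused by fire.
def sample(do_fire=None, do_smoke=None):
    fire = (random.random() < 0.5) if do_fire is None else do_fire
    smoke = fire if do_smoke is None else do_smoke  # smoke follows fire unless forced
    return fire, smoke

random.seed(0)

# Observation: fire and smoke are perfectly correlated.
trials = [sample() for _ in range(10_000)]
assert all(f == s for f, s in trials)

# Intervention 1: quench the fire, and the smoke disappears too.
assert all(not s for _, s in (sample(do_fire=False) for _ in range(100)))

# Intervention 2: fan away the smoke, and the fire keeps burning anyway.
assert any(f for f, _ in (sample(do_smoke=False) for _ in range(100)))
```

The same joint statistics, in other words, are compatible with different responses to intervention; only the causal structure tells you which lever actually works.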

Even so, a statistician can make a good firefighter. All they need to do is to learn that quenching fire is followed by the disappearance of smoke, and not vice versa. Indeed, it is often only by learning such regularities — by experimenting with fire and smoke to see what happens to one when you remove the other — that we learn about the causal relations between fire and smoke in the first place. In that case, why bother to think causally? Once you have the regularities, which you need anyway, it is extra mental work with no practical reward.

Yet inquiry into the underpinnings of things is everywhere, in every culture, even in young children. Gopnik and her collaborators’ experiments show that three- and four-year-olds are as intent as scientists on exploring the causal structure of the world around them, in figuring out how it all works. Causal curiosity seems to be a part of human nature — an inclination that has, in spite of its cost, evolved because of its even greater benefits.

Trying to figure out what those benefits might be, I have argued in my own work that thinking causally rather than statistically inclines us to go about the business of learning regularities more efficiently and fruitfully. If you are curious about the way that fire causes smoke, for example, you will try to break down the process leading from fire to smoke into sub-processes, and you will try to break down fire and smoke themselves into smaller components — chemical reactions and clouds of particles. In the course of learning the regularities that connect these smaller pieces of the puzzle, and the relations between the pieces, you will come to learn relatively quickly sophisticated generalizations about the fire/smoke connection: for example, that smoke is associated with incomplete combustion, so that you can make more smoke — for signaling purposes, perhaps — by using damp fuel.

A statistician will, if they gather enough data and compute sufficiently many correlations, eventually discover the same thing. So technically, the positivists were right — thinking causally is not essential to thinking practically. But it is far faster: the spotlight of understanding illuminates, if I am correct, the most important pieces of a mind-numbingly vast web of statistical connections, enabling us to pick out subtle aspects of patterns of events that make a difference to real lives in real time. By looking more deeply into the workings of the black box, then, we can foresee twists and turns in the behaviors of things that we might otherwise have grasped only far too late to do us any good. Lifting the lid makes us smarter as well as wiser.




Michael Strevens teaches philosophy at New York University and writes about science, understanding, complexity and causation. His most recent book is “Tychomancy.”

