In the Age of Reason, we usually think objectivity is better than subjectivity. When we speak subjectively, we say more about ourselves than about anything else, telling how something makes us feel or revealing unintentionally how our bias filters our perception. Either way, subjective thinking seems self-indulgent, since we assume that few people are interested in our emotional states, that we should save our personal confessions for those who are part of our private life. By contrast, we think objectivity is a noble, eminently practical and even selfless discipline. Critical thinkers, mathematicians, and scientists are the knights of rationality, circumventing their personal preferences to understand the real world, the one that doesn’t depend on how we feel about it.

Take, for example, our taste in politics, art, or food. In these cultural areas, there are no objectively correct answers. George Lakoff, Jonathan Haidt, and others have shown that liberals and conservatives have different gut reactions to moral questions. Different kinds of people prefer different things, depending on environmental pressures and on past experiences that shape the development of their neural circuits. So when someone who prefers Thai food speaks about it, she’s expressing herself rather than talking just about the independent reality of Pad Thai. In fact, when we think objectively, we usually think the real world is value-neutral, that how things seem to us, as interpreted and filtered by our memories and moods, is an illusion. Fundamentally, the world is very different from how it seems to us when we’re mentally processing it, because we project meaning, purpose, and value onto everything—unless we’re attempting to be objective. We anthropomorphize impersonal regularities and even random fluctuations like the shapes of clouds, because we normally prefer to be social even when there’s no one else in sight.

How, though, do we learn the objective facts? We might think that objectivity is a matter of quieting the internal noise we generate, to let the real world speak for itself, as it were, as though logic and science were comparable to Buddhist meditation. But this isn’t how objectivity works. Were you to silence your inner narrative, to ignore your intuitions and dispositions, and then to look around at the world, you wouldn’t suddenly behold the mind-independent facts. On the contrary, your brain would subconsciously process information carried in light rays and in the vibration of air molecules, for example, producing the apparent world you perceive. The brain automatically transduces ambient signals into neural patterns that stand in as mental representations of the world. These representations are so interconnected that we move easily from one thought to an associated one, and are able to overlay our value-laden mental map onto the real world, which is why quieting the mind is such a challenge. Moreover, without our concepts for classifying things, we wouldn’t understand our sensations.

This was the main point of Immanuel Kant’s philosophy of knowledge. We don’t deal directly with mind-independent reality, because wherever we go we carry with us ourselves and thus our filters, habits, methods, and so forth. Even objectivity is a kind of knowledge, and knowledge doesn’t just fall out of the sky but lies at the end of a mental process. There would still be a world were there no living things, but that world would differ even from how it’s objectively represented. The world doesn’t speak for itself, after all; instead, it speaks through us, regardless of whether we’re being subjective or objective.

Logic, Math, and Science

Let’s look closer at what we bring to the table with objectivity. The simplest way to be objective is to follow the rules of critical thinking, to argue rather than just to assert our opinions. When we argue, we prove not just to others but to ourselves that our beliefs are rationally justified. We fortify our inferences, thinking only along broad paths, as it were, paths which we’ve found to be highly reliable routes to reality. In fact, the rules of deductive and inductive reasoning reflect the most general patterns to which events seem to conform, although to the extent that natural systems are chaotic (nonlinear, unstable, involving feedback loops), some other form of logic might be needed. For example, the law of noncontradiction says that nothing at all can both have some property and also lack that property. At least in everyday experience, we don't observe a violation of this law, but this doesn’t mean that nature speaks our language, that everything follows this law.

At best, whatever nature is doing, its processes correspond with our best ways of thinking. There’s still a gap there, because logic is a set of rules for thinking, not for being. Logic governs our thinking, but only ideally so, since we’re free to be irrational. Natural processes just happen, and so at best they coincidentally rather than inherently follow the ideals we set for cognition. By the Anthropic Principle, we can surmise that if the world didn’t agree with our thinking, if we couldn’t make use of our sensory information, we wouldn’t be long for the world and so there would be no such coincidence to explain. Moreover, the world appears logical because our concepts are flexible. Although no two eggs are exactly alike, for example, we classify them all under the same heading, because their differences are negligible for our purposes. Thus, we can enter “egg” into a deductive argument and know for certain that if all eggs are small and that thing over there is an egg, then that thing over there is small. Conversely, when we think of a unique individual, we ignore the thing’s interconnectedness with other things. Logic sweeps under the rug nature’s fundamental lack of rational divisions, as revealed by the infinite variety and interconnectedness of natural processes, so that, strictly speaking, all of our classifications and rational judgments are fictions.
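The egg syllogism can be rendered as a toy program, which also makes the essay’s point about flexible concepts concrete: the predicate “small” collapses every egg’s real differences under one arbitrary threshold. The sample objects, the predicates, and the 10 cm cutoff are all invented here for illustration.

```python
# A toy rendering of the syllogism: all eggs are small;
# that thing over there is an egg; therefore, it is small.
# The universe of objects and the predicates are hypothetical.

universe = [
    {"name": "egg_1", "kind": "egg", "size_cm": 6},
    {"name": "egg_2", "kind": "egg", "size_cm": 5},
    {"name": "boulder", "kind": "rock", "size_cm": 300},
]

def is_egg(thing):
    return thing["kind"] == "egg"

def is_small(thing):
    # An arbitrary cutoff: the concept ignores each egg's unique size.
    return thing["size_cm"] < 10

# Premise 1: all eggs are small (checked against our toy universe).
all_eggs_small = all(is_small(t) for t in universe if is_egg(t))

# Premise 2: "that thing over there" is an egg.
that_thing = universe[0]

# Conclusion: follows deductively once both premises hold.
if all_eggs_small and is_egg(that_thing):
    print(f"{that_thing['name']} is small: {is_small(that_thing)}")
```

The deduction is certain only because the classification has already swept the eggs’ differences under the rug, which is the fictional tidiness the paragraph above describes.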

A more advanced form of objectivity is mathematical problem-solving, which uses highly abstract concepts. Math began as a relatively crude means of keeping track of land and food ownership for tax and other bureaucratic purposes. Numbers, systems of counting, algorithms, and many other tools of calculation were invented to make this kind of thinking more efficient. As the physicist Eugene Wigner pointed out in 1960, math is unreasonably effective in the sciences. Even the most exotic mathematical idealizations have been applied in physics. For example, Bernhard Riemann’s work on non-Euclidean geometry was used in Einstein’s theory of general relativity, and string theory includes mathematically precise ways of conceiving of extra dimensions. The reason this usefulness of math is “unreasonable” or mysterious is that there are no mathematical entities in the apparent world. For example, we never encounter a perfectly straight line, so again the idea of one is as fictional and idealistic as Harry Potter.

The more abstract the idea, the more concrete details it ignores and so the more unrealistic the idea becomes. Math is full of such unrealistic, otherworldly ideas. Although people have put math to work for thousands of years, mathematical precision can also be misleading. From Plato onwards, philosophers, mathematicians, and religious people generally have wondered whether the abstract worlds we imagine are more real than the material domain through which our bodies slog. Neoclassical economists drew up idealized models of how markets work, ignoring a great many concrete realities, and faith in the utility of those models is largely to blame for the Great Recession of 2008, when the reality of the American real estate market was unveiled. And turning to physics, string theory is so remote from experience that the theory seems unfalsifiable. So contrary to Wigner, what we gain in precision from mathematical idealizations, we may ultimately lose in applicability. Objectivity can become so abstract and the rules for working with the tools of rationality so arbitrary, that our supposedly fact-based thoughts become hard to distinguish from fantasies.

Combining logic and math, we have the scientific methods themselves for pinning the world down and forcing it to reveal its hidden nature. Scientists divide and conquer: they break the world into parts to study simplified sections of the natural plenum and to test their hypotheses; that is, scientists isolate a process and control for what are assumed to be irrelevant background properties, and they check whether a suspected causal connection obtains. Causal relations are thus broken down into mechanisms, which are systems made of parts that work together to form larger systems, and a scientific theory lays out some of these interrelations. And so scientists famously bypass the evolutionary limits of our thinking and seem to let the world speak for itself, one fragment at a time.

But science is still a dialogue with the world, not a monologue. A scientific theory is partly a model which offers a simplified representation of some process, like a blueprint, and partly an algorithm for generating a predicted result. The Newtonian paradigm of explanation assumes some initial conditions and relevant forces, and uses equations to deduce how the system evolves to reach a certain state. In addition to both the model for separating the relevant from the irrelevant properties and the mathematical apparatus for measuring a phenomenon, there’s a tradition of interpreting the theory, of translating it into commonsense terms, using metaphors to bring our intuitions and everyday experience to bear. This kind of interpretation is a concession to subjectivity. Quantum mechanics shows us the difference between these aspects of explanation, since in that theory our ability to quantify takes precedence over our interest in assimilating the phenomenon.
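The Newtonian pattern of explanation described above—initial conditions plus forces, run through equations to deduce a later state—can be sketched with a single falling body. This is a minimal illustration, not any particular scientist’s model; the mass-free setup, the 20 m drop, and the step size are arbitrary choices.

```python
# Deducing a later state from initial conditions and a force law,
# in the Newtonian style: one body falling under constant gravity.
# The height (20 m) and step size are arbitrary illustrations.
import math

g = 9.81      # gravitational acceleration, m/s^2
dt = 0.0001   # time step for the stepwise deduction, s

# Initial conditions: at rest, 20 m above the ground.
position, velocity = 20.0, 0.0
t = 0.0

# Step the equations of motion forward until the body lands.
while position > 0.0:
    velocity -= g * dt
    position += velocity * dt
    t += dt

# The same theory also yields a closed-form prediction to compare
# against: fall time = sqrt(2h/g) for h = 20 m.
predicted_t = math.sqrt(2 * 20.0 / g)
print(f"simulated fall time: {t:.3f} s, closed form: {predicted_t:.3f} s")
```

Both routes deduce the same final state, which is the sense in which a theory is partly “an algorithm for generating a predicted result”: feed in the initial conditions and the forces, and the later state falls out.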

In science, then, objectivity is paramount, but even scientific objectivity requires human labour which impinges on what scientists are trying to understand. The world isn’t really divided into mechanisms. Mechanistic models are simplifications made necessary by our inability to observe, let alone understand, everything at once. What scientists explain directly is how artificially-isolated parts of the world behave, and scientists infer that the observed patterns hold more generally, on the condition that the irrelevant properties—those in which we’re less interested—don’t interfere too much with the interesting ones. This is to say that scientific conclusions are probabilistic, because the procedures by which scientists achieve their objectivity perturb the phenomenon they’re trying to understand, and a natural setting can only approximate the artificial one they isolate. Again, this comes to a head in quantum mechanics, with the Uncertainty Principle, but all of science is similarly limited. As in quantum mechanics, it’s as though the scientist were using a microscope to observe a minuscule specimen on a slide, but she’s unable to use the instrument without placing part of her finger on the slide and obstructing her view. When scientists control for irrelevant variables, they get in their own way through their manipulations, but without that labour there would be no piecemeal learning about nature and thus no scientific objectivity.

So objectivity isn’t the suspension of our distinguishing features to let the world speak for itself; instead, it’s a synergy between us and the world. We ignore parts of ourselves and parts of the frontiers of experience, to learn to overpower a fragment of the undivided and thus inconceivable wilderness that is the whole universe. Our reliable generalizations, mathematical idealizations, simplifying models, and divisive experiments are tools that have to modify a natural process to extract the facts we want. Those tools engage with the world as we find it and by doing so they effectively brand the facts that they uncover, which ought to remind us of the sophisticated work we inevitably perform as we try to understand nature without the dubious aid of commonsense or of our evolved biases. We can learn about the real world, the one that doesn’t depend on our interpretations, but only by objectifying reality, which means breaking it down into manageable parts and using intermediary processes as tools to divine the world’s contours. Even when the pristine cosmos follows logical patterns and behaves as predicted by our theories, those fruits of objectivity are grown in part by the seeds we sow. Objectivity isn’t just a straitjacket for our personal preferences, but is a trick we play on Mother Nature to force her to reveal her secrets. As with any magic trick, we employ a hidden apparatus, but it’s a poor magician who—like her audience—comes to lose sight of the tools of her trade.

Why aren’t more scientists nihilistic?

We should think of subjectivity and objectivity as processes of humanization and of objectification. In a social context, we reinforce the uniqueness of our personalities and the partialities of human nature, making use of our tools and machines according to the functions we set for them, and when we encounter the untouched wilderness, we humanize it by transforming it into an artificial world that flatters us by making us feel central. By contrast, when we rationally inquire into the nature of something, we hedge it in, ignoring its concrete uniqueness and going over its head, as it were, with the guidelines we project onto the world. We isolate, manipulate, or dissect the object to judge where it belongs in the natural order. And when we encounter a subject rather than an object, we dehumanize it, using the objective mindsets of economics, government bureaucracy, market research, public relations, the military, the sex industry, and so forth.

Why, though, do the most objective people tend not to be cynical nihilists, despite their command of objectification which counteracts their instinct to humanize the world? To be sure, some elites in the more objective industries may well be nihilistic, but whereas scientists are the most famously objective persons, it turns out that in the U.S. at least, scientists are much more liberal than the average American. Far from worrying about the likelihood that cognitive scientists will thoroughly dehumanize us as they come to understand how exactly the mind fits into nature, American scientists are boldly progressive, holding to values that assume our dignity as persons. How can this be? The philosopher Alex Rosenberg, who calls himself a nihilist, wonders why scientists aren’t more up-front with the public about the implications of their naturalistic worldview. He offers a number of institutional reasons, such as the fact that scientists are trained to be cautious and they don’t want to alarm people and lose the public funding for their research. But these sorts of reasons don’t explain why scientists themselves retain commonsense notions of subjectivity and morality. Shouldn’t the suicide rate be relatively high among scientists? How, instead, do scientists so easily leave aside their objectifications as soon as they exit their laboratory? How can they be so adept at both humanization and objectification? Shouldn’t expertise in the latter make us embarrassed to engage in the former?

For a telling example, take the super-intelligent character Sheldon Cooper from the comedy TV show The Big Bang Theory. Why do the millions of fans of that show love to laugh at Sheldon? Clearly, because the combination of his autism and his genius for objectification makes him childlike, and so, despite his godlike intelligence, he can’t fit into adult society. But who are the viewers fooling? When we objectify, our consciousness, moral worth, and other personal qualities seem to disappear; they pass straight through our net of abstractions and quantifications. For centuries now, scientists have been undermining commonsense notions. Thus, it looks like objects are more real than subjects, like personhood is only an illusion and the universe is just the block of impersonal forces and chunks of matter that scientists have come to understand. If that’s a reasonable fear for scientists who spend their professional life confirming the universe’s materiality, given their methodological naturalism, the question is why more scientists aren’t as impersonal and socially awkward as Sheldon Cooper. Isn’t the self-indulgent subjectivity of their private life hypocritical, given the undeniable power of their collective scientific work?
