Curiosity: How Science Became Interested in Everything, by Philip Ball, University of Chicago Press, 465 pp., $35

Brilliant Blunders: From Darwin to Einstein—Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe, by Mario Livio, Simon & Schuster, 341 pp., $26

Aristotle called it aimless and witless. St. Augustine condemned it as a disease. The ancient Greeks blamed it for Pandora’s unleashing destruction on the world. And one early Christian leader even pinned the fall of Lucifer himself on idle, intemperate, unrestrained curiosity.

Today, the exploration of new places and new ideas seems self-evidently a good thing. For much of human history, though, priests, politicians, and philosophers cast a suspicious eye on curious folks. It wasn’t just that staring at rainbows all day or pulling apart insects’ wings seemed weird, even childish. It also represented a colossal waste of time, which could be better spent building the economy or reading the Bible. Philip Ball explains in his thought-provoking new book, Curiosity, that only in the 1600s did society start to sanction (or at least tolerate) the pursuit of idle interests. And as much as any other factor, Ball argues, that shift led to the rise of modern science.

We normally think about the early opposition to science as simple religious bias. But “natural philosophy” (as science was then known) also faced serious philosophical objections, especially about the trustworthiness of the knowledge obtained. For instance, Galileo used a telescope to discover both the craters on our moon and the existence of moons orbiting Jupiter. These discoveries demonstrated, contra the ancient Greeks, that not all heavenly bodies were perfect spheres and that not all of them orbited Earth. Galileo’s conclusions, however, relied on a huge assumption—that his telescope provided a true picture of the heavens. How could he know, his critics protested, that optical instruments didn’t garble or distort as much as they revealed? It’s a valid point.

Another debate revolved around what now seems like an uncontroversial idea: that scientists should perform experiments. The sticking point was that experiments, almost by definition, explore nature under artificial conditions. But if you want to understand nature, shouldn’t the conditions be as natural as possible—free from human interference? Perhaps the results of experiments were no more reliable than testimony extracted from witnesses under torture.

Specific methods aside, critics argued that unregulated curiosity led to an insatiable desire for novelty—not to true knowledge, which required years of immersion in a subject. Today, in an ever-more-distracted world, that argument resonates. In fact, even though many early critics of natural philosophy come off as shrill and small-minded, it’s a testament to Ball that you occasionally find yourself nodding in agreement with people who ended up on the “wrong” side of history.

Ultimately, Curiosity is a Big Ideas book. Although Newton, Galileo, and others play important roles, Ball wants to provide a comprehensive account of early natural philosophy, and that means delving into dozens of other, minor thinkers. In contrast, Mario Livio’s topsy-turvy book, Brilliant Blunders, focuses on Big Names in science history. It’s a telling difference that whereas Ball’s book, like a Russian novel, needs an appendix with a cast of characters, Livio’s characters usually go by one name—Darwin, Kelvin, Pauling, Hoyle, and Einstein.

Livio’s book is topsy-turvy because, rather than repeat the obvious—these were some smart dudes—he examines infamous mistakes they made. He also indulges in some not-always-convincing armchair psychology to determine how each man’s temperament made him prone to commit the errors he did.

For those of us who, when reading about such thinkers, can’t help but compare our own pitiful intellects with theirs, this focus on mistakes is both encouraging and discouraging. It’s encouraging because their mistakes remind us that they were fallible, full of the same blind spots and foibles we all have. It’s discouraging because, even at their dumbest, these scientists did incredible work. Indeed, Livio argues that their “brilliant blunders” ended up benefiting science overall.

Take Kelvin’s error. During the heyday of William Thomson, Lord Kelvin, in the late 1800s, various groups of scientists had an enormous row over the age of Earth, in large part because Darwin’s theory of natural selection seemed to require eons upon eons of time. Unfortunately, geologists provided little clarity here: they could date fossils and rock strata only relatively, not absolutely, so their estimates varied wildly. Into this vacuum stepped Kelvin, a mathematical physicist who studied heat. Kelvin knew that Earth had probably been a hot, molten liquid in the past. So if he could determine Earth’s initial temperature, its current temperature, and its rate of cooling, he could calculate its age. His estimates, revised over the decades, eventually settled near 20 million years.
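Kelvin’s reasoning can be sketched as a back-of-the-envelope calculation. The formula below is the standard result for a conductively cooling half-space, age ≈ T₀² / (π κ G²), where T₀ is the initial temperature, κ the thermal diffusivity of rock, and G the measured near-surface temperature gradient. The input numbers here are illustrative stand-ins of the sort Kelvin worked with, not his exact figures; his published bounds ranged from roughly 20 to 400 million years.

```python
import math

def kelvin_age_years(t0_kelvin, kappa_m2_s, gradient_k_per_m):
    """Age of a conductively cooling half-space: t = T0^2 / (pi * kappa * G^2)."""
    age_seconds = t0_kelvin**2 / (math.pi * kappa_m2_s * gradient_k_per_m**2)
    return age_seconds / (365.25 * 24 * 3600)  # seconds -> years

# Illustrative inputs: ~3900 K initial excess temperature (molten rock),
# a typical rock diffusivity, and a gradient of roughly 1 degF per 50 ft.
age = kelvin_age_years(t0_kelvin=3900, kappa_m2_s=1.2e-6, gradient_k_per_m=0.037)
print(f"about {age/1e6:.0f} million years")  # on the order of 10^8 years
```

With these inputs the answer comes out near 100 million years—vastly younger than the modern figure of 4.5 billion, because the model ignores radioactive heating and convection in the mantle, neither of which was known in Kelvin’s day.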

For various reasons, Kelvin’s answer fell short by two orders of magnitude (the current estimate is 4.5 billion years). Worse, Kelvin used his calculation to bash Darwinism, a vendetta that ended up tarnishing his reputation. Nevertheless, his precisely quantified arguments forced geologists to sharpen their own work in order to rebut him, and eventually they too began to think of Earth as a mechanical system. A nemesis can bring out the best in people, and Kelvin’s mistake proved a net good for science.

Ball’s and Livio’s books help answer an important question: why bother reading science history? Scientists themselves, after all, are notoriously uninterested in the subject, probably for good reason. Science proceeds by discarding unworkable ideas, and every hour spent poring over arcane theories is time not spent refining your own experiments. But as Ball points out, old debates have a way of reemerging in modern guises. For instance, early objections to natural philosophy—the “unnatural” experiments, the prodigal waste of money on expensive toys—echo modern objections to, say, genetically modified food and the Large Hadron Collider.

Similarly, Livio shows how Einstein’s blunder has risen, phoenixlike, in recent years. In forming his theory of general relativity, Einstein added a so-called cosmological constant to his field equations: a repulsive force that countered gravity and (somewhat like air pressure) kept the universe from collapsing in upon itself. Einstein later struck the constant out, discarding it as ugly, ad hoc, and unnecessary. But two teams of scientists resurrected it in the 1990s to explain why our universe is not merely expanding but expanding at an accelerating rate. On cosmic scales, Einstein’s once-discarded constant may be the dominant force in the universe. (See how frustrating this is? Even when he was wrong, Einstein was right!)
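The modification Livio describes is a single extra term. In modern textbook notation (not a quotation from Einstein’s 1917 paper), the field equations with the cosmological constant Λ read:

```latex
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

For positive Λ, the added term acts as a large-scale repulsion—the “air pressure” of the analogy above—and it is this same term that the 1990s observations revived.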

Reading science history might not fix the bugs in your equipment or help you secure a new grant, but it can provide a larger perspective on what scientists do and why we need them. Science history doesn’t give all the answers, but it does help explain why we seek the answers in the first place.