When former English professor and university administrator William Chace attended a liberal arts college in the 1950s, he discovered “English as a way of understanding the world.” In that social world, a career as a “Joycean” could lead to a university presidency (two, in Chace’s case). Since then, English as a discipline has declined, along with its cultural authority. The number of bachelor’s degrees in the field fell another 25 percent in the last decade, while health professions, engineering, and business majors grew 48 percent. In a society that devalues intellectuals while assuming technology will solve its problems, the role of an English PhD is precarious at best. In a crisis (which we’re always in), who wants to turn to an English professor, much less be one?

Of course, history and most of the social sciences are hemorrhaging college majors as well. The same tendency that leads people to beat up on Wolf for misinterpreting history can also lead them to scoff at the expertise of historians and many social scientists, who (in addition to sharing many of the political proclivities of English departments) are, after all, reading and writing words, which almost anyone can do. Perhaps we’d better stick together. And broadening ourselves, working in multiple lanes, is far more likely to be a strength in this new world: as the imperative for academics to engage the public grows, retreating behind disciplinary walls is a move in the wrong direction.

Wolf’s debacle raises a challenge to that perspective, though, posing the question: When is a writer erudite, a Renaissance person, a polymath—and when are they merely trespassing superficially into areas of knowledge they haven’t mastered, imposing their own prejudices or yanking cherry-picked tidbits out of context? In preparing to write this essay, I had to wrest myself away from the archive of London’s Central Criminal Court, where I imagined I could find some historical tidbit that would crack the case wide open—so much more immediately gratifying than reading a dense history book by someone who actually understands the subject.

I should know better. In my own field, a simple, literal reading of the law has led to a common and profound misunderstanding of divorce trends. Many people associate the historical rise in divorce rates with the introduction of no-fault divorce laws, beginning with California’s in 1970 and spreading across most of the country within a decade. That makes sense: those laws were being debated while the divorce rate was visibly rising, and critics argued that the new laws would produce more divorces. But the story that emerged—that changing laws changed families—rested on a false assumption about how divorce law already worked. In fact, in the century before 1970, the annual divorce rate had already increased more than ten-fold, from 3 for every 10,000 people to 3.5 for every 1,000, before peaking at 5.3 per 1,000. When historians dug into the records, they found no ten-fold increase in adultery, abandonment, or abuse—the “faults” that justified divorce. Rather, for decades couples had been conspiring with family lawyers to fabricate just the right amount of fault to persuade courts to grant what were, in effect, no-fault divorces. Like the Victorian law that mandated the execution of men who committed homosexual acts, divorce law didn’t work as written.
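The rate arithmetic above is easy to misread because the two figures use different denominators (per 10,000 versus per 1,000). A minimal sketch, using only the figures quoted in this paragraph, normalizes them to a common denominator and checks the “more than ten-fold” claim:

```python
# Normalize the divorce rates quoted above to a common denominator
# (divorces per 1,000 people). The figures are the ones cited in the
# text, not independently sourced data.

def per_thousand(events: float, population: float) -> float:
    """Convert a rate stated as `events` per `population` people
    to the equivalent rate per 1,000 people."""
    return events / population * 1_000

start = per_thousand(3, 10_000)   # a century before 1970: 3 per 10,000
later = per_thousand(3.5, 1_000)  # by 1970: 3.5 per 1,000
peak = per_thousand(5.3, 1_000)   # later peak: 5.3 per 1,000

# 3 per 10,000 is 0.3 per 1,000, so the rise to 3.5 per 1,000 is
# roughly 11.7x -- indeed "more than ten-fold."
print(f"{start:.1f} -> {later:.1f} per 1,000 ({later / start:.1f}x)")
```

The point of the conversion is only that the ten-fold claim survives the change of units; the peak of 5.3 per 1,000 is already on the per-1,000 scale.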