In 1959, Dr. Julian Lasky decided to conduct an experiment: How well could psychiatrists and hospital staff at a V.A. general-medicine and surgical hospital use individual patient interviews to predict post-hospital adjustment among their psychiatric patients? Once a month over a period of six months, Lasky gathered predictions on factors such as rehospitalization, work, family, and health adjustment. He then correlated those predictions, along with a number of other possible predictive factors, with actual readjustment success. He discovered something striking: the single strongest predictor of a patient’s adjustment success was the weight of his case file. The heftier the file, the less likely a patient was to successfully readjust to life outside of the hospital. File weight significantly predicted every single outcome criterion—from the patient’s ability to hold a job to his capacity for carrying on a successful, long-term romantic relationship—more accurately than the monthly interviews did, and more accurately than other behavioral and self-report measures. And in the case of some factors, such as the chances of rehospitalization, the correlation was remarkably high. The natural conclusion was that the best predictor of future behavior is past behavior.

In some ways, not much has changed since those early days of clinical diagnosis. The director of the National Institute of Mental Health, Thomas Insel, announced last week that the institute would be officially reorienting its research agenda away from the categories in the soon-to-be-published fifth edition of the Diagnostic and Statistical Manual of Mental Disorders and toward a new framework, the Research Domain Criteria (R.D.O.C.): “Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever.” In other words, we are still relying on the subjective assessments that lost out to the weight of the case file over half a century ago.

Insel’s point echoes a growing disconnect between the D.S.M. and the current state of psychological research and knowledge. When the D.S.M. was originally published, in 1952, its aim was largely statistical: How could we collect information about mental health? While the manual attempted to provide a clinically useful approach, it was hampered to a large extent by the dearth of accurate measures; as Walter Mischel points out in his 1968 book “Personality and Assessment,” almost every known tool offered a pitifully small correlation with actual behavior. In 1980, the D.S.M.-III began to incorporate a more methodical approach. For the first time, it included explicit diagnostic criteria coupled with an approach that strove toward descriptive neutrality. Diagnoses were based to a large extent on clinical observations and patients’ self-reported symptoms (gathered through structured, standardized interviews). To date, these remain the main points of diagnosis and assessment. (Gary Greenberg also charts the evolution of the D.S.M. and its impact on the nature of mental disease.)

In the intervening decades, however, we’ve developed psychological, biological, physiological, and neuroscientific techniques that have given psychologists unprecedented insight into the mind—advances that the D.S.M.-5 largely ignores. With the R.D.O.C., the N.I.M.H. is now trying to address the growing disconnect between reality—what we now know about mental disorders—and theory. The psychologist Kevin Ochsner, who served on one of the groups convened to advise on the new schema, said, “What was remarkable about this work group … was that the core N.I.M.H. staff explicitly guided us not to use current ways of defining clinical disorders when defining core constructs.”

As a result, the R.D.O.C. looks strikingly different from the D.S.M. Where the D.S.M., for the most part, presents discrete categories, the R.D.O.C. offers a set of constructs arrayed along a continuum that ranges from the normal to the abnormal. These constructs, which the N.I.M.H. defines as concepts that “summariz[e] data about a specified functional dimension of behavior,” are grouped into domains, such as motivation, cognition, and social behavior, and are explored through a range of tools, beginning at the genetic level and including observed behavior. It’s not that the methods of the D.S.M. are thrown out entirely—self-reports and clinical assessments are still considered—but that they are incorporated into a larger frame that relies far more heavily on empirically derived methodology. Where the D.S.M.-5 uses only clinical observations and self-reports, we now have inputs from genetics and from molecular, cellular, and systems neuroscience. We are, as Insel says, moving past the nature of the chest pain and toward the underlying causes.

Perhaps most important is the fluidity of this new conception of mental health: it is meant as a starting point of classification that will evolve along with the methodology and the findings. For instance, in a study of emotion regulation, the experimenter could categorize her work as belonging to both the positive and negative valence constructs, along with any of the cognitive-systems domains, such as attention or cognitive (effortful) control. She doesn’t need to confine herself to a single disease or diagnostic category—and can even choose to add new dimensions as the work evolves. She could also choose one or many of the units of analysis to explore her work: genes, molecules, cells, circuits, physiology, behavior, self-reports, or the broad category “paradigms” for other methods of behavioral evaluation that may not fit neatly anywhere else. And should new methodology become available? It can simply be added to the matrix.

The classification can thus cut across categories and be used to study underlying constructs that may apply to multiple disorders. The constructs, along with the units of analysis (that is, the methods used), can be mixed and matched depending on what a study finds and how the understanding of the research and results develops. Researchers are not constrained by a monolithic entity, such as depression, that they must then explicitly address in their research agenda, regardless of what the data are telling them. As the N.I.M.H. explicitly states, “We expect these [domains and constructs] to change dynamically with input from the field, and as future research is conducted.”

That sort of dynamism is almost entirely absent from the D.S.M.: not only was the last overhaul almost twenty years ago, in 1994, but the changes between that 1994 version and its 2013 counterpart, however controversial they may have been, are minimal at best. At a time when our understanding of the brain evolves on a nearly constant basis, can we still afford to be tied to a book that changes once every few decades—and refuses to reconceptualize itself in any meaningful fashion?

When the D.S.M.’s approach was conceived, we had to base categorizations on broad observations and one level of data: behavioral. That’s demonstrably no longer the case. In fact, we now know that behavioral data is often at odds with other inputs. Just as a reported pain in the arm can be a radiating effect of a heart condition, a reported psychological problem, like difficulty concentrating, may actually be a symptom of an underlying biological or physiological condition. The science has now outgrown the original approach to the point where following such a symptom-based path may undermine the D.S.M.’s original intent. With the introduction of the R.D.O.C., Insel and the N.I.M.H. are trying to ensure that the D.S.M.’s accomplishments evolve with the times, instead of being left behind in a clinical vacuum that hurts research as much as it hurts patients.

Illustration by Joost Swarte.

Maria Konnikova is the author of the New York Times best-seller “Mastermind: How to Think Like Sherlock Holmes” and received her Ph.D. in psychology from Columbia University.