Some science news regarding cognitive abilities made the rounds yesterday, appearing on Slashdot, which based its coverage on the BBC. The coverage gets the gist of the story right—new research suggests some cognitive declines associated with age begin in the late 20s—but it completely misses the larger point: the paper involved is part of a larger argument about the appropriate method of studying cognitive abilities. And the most startling thing about that is that the argument played out across five other papers in the same issue of the journal.

An academic dogfight

To be clear, nobody is debating whether cognitive decline occurs with age; the evidence is pretty persuasive, and it comes from a variety of sources. The argument involves when that decline begins, and two different experimental approaches have produced very different results.

Longitudinal studies (LS) involve testing the cognitive abilities of the same group of individuals at time points as much as a decade apart. These tend to show that most people improve their performance on the tests as they age, at least until they reach their 60s, at which point declines become evident. The alternative approach uses what is called a cross-sectional study (CSS), in which groups of individuals at different ages are tested once and the results are compared across groups. These show a far earlier decline in mental capacity, which has more than a few of the older Ars staffers checking their noggins for signs of decline.

Obviously, both approaches to research have drawbacks. In CSS, you're not tracking changes within an individual, which are arguably the most relevant, and the results are at the mercy of the researcher's ability to control for differences between the groups in things like educational history. There's even some argument over whether these studies can ever overcome generational issues, like changes in educational practice and nutrition.

Meanwhile, an LS exposes its subjects to similar tests (even if they are years apart), which might provide some "practice effect." We also tend to test those cognitive skills that we consider most relevant, which means that real life might provide the cohort with what could be termed inadvertent practice.

All of that sets the stage for the current paper. In it, author Timothy Salthouse builds a case that CSS provides a clearer picture of cognitive decline, one that's consistent with the finding that changes in brain structure begin relatively early in life; it also matches animal studies that show early cognitive decline even when environments are held constant. Salthouse then conducted a CSS of his own, testing a number of cognitive abilities while attempting to control for cohort differences. He also used a subset of his group to explore the impact of practice by repeating the test at various time intervals.

The graphs of the test results show a strong pattern, with declines starting pretty much right after the earliest tests given to individuals in their early 20s. Retest effects were apparent in the results from those studies, leading the author to conclude that the data "converge on a conclusion that some aspects of age-related cognitive decline begin in healthy educated adults when they are in their 20s and 30s." Thus, the BBC headline, "'Brain decline' begins at age 27," appears justified.

Not an open-and-shut case

Or not. A quick look at the table of contents for that issue of the journal (Neurobiology of Aging) shows that the paper is what the journal terms an "Open Peer Commentary Manuscript." At least six peers chose to comment in a series of three articles; the initial author then responded to them with a second article of his own. Reading through these, it quickly becomes clear that the field considers the issue anything but settled.

The arguments come fast and furious. A long-time practitioner of LS simply states that formal reasons why CSS cannot be used to infer changes in individuals were described over 40 years ago, and the new paper hasn't changed that situation. Another group that engages in LS describes the extensive literature regarding appropriate controls for both types of experiments; based on the experimental methods of the initial paper, they can't tell whether Salthouse used these or not. This response also argues that the citations of studies involving brain volume changes and animal studies are selective; other work shows volume increases in some structures in early adulthood, and so on.

But the most interesting complaint seems to focus on the treatment of cognitive function as a unitary item, accessible through standardized tests. One critic points out that our ability to learn new languages declines rapidly around the age of seven, but we don't consider this a sign of mental decline. Relevant to Salthouse's new work, the critics emphasize that individual tests show some significant variations in what are ostensibly tests of the same underlying capacity. For example, one test of reasoning is largely stable throughout the 35-year range of test-subject ages, while tests of spatial reasoning show an extended period of stability from the 30s through the 50s.

The same thing goes for tests of the practice effect. The results vary wildly between different tests, as does the change in the practice effect with time; in some tests, it's effectively nonexistent.

For his part, Salthouse says that he recognized many of these issues in his initial publication, which is true. Even in the conclusions quoted above, he was careful to use the phrase "some aspects," rather than making a blanket statement about cognitive abilities. Nevertheless, it's difficult to read his first paper without getting the impression that Salthouse thought his approach produced the most biologically relevant data.

In any case, the papers make it clear that the debate over cognitive decline is not settled by the latest results. If there is anything approaching a consensus in this field, it's that the questions themselves are extremely complicated, and the answers you get very much depend on what you ask.

Neurobiology of Aging, 2009. Volume 30, Issue 4, Open Peer Commentary