The lure of computer drills like Brain Age is hard to resist: play a reasonably compelling set of games for a few hours a week, and train your brain to perform better, much as hitting the gym works for athletes. And the experience is generally rewarding; for most users, performance numbers do improve. But a question has lingered over this genre of software like a dark cloud: do these drills actually improve general mental performance, or do users simply get better at a limited set of drills? To get an answer, a group of researchers in the UK crowdsourced the audience of a science-themed TV show and set them to work training their brains. Among that crowd, performance on the specific training exercises shot up, but there was little spillover into even closely related skills.

Because the impact of brain training might be small, the research team involved needed a large population of test subjects. So they turned to the audience of the TV show Bang Goes The Theory, and asked them to visit a website to participate in the study. Over 50,000 people responded, and over 11,000 of them actually completed the study. At the start, participants were given a set of tasks that measured their baseline skills in four areas of mental function: reasoning, verbal short-term memory, spatial working memory, and paired-association learning.

The participants were then split into three groups. One was asked to log on three times a week and complete a set of drills similar to those seen in brain-training software, including tests of short-term memory, attention, mathematics, and spatial processing. A second experimental group performed tasks that emphasized more general reasoning skills, like planning and problem-solving quizzes. The control group wasn't trained at all; they were simply asked to find the answers to trivia questions, and were allowed to use online resources to do so. After six weeks, the benchmark tests were repeated.

As expected, scores on the tasks that were actually trained shot up. For the two experimental groups, the effects were substantial: expressed as standardized effect sizes (where 1.0 corresponds to one standard deviation), the improvements ranged from 0.67 all the way up to 1.63. Even the control group got better at handling trivia questions.

But outside the specific areas of training, things weren't so rosy. Both experimental groups saw improvements in their test scores over the six-week period, but the authors describe the changes as ranging from "small" to "very small" (for the latter category, the 95 percent confidence interval generally overlapped with zero, meaning the improvement wasn't statistically distinguishable from no change at all). Worse still, the control group saw similar improvements; those answering trivia questions actually outperformed the second experimental group by improving in a greater number of categories. "These results provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults," the authors conclude.

The authors spend a large portion of the paper considering whether their study design is sufficiently similar to the procedures used in brain-training software to draw general conclusions. Not surprisingly, they argue yes. The areas they chose to work with overlap with the ones used by software, and their drills did result in significant improvements. At the same time, their tests of performance are very sensitive, since they're used diagnostically to check for signs of mental decline in patients with neurological illnesses. The tests can also pick up subtle changes induced by small doses of drugs.

The paper wraps up with an attempt to put things in perspective. Over the six weeks involved, the memory drills improved participants' performance on number tasks by a total of three percent. "Assuming a linear relationship between time spent training and improvement, it would take almost four years of training to remember one extra digit," the authors argue. And, more depressingly, the trivia control group improved by two percent.

One aspect that doesn't come up in the discussion is that some people may have jobs or hobbies that rely specifically on the skills being trained, like spatial reasoning. Although it's obviously beyond the scope of this work, it would seem worth testing whether improved scores on these drills translate to better performance on real-world activities that incorporate elements of the training.

In any case, if you find brain training enjoyable, there's certainly no reason to stop, although you should be realistic about your expectations. And, if you like Trivial Pursuit, it's probably nice to know that it's just as effective.

Nature, 2010. DOI: 10.1038/nature09042