When last we tackled the topic of brain training software, the prognosis did not look good. Although proponents of this software claim that it results in a general boost in mental performance, detailed testing failed to show this general effect, and for some topics, the software was bested by a trivia quiz. Now, a new study has revisited the topic, with its authors finding that some brain training can boost general performance—but only a specific type of exercise, and only among a subset of users.

The study builds on a well-established relationship between general reasoning and spatial memory. The ability to perform abstract reasoning and solve problems you've never seen before is termed fluid intelligence. It's not clear what provides the basic mental horsepower for this ability, but a number of studies have shown that performance on tests that stress fluid intelligence correlates with the test-taker's working memory, which stores basic information for immediate use without committing it to long-term memory. Working memory, for example, is where you hold intermediate sums when you're adding a large column of numbers.

Although we don't know whether it's possible to improve general reasoning, some studies have indicated that it is possible to boost working memory by taxing the system. So the authors created a set of simple games that emphasized working memory, and set a group of nine-year-old children on them.

These games forced the children to solve what the authors term "n-back" problems. In one example, the children were shown a pond in which a frog would appear at random on a number of lily pads. As the frog vanished and reappeared, the kids would have to recall where it had been previously. So, for example, the third time the frog showed up, the children would need to remember where it had been the first time. Continued exposure to these games should boost working memory performance. And, on average, it did, with scores improving over time.
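The frog game's structure can be sketched in a few lines of code. This is a minimal illustration of a generic n-back task, not the study's actual software; the pad count, number of trials, and default n=2 are all assumptions chosen for the example.

```python
import random

def n_back_session(n=2, pads=6, trials=10, seed=0, player=None):
    """Toy n-back session modeled on the frog game: each trial, the frog
    appears on one of `pads` lily pads; once at least n appearances have
    passed, the player must recall where it was n appearances ago.
    Returns (correct answers, questions asked)."""
    rng = random.Random(seed)
    if player is None:
        # Default player has perfect working memory of past appearances.
        player = lambda history, n: history[-n]
    history, correct, asked = [], 0, 0
    for _ in range(trials):
        pad = rng.randrange(pads)
        if len(history) >= n:
            asked += 1
            # The correct answer is the pad from n appearances back.
            if player(history, n) == history[-n]:
                correct += 1
        history.append(pad)
    return correct, asked

# A perfect player answers all trials - n questions correctly.
print(n_back_session())  # → (8, 8)
```

Raising `n` is what makes the task tax working memory harder: the player must juggle more past positions at once, which is the mechanism the training is meant to stress.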

On its own, however, this seemed to have a very limited impact on the performance of the children when they were given a test of fluid intelligence, with no statistically significant trend in performance. By this measure, brain training had failed.

But it hadn't, at least not entirely. The authors noted that the children who had undergone training saw variable boosts to their working memory, so they split the trained children into high- and low-improvement groups and reran the numbers. Now, a significant effect appeared: fluid intelligence improvements had occurred among the children who saw the biggest changes in working memory. Their lead over their control peers (who had played a vocabulary-focused game) persisted even after three months, although it shrank a bit over that time. In contrast, the ones who saw little improvement in working memory lagged both their trained peers and the control population.

"Furthermore, there was a significant positive correlation between improvement on the training task and improvement on [fluid intelligence]," the authors note, "suggesting that the greater the training gain, the greater the transfer."

What drove the difference between the two groups? The authors asked the children how they felt about the game, and found that both groups considered it enjoyable. But the ones who saw a boost considered it a fun challenge, while the ones who improved less tended to find it far too difficult, and ended up frustrated by it. Separating out cause and effect there would seem to be a nightmare—were they frustrated because they simply couldn't keep up with something beyond their abilities, or did their abilities not ramp up because of a general lack of interest?

In any case, it's important to emphasize that the authors tracked the improvements in working memory and fluid intelligence, not the absolute values. The high-improvement group ended up statistically no better off than their peers when it was all over. In fact, the kids who began with the highest fluid intelligence scores started off with higher working-memory scores, but ended up seeing less improvement with training. Thus, on some levels, it appears that training is simply leveling the playing field.

In the end, the study seems to have shown that brain training can work, but whether it will or not is highly sensitive to the type of training and the individual being trained. Those are pretty significant limitations, and we could probably benefit from having a better sense of what the limits of training are. As such, the authors' conclusion—"Future research should not investigate whether brain training works" (emphasis theirs)—seems a bit premature. Yes, we should investigate the factors that influence how well it works, as the authors propose. But we could also benefit from learning more about when it's going to be effective at all.

PNAS, 2011. DOI: 10.1073/pnas.1103228108