A decade ago, a young Swedish researcher named Torkel Klingberg made a spectacular discovery. He gave a group of children computer games designed to boost their memory, and, after weeks of play, the kids showed improvements not only in memory but in overall intellectual ability. Spending hours memorizing strings of digits and patterns of circles on a four-by-four grid had made the children smarter. The finding countered decades of psychological research that suggested training in one area (e.g., recalling numbers) could not bring benefits in other, unrelated areas (e.g., reasoning). The Klingberg experiment also hinted that intelligence, which psychologists considered essentially fixed, might be more mutable: that it was less like eye color and more like a muscle.

It seemed like a breakthrough, offering new approaches to education and help for people with A.D.H.D., traumatic brain injuries, and other ailments. In the years since, other, similar experiments yielded positive results, and Klingberg helped found a company, Cogmed, to commercialize the software globally. (Pearson, the British publishing juggernaut, purchased it in 2010.) Brain training has become a multi-million-dollar business, with companies like Lumosity, Jungle Memory, and CogniFit offering their own versions of neuroscience-you-can-use, and providing ambitious parents with new assignments for overworked but otherwise healthy children. The brain-training concept has made Klingberg a star, and he now enjoys a seat on an assembly that helps select the winners of the Nobel Prize in Physiology or Medicine. The field has become a staple of popular writing. Last year, the New York Times Magazine published a glowing profile of the young guns of brain training called “Can You Make Yourself Smarter?”

The answer, however, now appears to be a pretty firm no—at least, not through brain training. A pair of scientists in Europe recently gathered all of the best research—twenty-three investigations of memory training by teams around the world—and employed a standard statistical technique (called meta-analysis) to settle this controversial issue. The conclusion: the games may yield improvements in the narrow task being trained, but this does not transfer to broader skills like the ability to read or do arithmetic, or to other measures of intelligence. Playing the games makes you better at the games, in other words, but not at anything anyone might care about in real life.

Over at the Cogmed Web site, though, it looks like lives are being transformed. A beaming child sits at a desk, pencil in hand, next to a quote extolling the results at a private school in Jacksonville, Florida. Cogmed training is helpful for all ages, from “young children to senior adults,” but is of particular interest to people with “diagnosed attention deficits” or “brain injury,” those who “feel the deteriorating effects of normal aging,” or those who “find they’re not doing as well as they could, academically or professionally.” The training is a method to “effectively change the way the brain functions to perform at its maximum capacity.” Cogmed is operating in more than a thousand schools worldwide, more than a hundred of which are in the U.S. In January, Cogmed launched a major push into American schools, charging up to three hundred dollars per child.

Cogmed and the other companies stake their claims on “working memory,” the ability to keep information the focus of conscious attention, despite distractions—mental juggling, in other words. There is powerful, widely accepted evidence that working memory plays an important role in everything from reading ability and problem-solving to reasoning and learning new skills. (It also seems to help with musical sight-reading and proficiency at Texas hold ’em.) And problems with working memory play a role in A.D.H.D., which has become an American fixation. Working memory is also closely related to “executive function,” the brain’s ability to make a plan and stick with it, an active and fruitful area of psychology with broad social implications. Many psychologists consider working memory to be a core component of general intelligence. People who score highly on intelligence tests also tend to perform well on working-memory tests.

The experiments by Klingberg and others suggested that working memory could be markedly increased through training, the same way that sit-ups build stronger abs—and, more important, that the training could bring broad benefits, the way weight training can make a person a better all-around athlete. In Klingberg’s first experiment, published in 2002, he recruited students with A.D.H.D. and gave them Raven’s Progressive Matrices, a test of non-verbal reasoning that is used to measure intelligence. He then gave them regular working-memory workouts, increasing the difficulty of the games as they improved by giving them more to remember. At the end of several weeks of training, he reported, the kids took the Raven’s again and performed significantly better. He then found the same results in young adults without A.D.H.D. The studies were small, but gradually other psychologists entered the field, and, in 2008, the psychologist Susanne Jaeggi reported an even more electric result: working-memory training definitively increased intelligence, with more training bringing larger gains. Her data implied that a person could boost their I.Q. by a full point per hour of training.

Over the last year, however, the idea that working-memory training has broad benefits has crumbled. One group of psychologists, led by a team at Georgia Tech, set out to replicate the Jaeggi findings, but with more careful controls and seventeen different cognitive-skills tests. Their subjects showed no evidence whatsoever of improvement in intelligence. They also identified a pattern of methodological problems in experiments showing positive results, like poor controls and a reliance on a single measure of cognitive improvement. This failed replication was recently published in one of psychology’s top journals, and another, by a group at Case Western Reserve University, has been published since.

The recent meta-analysis, led by Monica Melby-Lervåg, of the University of Oslo, and also published in a top journal, is even more damning. Some studies are more convincing than others, because they include more subjects and so estimate the effect more precisely. Melby-Lervåg’s paper laboriously accounts for this, incorporating what Jaeggi, Klingberg, and everyone else had reported. The meta-analysis found that the training isn’t doing anyone much good. If anything, the scientific literature tends to overstate effects, because teams that find nothing tend not to publish their papers. (This is known as the “file-drawer” effect.) A null result from a meta-analysis published in a top journal sends a shudder through the spine of all but the truest of believers. In the meantime, a separate paper by some of the Georgia Tech scientists looked specifically at Cogmed’s training, which has been subjected to more scientific scrutiny than any other program. “The claims made by Cogmed,” they wrote, “are largely unsubstantiated.”
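For readers curious about the mechanics: in its simplest fixed-effect form, a meta-analysis weights each study’s effect size by the inverse of its variance, so that large, precise studies dominate the pooled estimate. The sketch below uses invented numbers, not Melby-Lervåg’s actual data or her (more elaborate) model—it only illustrates how a handful of small positive findings can pool to a null result.

```python
# Minimal fixed-effect meta-analysis sketch (illustrative only).
# The effect sizes and standard errors below are invented, not taken
# from Melby-Lervag's paper, which used a more sophisticated model.
import math

# (effect size d, standard error) for four hypothetical studies
studies = [(0.30, 0.20), (0.10, 0.15), (-0.05, 0.10), (0.02, 0.08)]

# Each study is weighted by the inverse of its variance, so larger,
# more precise studies count for more in the pooled estimate.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# A 95% confidence interval that straddles zero is a null result.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# -> pooled d = 0.03, 95% CI [-0.08, 0.14]
```

Here the small, noisy studies with positive effects are outweighed by the larger, more precise ones near zero, and the pooled confidence interval includes zero—no detectable benefit.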

In a conference call, several Cogmed executives told me that they did not accept the conclusions, saying that the various scientists had unfairly overlooked good evidence in support of Cogmed’s regimen. They cited, as one example, Melby-Lervåg’s decision not to consider brain-imaging studies, which they believe offer additional evidence of neurological improvements in people who play Cogmed’s games. “There is a lot of research excluded, almost to the point where it seems like the research is designed to reach a particular conclusion,” said Travis Millman, vice-president and general manager of Cogmed.