On a broader scale, ImPACT, headquartered in Pittsburgh, sells tests and training to more than 7,000 pro teams, colleges, high schools and sports medicine centers from the University of Alabama to St. John's College in Zimbabwe. And it's picking up corporate partners and athlete endorsements. In the past year, Dick's Sporting Goods and Wells Fargo have announced initiatives to encourage widespread use of the test. In one ad touting Dick's promise to help fund ImPACT testing of up to 1 million middle school and high school athletes, former Steeler Jerome Bettis says, "Using tools created by ImPACT, young athletes will know when to sit out."

There's just one problem. Many scientists who are unaffiliated with ImPACT don't think the thing works.

"Through amazing marketing, the ImPACT guys have made their name synonymous with testing," says William Barr, an associate professor of neurology and psychiatry at New York University and former team neuropsychologist for the New York Jets. "But there's a growing awareness that ImPACT doesn't have the science behind it to do what it claims it does."

Mark Lovell, the CEO of ImPACT Applications, Inc., the company that makes and markets the test, said in a statement to The Mag that "ImPACT has become popular because it has been extensively researched," noting that it appears in more than 110 publications. "Concussion is increasingly recognized as a very complicated and complex injury that is best dealt with using multiple modalities. ImPACT is not designed to be used 'in and of itself,' but rather as part of an overall strategy that includes a clinical evaluation by an expert, a vestibular evaluation (including visual processing and balance) and neurocognitive assessment (ImPACT)."

Yet a study -- really a study of studies -- published last year in Current Sports Medicine Reports reviewed the entire span of research on ImPACT and concluded: "[T]he false positive rate of ImPACT appears to be 30 percent to 40 percent of subjects; the false negative rate may be comparable. The use of baseline neuropsychological testing is not likely to diminish risk and, to the extent that there is a risk associated with 'premature' return to play, may even increase that risk."

Lester Mayers, who was once a captain in the Army Medical Corps and now describes himself as an "elderly clinician," retired from private practice almost 14 years ago to become director of sports medicine at Pace University. And while he had previously specialized in internal medicine and pulmonary diseases, Mayers quickly realized he would have to focus on brain injuries at Pace -- a Division II school with campuses in New York City and Westchester County -- where athletes were sustaining concussions at an alarming rate. "I've seen more than 100 concussions since I've been at Pace, and there's a mystery to them," he says. "It's frightening when your child or teammate gets a concussion, and you see an athlete in never-never land."

So Mayers hit the books, scouring research journals to make sure his program was using the latest and best practices to manage brain injuries in athletes. He knew that Lovell, a neuropsychologist, and neurosurgeon Joseph Maroon had originally developed ImPACT in the early 1990s, and that together with Michael Collins, currently the director of the Sports Concussion Program at the University of Pittsburgh Medical Center, they launched a business to make the test commercially available. But Mayers became puzzled, then disturbed, by what he found when he started to dig deeper.

"Neuropsychological testing seemed to be becoming the fallback for the people involved in football to say, 'We're doing something about concussions,'" he says. "And through skillful marketing, ImPACT was giving the public the sense that if the pros use it, it's got to be right. But I went on PubMed once a week for more than a dozen years, and I kept finding papers about ImPACT published by the same people -- the people who run the company."

As Mayers found, the vast majority of the studies evaluating ImPACT have been written by the very researchers who developed it. On the "Reliability & Validity" section of ImPACT's website, for example, 21 of the 22 research papers listed are authored or co-authored by ImPACT's inventors.

Elite sports neuropsychology is a small, densely interconnected world. Many scientists wear multiple hats as they conduct research, work with teams and invest in or partner with businesses related to their work -- all different, potentially conflicting ways of making money. And in the case of ImPACT, the people who created the test have used their various platforms to popularize it, all the while maintaining a financial interest in it.

While developing and promoting ImPACT, for example, Lovell oversaw neuropsychological testing for the NFL. That meant he was directing the NFL's testing at the same time he was chairman of a company selling tests to the league's teams. Lovell also sat on the league's concussions committee and served as director of UPMC's Sports Concussion Program, as director of the NHL's neuropsychology program and as a consultant to the Steelers. (Today, he works full-time as CEO of ImPACT Applications, Inc. but remains a consultant to the NFL, NHL and several other organizations.) Maroon was also on the NFL's concussions committee and still serves as an adviser to the league and the Steelers' team neurosurgeon.

These overlapping roles have sometimes led ImPACT's executives into dubious, industry-funded research. Lovell is a co-author of the notorious 2004 paper in which the NFL's concussions committee found there was "no evidence of worsening injury or chronic cumulative effects" from multiple concussions in NFL players. And Collins, Lovell and Maroon co-wrote a 2006 paper that found the Riddell Revolution helmet reduced the relative risk of concussions in high school football players by 31 percent. Riddell has trumpeted that research ever since. But the helmet maker had given grant money to UPMC, its vice president of research and development co-wrote the paper, and reviewers blasted the work, using phrases such as "serious, if not fatal, methodological flaw" and "substantial conflict of interest."

[Photo: Atlanta-area children try out the ImPACT baseline concussion software at the PACE Protecting Athletes through Concussion Education event at Dick's Sporting Goods last year. Paul Abell/AP Images for Dick's Sporting Goods]

Moreover, Lovell and other scientists affiliated with ImPACT have often failed to identify their potential conflicts of interest when publishing research. In 2007, an ESPN.com investigation found that "on at least seven occasions since 2003, Lovell has authored or co-authored studies on neuropsychological testing, including papers directly evaluating ImPACT, without disclosing his roles in creating and marketing ImPACT." In one case, the journal Brain Injury strengthened its conflict-of-interest policy after getting a complaint about Lovell's work. In another, Maroon and Collins reviewed a paper Lovell wrote for the journal Neurosurgery without ever disclosing their roles as fellow corporate officers at ImPACT Applications.

"It is a major conflict of interest," says Christopher Randolph, professor of neurology at Loyola University Medical Center near Chicago and former team neuropsychologist for the Bears. "The people looking at this test tend to be the same few who are invested in it. What if they're offering something that turns out to not be important? What if their research results are incompatible with their perception of their product?"

Mayers wondered the same thing. He watched ImPACT's inventors publish paper after paper affirming the test's accuracy, countered occasionally by detractors. And at first, as an outsider in the twilight of his career, he had no stake in the fight. But he had trouble finding prospective, controlled studies of ImPACT's, well, impact. Even today, no research project has taken a set of concussed athletes and a tightly matched control group -- healthy athletes with a similar mix of ages, genders and playing history -- and used the current version of ImPACT over a long period of time to see what the test picks up in each group. Instead, researchers have largely focused on how subjects with very recent concussions perform on ImPACT compared to their baselines. That's much less valuable information, because if athletes are still suffering from concussion symptoms, teams don't need computerized analysis to hold them out.

For any test like ImPACT to be useful, it has to detect problems with brain function when they actually occur. But research shows the test has a disturbingly high rate of false positives. For example, in a 2007 study in the Journal of Athletic Training, researchers conducted baseline tests on a group of college students, all of whom had been concussion-free for at least six months. When these injury-free subjects took follow-up tests 45 days later, ImPACT incorrectly identified them as having some element of a concussion 38 percent of the time. In research published a year earlier, a team including Lovell and Collins compared ImPACT tests on 122 high school and college athletes who had concussions with 70 who did not. They found that two days after injuries, 64 percent of the concussed athletes reported worse symptoms, whereas 93 percent had either symptoms or abnormal test results, so ImPACT seemed to improve concussion detection by 29 percentage points. That, at least, is what the study's authors concluded. But 30 percent of the control group also registered problems two days later. When you know in advance that members of a particular group don't actually have concussions, you can dismiss their test results. But in real life, it's hard to trust any test that registers something incorrectly one-third of the time. "They tried to show ImPACT tests add value beyond what you get from evaluating symptoms, but it's a mess," says Barr.