What's wrong with the h-index, according to its inventor

"Severe unintended negative consequences."

Gemma Conroy


Love it or hate it, the h-index has become one of the most widely used metrics in academia for measuring the productivity and impact of researchers. But when Jorge Hirsch proposed it as an objective measure of scientific achievement in 2005, he didn’t think it would be used outside theoretical physics.

“I wasn’t even sure whether to publish it or not,” says Hirsch, a physicist at the University of California, San Diego. “I did not expect it to have such a big impact.”

The metric takes into account both the number of papers a researcher has published and how many citations those papers receive: a researcher has an h-index of h if h of their papers have each been cited at least h times. It has become a popular tool for assessing job candidates and grant applicants.
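The calculation itself is straightforward. Here is a minimal Python sketch (an illustration, not code from the article or from Hirsch):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    A researcher has index h if h of their papers have at least
    h citations each.
    """
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # all later papers have even fewer citations
    return h

# Five papers cited [6, 5, 3, 1, 0] times give an h-index of 3:
# three papers have at least 3 citations each, but not four with 4.
print(h_index([6, 5, 3, 1, 0]))  # prints 3
```

Note that the score is capped by the number of papers published, which is one reason it rewards steady output as much as impact.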

It is also one of the most contentious topics in academia, as Hirsch wrote in the Physics and Society newsletter in January.

“About half the scientific community loves the h-index and half hates it,” writes Hirsch. “The h-index of the scientist itself is a great predictor of whether s/he belongs to the first or the second group.”

While Hirsch believes that the h-index is still one of the best objective measures of scientific achievement, he also writes that it can “fail spectacularly and have severe unintended negative consequences.”

“I can understand how the sorcerer’s apprentice must have felt,” he writes.

One downside is that it can deter researchers from innovative thinking. For instance, a student working under a professor with a high h-index may be reluctant to question the concepts they are being taught, as they are likely to assume the professor is an expert in their field based on their score.

“You will drink the Kool-Aid, learn how to work with the formalism, and later teach it to your students, who will be equally reluctant to question it as you were, since your h-index by then will be substantial,” writes Hirsch.

The quest for a high h-index can also encourage researchers to choose ‘hot’ research topics that are more likely to gain attention and tempt them to publish one paper after another in an effort to boost their score. “It’s a little too sensitive to what’s popular and fashionable in science,” says Hirsch. The more a paper is cited, the harder it becomes to question its validity, he notes.

Hirsch points out that the metric doesn't pick up on research that deviates from the mainstream, something that he has observed in his own work on superconductivity.

“If you write a paper that’s not generally accepted, it’s an uphill battle to get people to consider it,” says Hirsch. “But just because something is accepted, it doesn’t mean that it’s right.”

While Hirsch’s research on superconductivity spans 30 years, his papers on the subject account for less than 10% of his total citations. Hirsch writes that these papers are “far more important than any other work I have done that has a lot of citations”, but their significance cannot be gauged using his h-index alone.

“If we believe citations and h-indices, by all counts my contributions to the understanding of superconductivity are insignificant,” writes Hirsch. “Therefore, I have to conclude much to my regret that the h-index fails in this case.”

Hirsch urges hiring committees and funding agencies to consider other aspects of a candidate’s career when making decisions, such as discipline, author position, and how many collaborators a researcher works with.

“One has to look at the nature of the work,” says Hirsch. “If you make decisions just based on someone’s h-index, you can end up hiring the wrong person or denying a grant to someone who is much more likely to do something important. It has to be used carefully.”
