
Eugene Garfield’s pioneering work on the journal impact factor (IF) fundamentally changed how institutions evaluate scientific quality. What began as a tool to help libraries make decisions about journal subscriptions today influences academia far beyond the stocking of shelves. Scientists may receive grants, bonuses, and tenure depending on the perceived impact of the journals in which they publish their research. This practice has been widely criticized because the IF is a poor predictor of the citation count of any individual article in a journal, and motivating scientists through citation rates and journal IFs can have unwanted side effects, potentially jeopardizing the reliability and credibility of science.

The “denial of the antecedent” is one of the best-known fallacies in deductive reasoning. Consider the following first premise: If you carry an umbrella, then you will stay dry. And the second premise: You do not carry an umbrella. If you assume both statements to be true, could you conclude that you will not stay dry? Based on the two premises alone, this is not a valid conclusion: you could be wearing a raincoat, or it might be a sunny day.
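For readers who like to see the logic spelled out, the invalidity of this inference can be checked mechanically. The short sketch below (an illustration added here, not from the original article) enumerates all truth-value assignments and shows that the premises "P implies Q" and "not P" can both hold while Q is still true, so "not Q" does not follow:

```python
# Truth-table check of "denial of the antecedent".
# Premise 1: P implies Q.  Premise 2: not P.  Claimed conclusion: not Q.
from itertools import product

def implies(p, q):
    # Material implication: false only when P is true and Q is false.
    return (not p) or q

# Collect every assignment where both premises hold yet Q is TRUE,
# i.e. a counterexample to the claimed conclusion "not Q".
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not p) and q
]

print(counterexamples)  # -> [(False, True)]
```

The single counterexample (P false, Q true) corresponds to staying dry without an umbrella: the premises are satisfied, yet the conclusion fails.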

Now consider a candidate applying for a grant who has not published in a high-IF journal, such as Science or Nature. Does this imply that the person’s research is not of high quality? If decision-makers reason from the premises “If a paper is published in Nature or Science, then it is of high quality” and “This paper is not published in Nature or Science,” then concluding that the paper is not of high quality is an instance of the fallacy of denying the antecedent.

You could also assess the strength of the IF argument by means of inductive reasoning. Along these lines, some decision-makers treat a high IF as an indicator of the scientific excellence of a given paper published in that journal. But concluding that a paper is not of high quality merely because this putative journal-based indicator is absent is an instance of the “argument from ignorance.” A typical example of trying to justify a conclusion by pointing out that there is no evidence against it is: “No one has proven that ghosts do not exist. Therefore, ghosts exist.”

The weakness of the argument for the existence of ghosts is obvious. However, the same form of argument is quite common in the case of the journal IF, as in the example: This paper lacks the mark of quality of having been published in a high-IF journal. Therefore, this paper is not of high quality. Here, the absence of a journal-based metric indicating high quality does not imply that an article is of low quality, especially given the diversity of research fields and the many options available for publishing research.

Of course, academia is fast-moving, and quick decisions are often deemed necessary. Thus, one may argue that it is better to have weak arguments than no arguments at all. Given the far-reaching implications of decisions that rely on the journal IF, however, we have strong reservations about decision-makers’ blind trust in such journal-based metrics.

The world of scientific publishing is definitely more complex than can be expressed with a two-digit number. In the end, we still need to read, discuss, and try to understand papers before judging them.

Frieder Michel Paulus is a postdoc in the Social Neuroscience Lab at Lübeck University, where Sören Krach is a professor of psychiatry and psychotherapy. Nicole Cruz is a PhD candidate in psychological sciences at Birkbeck, University of London.

Editor’s note: Eugene Garfield was the founder of The Scientist.