The map of science, as measured by the flow of manuscripts, is an efficient and highly structured network, a new study reports. Three-quarters of articles are published on their first submission attempt; the rest cascade predictably to journals with marginally lower impact factors. And articles that were rejected by another journal tend, on average, to perform better in terms of citation impact than articles published on their first submission attempt.

The article, “Flows of Research Manuscripts Among Scientific Journals Reveal Hidden Submission Patterns,” was published online last Thursday in the journal Science by French ecologist Vincent Calcagno and others.

Calcagno’s approach to studying the journal system is entirely novel. Starting with articles that were published between 2006 and 2008 in 16 fields of biology (encompassing 923 scientific journals), the researchers surveyed corresponding authors on whether their article was rejected by another journal prior to being accepted. Retrieving the names of the prior journals in the submission chain allowed the researchers to create a huge network map showing the directional flow of manuscripts through the journal system. Not surprisingly, Science and Nature form the center of that map.
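To make that construction concrete, here is a minimal sketch of how such a network might be assembled, assuming each survey response reduces to a pair of (prior journal, publishing journal); the journal names and data structure are my own illustration, not the paper’s actual pipeline.

```python
# Hypothetical sketch: each survey response that reports a prior rejection
# contributes one directed edge from the rejecting journal to the journal
# that ultimately published the article.
import networkx as nx

# Toy stand-in for the survey data: (prior_journal, publishing_journal).
# None means the article was accepted on its first submission attempt.
responses = [
    ("Nature", "PLoS ONE"),
    ("Science", "Ecology Letters"),
    (None, "Ecology Letters"),
    ("Nature", "PLoS ONE"),
]

G = nx.DiGraph()
for prior, published in responses:
    if prior is None:
        continue  # first-attempt publications add no resubmission edge
    if G.has_edge(prior, published):
        G[prior][published]["weight"] += 1
    else:
        G.add_edge(prior, published, weight=1)

# Edge weights now measure the volume of manuscript flow between journals.
print(G["Nature"]["PLoS ONE"]["weight"])  # -> 2
```

On real data, the heavily weighted edges radiating out of the high-profile journals are what place Science and Nature at the center of the map.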

The researchers requested only the immediately preceding journal in the chain of rejections, so it was impossible to calculate the average number of times an article was rejected before ultimately finding publication. Starting from published articles also prevented the researchers from studying submissions that were abandoned after serial rejection. Still, they report that 75% of survey respondents indicated that their articles were published on their first submission attempt, which implies that average submission pathways are short and that the journal system as a whole is working efficiently. When rejection did happen, authors selected journals with marginally lower impact factors, with few exceptions. Taken together, these findings imply that authors generally target the appropriate outlet for their submissions and are risk-averse when resubmitting.
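A quick back-of-envelope calculation shows why the pathways must be short. If we assume (my simplification, not the paper’s model) that every submission round succeeds independently with probability p = 0.75, the number of submissions before acceptance is geometric with mean 1/p:

```python
# Back-of-envelope check (an assumption of mine, not the paper's model):
# treat each submission round as an independent success with p = 0.75.
# The number of submissions until acceptance is then geometric, mean 1/p.
p = 0.75
expected_submissions = 1 / p
print(f"{expected_submissions:.2f} submissions per published article")  # -> 1.33
```

Even allowing for lower acceptance rates on resubmission, the average chain length would remain short.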

While resubmission costs authors time and effort, it also brings real benefits. Articles previously rejected by another journal received significantly more citations than articles published on their first submission attempt. Calcagno interprets this finding as evidence that the peer-review process is doing its job. Indeed, two large surveys on peer review (Sense about Science, 2009, and Mark Ware/PRC, 2007) both indicate that scientists overwhelmingly agree that the peer-review process improves the quality of their work. Proponents of the publish-first, review-later model, by contrast, argue that producing more publications is better than improving the quality of one’s work.

James Evans, a University of Chicago sociologist of science, suggests an additional explanation for the improved citation impact findings. As quoted in The Scientist:

“Papers that are more likely to contend against the status quo are more likely to find an opponent in the review system”—and thus be rejected—“but those papers are also more likely to have an impact on people across the system,” earning them more citations when finally published.

Scientists cite the work of others for various reasons, not all of which are considered valid. In contrast, the decision of where to submit one’s manuscript — and, if rejected, where to resubmit — is based on careful, deliberate decision-making. Within any field, researchers are keenly aware of the pecking order of journals. Publication, after all, makes or breaks careers.

Calcagno proposes using his analytic measures to develop a new journal ranking index based on the flow of manuscripts. Such an index, he argues, would more closely reflect authors’ perceptions of journal quality. Yet the study’s own findings suggest that collective submission choices reflect essentially the same underlying phenomenon as collective citation behavior, so it is not clear that a new index would provide any additional information. Authors may simply be choosing where to submit based on the impact factor of the journal. To me, this study suggests that the impact factor is a reliable indicator of the pecking order of scientific journals.
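For readers curious what a flow-based index might look like, here is one hypothetical construction, not necessarily the authors’: since manuscripts cascade down the perceived pecking order, prestige runs against the flow, so one could reverse the resubmission edges and compute a PageRank-style centrality over the network built above.

```python
# Hypothetical flow-based ranking, not the paper's actual index: reverse the
# resubmission network (manuscripts flow downhill in perceived quality, so
# prestige flows the other way) and score journals by PageRank centrality.
import networkx as nx

G = nx.DiGraph()
# Toy edges in the same form as the earlier sketch: rejecting journal ->
# publishing journal, weighted by manuscript volume.
G.add_edge("Nature", "PLoS ONE", weight=2)
G.add_edge("Science", "Ecology Letters", weight=1)

scores = nx.pagerank(G.reverse(), weight="weight")
for journal, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {score:.3f}")
```

Whether such a score would diverge meaningfully from the impact factor is, as noted above, doubtful.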