On Wednesday afternoon (at exactly 1:10 pm EDT, to be precise), Thomson Reuters released its 2012 Journal Citation Reports (JCR), an annual collection of citation metrics on the performance of scholarly journals. While the JCR contains many statistics, most publishers and editors await a single metric: the Journal Impact Factor.

The Journal Impact Factor (JIF) is a single statistic that summarizes the average citation performance of articles published in a given journal between two and three years ago. For example, the 2012 JIF for journal X is the sum of all 2012 citations to its 2010 and 2011 articles, divided by the total number of articles it published in 2010 and 2011. This is the simple explanation; those interested in the mechanics of identifying and classifying what goes into the numerator and denominator of this equation are encouraged to read Hubbard and McVeigh (2011) and McVeigh and Mann (2009).
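
For illustration, the arithmetic behind a 2012 JIF can be sketched as follows. All counts here are invented for a hypothetical "Journal X", not taken from any real JCR data:

```python
# Invented counts for a hypothetical "Journal X".
citations_2012_to_2010_items = 1200  # 2012 citations to Journal X articles published in 2010
citations_2012_to_2011_items = 900   # 2012 citations to Journal X articles published in 2011

citable_items_2010 = 300  # articles Journal X published in 2010
citable_items_2011 = 250  # articles Journal X published in 2011

# 2012 JIF = citations in 2012 to the 2010-2011 cohort, divided by the size of that cohort
jif_2012 = (citations_2012_to_2010_items + citations_2012_to_2011_items) / (
    citable_items_2010 + citable_items_2011
)
print(round(jif_2012, 3))  # 3.818
```

The real calculation is messier, of course: as the two papers cited above discuss, deciding which documents count as "citable items" in the denominator is where much of the complexity (and controversy) lies.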

I’m not going to touch on the weaknesses of the JIF or its misuse by academia. For some editors, especially those whose journals perform poorly on citation metrics, or those with little to lose, criticizing the JIF has become a regular pastime. Every month, dozens of editorials are written on the topic, and the season of peak critique comes just after the Journal Citation Reports is published in mid-June.

While many scholars and editors eschew the notion of attributing the success of individual articles to the prominence of the journal, scientific authors continue to place great importance on the Journal Impact Factor when deciding where to submit their manuscripts. Open access authors are no different. Many institutions around the world have adopted a direct compensation model that rewards authors based on the Impact Factor of the journal in which they publish, an incentive that locks many of the world’s authors into this unidimensional measure of journal performance.

Despite its founders’ seeming disdain for the JIF, when PLOS ONE received its first JIF for 2009 (4.351), authors around the world responded by flooding the journal with manuscript submissions. The effect on the number of articles published in PLOS ONE became visible several months later (see figure below). In 2012, PLOS ONE published 23,464 articles, making it the largest journal the world has ever witnessed. Editors of biomedical journals with comparable JIFs could feel the gravity of PLOS ONE dragging down their own flow of manuscript submissions. The following year, PLOS ONE received its second JIF (4.411). We have witnessed its rise; now prepare for its fall.

The fall of PLOS ONE’s JIF is to be expected if you look at how the Journal Impact Factor is calculated and at the structure and editorial policy of the journal. Because the JIF is a backwards-looking metric, based on the performance of articles in their second and third years of publication and then reported six months later, a change in submission patterns takes several years to show up in the JIF.

For example, when PLOS ONE’s 2009 JIF was reported in June 2010, it reflected the performance of articles published in 2007 and 2008, when the journal was still young and small, had the backing of prominent scientists and, more importantly, had no JIF. The effect of the flood of manuscripts submitted in the third quarter of 2010 would not be felt for another two years.

Similarly, when the 2010 JIF was reported in June 2011, it reflected the performance of articles published in the pre-JIF era (2008 and 2009). The wave of post-JIF submissions was first detected only last June, when the 2011 JIF reported on articles published in 2009 and 2010; partly as a result, the 2011 JIF dropped by 7% from the previous year. (Note: journals that are growing quickly tend to have depressed JIFs.) The 2012 JIF is the first citation metric to encompass a full year of articles published in the post-JIF era. Based on the performance of articles published in 2012, next year’s (2013) JIF will likely decline further.
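
Why does rapid growth depress the JIF? Articles from the newer year of the two-year window have had less time to accrue citations than those from the older year, so a fast-growing journal's denominator is dominated by its least-cited cohort. A toy calculation, with invented per-article citation rates, makes the point:

```python
# Toy model of why fast growth depresses the JIF. The rates below are invented:
# articles tend to earn more citations in their second full year than in their first.
CITES_PER_OLDER_ARTICLE = 3.0  # avg. JCR-year citations to articles published 2 years prior
CITES_PER_NEWER_ARTICLE = 1.0  # avg. JCR-year citations to articles published 1 year prior

def jif(n_older, n_newer):
    """JIF for a journal with n_older articles from 2 years ago, n_newer from 1 year ago."""
    cites = n_older * CITES_PER_OLDER_ARTICLE + n_newer * CITES_PER_NEWER_ARTICLE
    return cites / (n_older + n_newer)

print(jif(1000, 1000))  # steady-state journal: 2.0
print(jif(1000, 3000))  # rapidly growing journal: 1.5
```

Both journals' articles perform identically; the growing journal's JIF is lower purely because its newer, less-cited cohort outweighs the older one.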

The reporting delay of the Journal Citation Reports is responsible, in part, for the boom and bust we are witnessing in PLOS ONE’s JIF. The other component fueling the decline in this particular metric is the editorial policy of the journal.

In smaller journals that base acceptance in part on novelty and significance, a downward spiral can be thwarted by concerted efforts of the editors to attract high-impact articles and reviews and by preventing perceived low-impact articles from being accepted. In the absence of perfect information on how any article will perform, the unpredictability of individual article performance can also shove a journal out of a downward spiral.

For very large journals that publish thousands (or tens of thousands) of articles per year, any one high-performing article has almost zero influence on the JIF. These journals are playing a large-numbers game that makes them completely insensitive to the performance of individual star articles: they operate entirely on the bulk performance of the article market. When PLOS ONE was still young and small, submissions by influential researchers had a real effect on the performance of the journal. If these researchers are still publishing with PLOS ONE, and have not defected to other journals promising revolutionary new forms of high-impact open access scientific publishing, like eLife, their effect is drowned out by the new author population that followed them to PLOS ONE.
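
The arithmetic of this dilution is easy to sketch. With invented numbers, compare how one heavily cited article moves the JIF of a small journal versus a PLOS ONE-scale one:

```python
def jif_with_star(n_articles, baseline_cites, star_cites):
    """JIF when one article in the two-year window earns star_cites citations
    and the remaining n_articles - 1 earn the baseline average."""
    total = (n_articles - 1) * baseline_cites + star_cites
    return total / n_articles

star = 200      # an exceptionally well-cited article (invented)
baseline = 4.0  # average citations per article in the window (invented)

small = jif_with_star(200, baseline, star)    # small journal: 200 articles in the window
large = jif_with_star(40000, baseline, star)  # megajournal: 40,000 articles in the window

print(round(small, 3))  # 4.98  -> the star lifts the JIF by almost a full point
print(round(large, 3))  # 4.005 -> virtually no effect
```

At megajournal scale, even a 200-citation article moves the JIF by a few thousandths of a point; only shifts in the average behavior of the whole article population matter.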

The second reason why PLOS ONE’s editorial policy will not be able to correct their declining JIF is based on the differential citation patterns of the biomedical literature coupled with the preference of authors to be published in a journal that meets (or exceeds) the citation potential of their own articles. Journals in certain biomedical fields (e.g. cancer research) tend to have much higher JIFs than other fields (e.g. plant taxonomy). In this example, plant taxonomists would be better off publishing in PLOS ONE–as the JIF would likely be higher than any plant taxonomy journal–and cancer researchers would be better off publishing in a disciplinary journal, where the JIF would not be depressed by the publication of other fields of biology. When operating in a journal performance market (rather than an article performance market), multidisciplinary journals reward the underperformers and punish the overachievers.

Other multidisciplinary journals (e.g. Science, Nature, PNAS) run into the same problem, where the citation dynamics of one field (e.g. planetary science) drag down the performance of others (e.g. oncology). However, these journals have strict editorial guidelines that select manuscripts from the best of each field and reject the rest. These journals also have Editors in Chief who, like captains of their own ships, are entrusted to develop the focus and direction of their journals. If the journal is publishing too much in one field, or not enough in another, the editorial policy can be changed. PLOS ONE has no Editor in Chief and, as far as I can tell from its editorial and publishing policies, no targets or limits on how many articles are published in any one field. If disciplinary journals are like small cruisers with captains on deck, PLOS ONE is, by design, more like a captainless barge with no engine room, left to the direction of the currents.

This deliberate design, along with the vision and dedication of its founders, has served PLOS ONE well, allowing it to be wildly successful. Yet the structure of the journal was founded on scale and efficiency, and under these conditions its founders have very little control over the direction of their ship.

The open access landscape of 2013 is radically different from the landscape of 2007. Not only do other multidisciplinary open access journals in the biomedical sciences now compete with PLOS ONE, many specialized titles do as well, offering fast publication, methods-based review, and an array of post-publication metrics. Journals like F1000Research even publish before review. PLOS ONE now competes with for-profit publishers operating journals that are not required to create surpluses and may run indefinitely even at a loss.

The shape of the PLOS ONE publications curve suggests that the journal may be reaching its asymptote: its maximum publication output given authors’ capacity to generate manuscripts. However, this forecast assumes some equilibrium in the market. Unless PLOS ONE has cultivated a strong and loyal group of authors, a decline in its 2012 Impact Factor will likely signal the year when authors (at least those for whom the JIF is an important factor) turn away from the megajournal and return (as some hope) to a discipline-based model of publishing.