The lifecycle of scholarly articles across fields of economic research

Sebastian Galiani, Ramiro Gálvez, Maria Victoria Anauati

Counting citations is the mainstream approach to judging a paper’s academic impact. This column summarises research into the citation lifecycle of economic papers and how it differs for papers classed as applied, applied theory, theory, or econometric methods. There is a clear-cut lifecycle for economics articles. More importantly for professional evaluation, the cycle differs markedly across fields.

Citation counts stand as the de facto methodology for measuring the influence of scholarly articles in today’s economics profession. Nevertheless, the practice of naively using citation analysis to compare the impact of scholarly articles, without taking into account other factors that may affect citation patterns, has drawn a great deal of criticism (see Bornmann and Daniel 2008).

One recurrent criticism focuses on ‘field-dependent factors’, which refers to the fact that citation practices vary from one area of science to another (with the focus generally being on differences in citation practices between hard science and the social sciences).

Citation counts as a technique for measuring scholarly research importance

In a recent paper (Anauati et al. 2015), we analyse whether the ‘field-dependent factors’ critique is also valid for fields of research within economics. We began by assigning every paper published in the top five economics journals – The American Economic Review, Econometrica, the Journal of Political Economy, The Quarterly Journal of Economics, and The Review of Economic Studies – to one of four fields of economic research: applied, applied theory, econometric methods, and theory.

The sample consisted of 9,672 articles published in the top five journals between 1970 and 2000. It did not include notes, comments, announcements or American Economic Review Papers and Proceedings issues. The criteria used to assign a paper to a category are as follows:

Applied papers have an empirical or applied motivation. They rely on econometric or statistical studies as a basis for analysing empirical data, although they may include simple models that serve as a theoretical framework for the analysis.

Applied theory papers develop theoretical models to explain a fact. The empirical analysis is not the most important feature of the paper, but a supplement.

Econometric method papers develop econometric or statistical methodologies.

Theoretical papers do not contain an empirical section. They usually approach a topic through modelling, making extensive use of formal mathematics and logic.

Figure 1 plots trends in the appearance of papers dealing with different fields of research in every journal and in all the top five journals as a group. The patterns that emerge are quite interesting. In particular, it is notable how applied papers have grown in importance since the beginning of the 1990s, whereas theory papers have done just the opposite. This shift has been particularly sharp in the case of the QJE, where applied papers have risen to prominence since the mid-1990s while edging out theory papers.

Figure 1. Trend in the share of articles by journal and field of research

Trends in citation counts

Once we had assigned every paper to a field of research, we collected from Google Scholar detailed data on the citations of each article for every year since its publication. Figure 2 plots the number of citations the average paper in each field of research received per year since publication.

Figure 3 reproduces the information provided in Figure 2, but based on median citation counts instead. For the purposes of this analysis, in both figures we grouped publication dates into five-year periods in order to reduce effects related to ‘time-dependent factors’ (see Bornmann and Daniel 2008). The difference in levels between the two figures illustrates the well-documented, strong asymmetry in citation patterns across articles: a few articles receive a large number of citations, while the vast majority receive far fewer.
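The gap between the mean-based and median-based pictures can be illustrated with a toy calculation (the citation counts below are made up, purely for illustration):

```python
# Hypothetical per-year citation counts for ten papers. Citation
# distributions are highly right-skewed: a couple of heavily cited
# papers pull the mean far above the median.
from statistics import mean, median

citations = [0, 1, 1, 2, 2, 3, 4, 6, 40, 120]  # made-up counts

print(mean(citations))    # 17.9 -- dominated by the two outliers
print(median(citations))  # 2.5  -- what the 'typical' paper receives
```

This is why mean-based figures sit at much higher levels than median-based ones, even though both summarise the same underlying data.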

Figure 2. Mean citations received per year since publication by papers across fields of research and publication dates grouped into five-year periods

Note: Mean citations are smoothed using five-year centred moving averages. The y-axis scales vary across sub-figures.

Figure 3. Median citations received per year since publication by papers across fields of research and publication dates grouped into five-year periods

Note: Median citations are smoothed using five-year centred moving averages. The y-axis scales vary across sub-figures.
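The smoothing mentioned in the figure notes can be sketched in a few lines; the function below is a minimal illustration of a five-year centred moving average (the yearly series is hypothetical, and endpoint handling is an assumption, as the figures do not specify it):

```python
# Minimal sketch of a five-year centred moving average over a series of
# yearly citation counts. Endpoints use a shrunken window so the
# smoothed series keeps the same length (one of several conventions).

def centred_moving_average(series, window=5):
    """Average each point with its neighbours, (window - 1) / 2 on each side."""
    half = window // 2
    smoothed = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        smoothed.append(sum(series[lo:hi]) / (hi - lo))
    return smoothed

yearly_citations = [1, 4, 9, 12, 10, 8, 5, 3, 2, 1]  # made-up counts
print(centred_moving_average(yearly_citations))
```

Smoothing in this way removes year-to-year noise while preserving the hump shape of the underlying lifecycle.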

Some interesting patterns also emerge here. First, as can be seen from the variation in the y-axis ranges across panels in each figure, citations for papers published in 1995-1999 are drastically more numerous than for those published in 1970-1974, and this increase is steady across successive five-year periods. Following Neff and Olden (2010), but using the term more broadly, we refer to this phenomenon as ‘citation inflation’.

Second, both figures indicate that, while differences across fields of research do not seem very large for the 1970-1974 period, such differences have emerged as time has passed and the number of citations has risen. Moreover, an analysis of the trends in citations shows that the first half of the 1980s was clearly the time of successful econometric method papers, while the 1990s (especially the second half) was the decade of successful applied and applied theory papers.

The lifecycle of scholarly articles across fields of economic research

As the curves plotted in Figures 2 and 3 may be constantly ascending due to citation inflation, in Anauati et al. (2015) we present a methodology based on quantile regression for identifying the lifecycle of papers across fields of research. The technique has the advantage of controlling for citation inflation and of allowing the analysis to be carried out at different (conditional) levels of paper success. Figure 4 shows the results of estimating the lifecycle of economics papers across fields of research with our methodology.
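A hedged sketch of this kind of estimation, not the authors' exact specification, might regress yearly citation counts on paper-age dummies at quantile τ, with calendar-year dummies absorbing citation inflation; the fitted age coefficients then trace the lifecycle at that level of success. The data below are simulated, and the variable names and functional form are assumptions for illustration:

```python
# Sketch: lifecycle estimation via quantile regression (statsmodels).
# Citations per year are regressed on paper-age dummies plus calendar-
# year dummies (a rough control for citation inflation). All data here
# are simulated with a hump-shaped lifecycle peaking around age 4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pub_year in range(1970, 2001):
    for age in range(20):
        lifecycle = age * np.exp(-age / 4)          # hump shape, peak at 4
        inflation = 1.03 ** (pub_year + age - 1970)  # 3% yearly inflation
        rows.append({"age": age,
                     "year": pub_year + age,
                     "cites": rng.poisson(5 * lifecycle * inflation)})
df = pd.DataFrame(rows)

model = smf.quantreg("cites ~ C(age) + C(year)", df)
fit = model.fit(q=0.5)  # tau = 0.5: the median lifecycle

# Age coefficients (relative to age 0) trace the estimated lifecycle.
print(fit.params.filter(like="C(age)").head())
```

Re-fitting with different values of `q` (e.g. 0.85 or 0.95) yields the lifecycle at higher conditional levels of success, which is how the heterogeneity across quantiles discussed below can be examined.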

Figure 4. The lifecycle of papers obtained by regression analysis

Note: Estimated curves are smoothed using five-year centred moving averages. The y-axis scales vary across sub-figures. The sample consists of 9,672 articles published in the top five journals between 1970 and 2000 – it does not include notes, comments, announcements or American Economic Review Papers and Proceedings issues. In the figure, ‘OLS’ stands for ordinary least squares, ‘QR’ stands for quantile regression and τ for the quantile of the distribution of the error term, which we associate with different levels of success.

As can be seen, economic research articles effectively have a lifecycle. Under almost every estimated curve, papers begin their life with a low number of citations per year. That number then rises over a given period of time until it reaches a peak, after which papers decline in importance as measured by yearly citation counts. Median papers reach their peak around three to five years after publication. Ten years after being published, the median paper in every field of research receives a negligible number of citations per year.

Focusing on the differences across fields of research, theoretical papers are, in general, cited the least often (a feature also visible in Figures 2 and 3), and the performance of econometric method papers in this respect is almost identical to that of theoretical papers. One interesting feature of econometric method papers for τ = 0.85 is that they age relatively well – from 15 years after publication onward, their citation levels behave almost the same way as those of applied papers, even though applied papers reach a much higher citation peak. An almost non-descending curve for econometric method papers is observed for τ = 0.95. This suggests that the lifecycles of econometric method papers are very heterogeneous across quantiles – most have a modest lifecycle, but the most successful ones are exceptional not only within their own field of research but also in relation to economics papers as a whole. However, since publishing an extremely successful paper is by no means an easy task, the evidence shows that theory and econometric methods are the fields that benefit least from the naive use of citation counts.

By contrast, applied and applied theory papers are the clear winners. During their first years following publication, they receive more citations than papers in the other categories (and these are precisely the years that matter for journals' impact factor calculations), they reach a higher peak (more than twice as high as the peak for theoretical papers), and that peak level seems to last longer.

Lastly, our results go hand in hand with the exceptional performance of The Quarterly Journal of Economics during the 1990s documented by Card and DellaVigna (2013). It is plausible that this performance is attributable to the journal's shift in focus during that period from mainly theoretical articles to applied papers (see Figure 1), which translated directly into a higher number of citations.

Conclusions

Even though citation counts are an extremely valuable tool for measuring the importance of academic articles, the lifecycle patterns observed across fields of economic research support the ‘field-dependent factors’ critique within this discipline. The evidence provides a basis for a caveat regarding the use of citation counts as a ‘one-size-fits-all’ yardstick for measuring research outcomes across fields of economics, as the incentives generated by their use can be detrimental to fields of research that effectively generate valuable (but perhaps more specialised) knowledge, not only in economics but in other disciplines as well.

According to our findings, pure theoretical economic research is the clear loser in terms of citation counts. Therefore, if specialised journals' impact factors are calculated solely on the basis of citations during the first years after an article’s publication, then theoretical research will clearly not be attractive to departments, universities or journals that are trying to improve their rankings or to researchers who use their citation records when applying for better university positions or for grants. The opposite is true for applied papers and applied theory papers – these fields of research are the outright winners when citation counts are used as a measurement of articles' importance, and their citation patterns over time are highly attractive for all concerned. Econometric method papers are a special case; their citation patterns vary a great deal across different levels of success.

References

Anauati, V, S Galiani, and R Gálvez (2015), “Quantifying the life cycle of scholarly articles across fields of economic research”, Economic Inquiry, forthcoming, preprint version available at http://dx.doi.org/10.2139/ssrn.2523078.

Bornmann, L and H D Daniel (2008), “What do citation counts measure? A review of studies on citing behaviour”, Journal of Documentation 64(1): 45-80.

Card, D and S DellaVigna (2013), “Nine facts about top journals in economics”, Journal of Economic Literature 51(1): 144-161. VoxEU’s column based on this article is available at http://www.voxeu.org/article/nine-facts-about-top-journals-economics.

Neff, B D and J D Olden (2010), “Not so fast: inflation in impact factors contributes to apparent improvements in journal quality”, BioScience 60(6): 455-459.