Journals have long served as the dominant mechanism by which scholars make priority claims and disseminate their work to each other and to the public. Over the last few decades, these communication vehicles have been increasingly metricized, as one aspect of a general effort to measure, rank, and rate the quality of research. When the Journal Citation Reports (JCR) was first published as an addendum to the 1975 Science Citation Index, it suggested using aggregated citation metrics – as indicators of journal use – to inform librarians' collection management decisions, while the detailed citation network data could help scholars working at the intersection of fields find publication venues. Over time, a single indicator from the JCR, the Journal Impact Factor (JIF), moved out of the library context and has increasingly been applied not only to assess journals, but also to assess and predict the performance of individual documents and their authors within journals. This extensive misapplication of the JIF and other journal indicators has spurred increased scrutiny of the construction and use of journal metrics, evidenced by the growing number of declarations and manifestos asking either that these indicators be used responsibly or that they be eliminated from processes of research assessment.



To contribute to the conversation around the construction and use of journal indicators, we convened a diverse group of stakeholders—metric providers, funders, evaluation agencies, administrators, publishers, editors, librarians, scientometricians, researchers, and readers/users of scholarly publications—for a one-week workshop hosted jointly by the Lorentz Center, the Centre for Science and Technology Studies at Leiden University (CWTS), and Clarivate Analytics. The workshop, entitled “Rethinking Impact Factors: New Pathways in Journal Metrics”, focused on indicators for scholarly journals. It was not restricted to citation-based indicators, but examined a broad array of current and potential indicators at the journal level.



Our discussion began with an enumeration of the scholarly functions of journals. This enumeration provides a foundation for a typology of indicators grounded in those functions, reflecting our belief that journal indicators should serve to strengthen and reinforce the primary functions of the journal. At present, citation-based indicators are the most widely used and promoted journal indicators; we contend that broadening the scope of journal indicators to match the multi-dimensional roles of journals should lead to a more responsible evaluation framework, in which values such as diversity, transparency, and reliability are considered an integral part of scholarly research and impact. We then provide principles for the construction and use of journal indicators. These principles are essential to ensure that a proliferation of metrics does not distort the scholarly communication system, but instead leads to more granular and transparent assessments. Throughout, we sought to relate ideals of fairness and openness in journal indicators to the practical realities of those who generate and use them. We end with a set of recommendations and further actions.