Editor’s Note: Today’s post is by Mark Edington. Mark is the founding director of the Amherst College Press and the publisher of Lever Press, two initiatives to build a pathway for peer-reviewed, digitally native scholarship from a liberal arts perspective through a platinum open access model.

In a number of recent posts in The Scholarly Kitchen (TSK), contributors have offered a variety of perspectives on the practice of peer review. Just this year, Robert Harington has pointed out the need for publishers to ensure that reviews generated by independent referees offer something of value to authors, while also seeing to it that reviewers receive some form of recognition for their work. Tim Vines has discussed how the efforts of authors to reassert control over the peer review process risk underestimating the critical roles played by editorial professionals in the publishing process, and has drawn attention to a poorly sourced article on peer review that itself managed to expose weaknesses in how peer review is implemented. And David Crotty explored the question of including the findings and arguments of non-peer-reviewed materials, available through preprint repositories, in work published through formal, peer-reviewed processes. Indeed, a search through TSK’s archives turns up no fewer than 43 separate blog posts that include “peer review” in the title.

Readers of the Kitchen hardly need reminding that peer review, as both a practice and a matter of reputational concern for scholarly presses, stands near the center of what we mean when we say “scholarly publishing.” It would be reasonable to conclude — given the centrality of the practice to the core claims scholarly publishing makes to distinctiveness and value — that there would be some clear set of standards, some agreed-upon set of definitions, for how this critical undertaking is performed.

This might be all the more expected in view of the weight placed on peer review in underwriting the claim to authority that scholarly publishers make as to the unique value of what they set in the hands of readers. As long ago as 2000, this notion was at the very center of an effort by leaders in academe, research libraries, publishing, and learned societies to set out navigational aids for charting the waters of change in scholarly communication. Their “Principles of Emerging Systems of Scholarly Publishing” — also known as the “Tempe Principles” — argued, inter alia, that,

…the system of scholarly publication must continue to include processes for evaluating the quality of scholarly work[,] and every publication should provide the reader with information about the evaluation the work has undergone.

The first clause in this phrase is uncontroversial; it essentially says peer review is part of what scholarly publishing does. But the second, seemingly anodyne, is anything but. In what way, exactly, do publishers “provide the reader with information about the evaluation the work has undergone”? At least in the world of scholarly monographs, there are no clear or consistent systems for performing this simple function across presses. Indeed, while giving an assurance that a press conducts systematic peer review is a condition of membership in the Association of University Presses, and while two years ago the Association took a significant step in publishing a “Best Practices for Peer Review” document offering guidance to editors and editorial boards, the association itself sets no standards or minimum requirements for what peer review means.

It is somewhat perplexing that a practice both central to our claim to distinctive authority as publishers, and implemented by all of us, has no clearer, more public standards, nor any way of sharing with readers how those standards have been applied. This seems like a first-order problem, especially at a moment when the value of scholarship, and of the knowledge it sets forth, is increasingly relativized or simply dismissed.

In view of this, I have been thinking, in close collaboration with Amy Brand of the MIT Press, about what specific steps might be taken to achieve greater transparency in the practice of peer review: how its various forms, traditional and emerging, could be defined, and how they could be communicated to readers in simple, clear ways. A first and critical inspiration for this work was the realization by my colleagues on the editorial board of the newly launched Lever Press that the best way of addressing the reputational challenge facing open access presses (the widespread but ungrounded notion that some iron law of nature links open access as an outcome to a poor peer review process) was to state publicly both our understanding of peer review processes and our commitment to disclosing, in each title we publish, which process we have implemented. Not surprisingly, we have been inspired and guided by the work of Creative Commons, which has equipped creators with new tools to specify and tailor the rights they are willing to share: a simple system of symbols (or “buttons”), each linked to a plain-language document, a more comprehensive license, and a bit of machine-readable code that helps cataloging systems identify and share those rights.

With generous support from the Open Society Foundations and the American Academy of Arts and Sciences, we convened a gathering of stakeholders in Cambridge in January of this year to share our work and explore questions that shape the current conversation about the place, conduct, and labor of peer review. Attendees included colleagues from research libraries, scholarly publishing, and learned societies; researchers with a focus on scholarly communication; and technology innovators working to create preprint repositories, establish systematic means of assigning credit for the labor of writing reviews, and provide the metadata systems that enable the development and discovery of the scholarly record.

We created a report summarizing both our preparatory work and the conversations of our gathering, and shared it back with our colleagues for comment and annotation using PubPub, MIT’s open publishing platform. With the help of contributions and fruitful insights from these colleagues, we’ve now developed a final report of the work.

The report suggests that, while scholarly publishers differ in many ways (institutional affiliation, audience, business model, editorial process, for example), the fact that we all share a commitment to peer review is a signally important common link distinguishing what we publish from the work of all other publishers. This distinction, on which we base our claim to the value and authority of what we set before readers, is not well served by being a “black box” phenomenon that effectively makes our titles blind items (think of mattresses), with the marks of quality unseen inside. (Being a blind item, in which claims of quality rest on characteristics the buyer cannot inspect, is one of several ways in which scholarly monographs have come to behave, from an economic standpoint, like luxury goods, which often fall into the same category.) It is time, we suggest, for publishers to join in an effort to articulate just what is meant by widely used but imprecise labels like “blind” and “open” and their various descriptive qualifiers.

The report also proposes that we can and should do much better at disclosing to readers both how we apply various forms of review, made clear on the basis of definitions established and promulgated publicly, and the object to which the review is applied (e.g., a manuscript, a proposal, a dataset). Following the inspiration of Creative Commons, we think this can be accomplished by developing and consistently implementing a system of symbols, or icons, that convey both the kind of review undertaken and the focus of that review. That said, and again instructed by the Creative Commons example, it is by no means enough to create and use a clever system of symbols to disclose peer review unless those symbols are tied to, and grounded upon, a set of definitions commonly held by the principal stakeholders in the system: publishers, yes, but also librarians, learned societies that act as publishers, and of course authors.
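As a purely illustrative sketch of what such a pairing of icons with commonly held definitions might look like (the report itself specifies no implementation; every code, label, and URL below is invented for illustration):

```python
# Hypothetical sketch, in the spirit of Creative Commons: each icon code pairs
# a plain-language definition with a link to a fuller public specification.
# All codes, labels, and URLs here are invented examples, not a real standard.
from dataclasses import dataclass


@dataclass(frozen=True)
class ReviewBadge:
    code: str        # short icon code printed in the book or its metadata
    label: str       # human-readable name of the review process
    definition: str  # plain-language summary a reader can understand
    spec_url: str    # link to the full, publicly promulgated definition


# A hypothetical registry of shared definitions that presses could agree on
BADGES = {
    "DA": ReviewBadge(
        code="DA",
        label="Double-anonymous review",
        definition="Neither authors nor reviewers know each other's identities.",
        spec_url="https://example.org/review/double-anonymous",
    ),
    "OR": ReviewBadge(
        code="OR",
        label="Open review",
        definition="Author and reviewer identities are disclosed to each other.",
        spec_url="https://example.org/review/open",
    ),
}


def describe(code: str) -> str:
    """Return the plain-language statement a press might print for a title."""
    badge = BADGES[code]
    return f"{badge.label}: {badge.definition}"


print(describe("DA"))
```

The point of the sketch is the structure, not the particulars: a small symbol a reader sees on a title resolves to a definition that is the same no matter which press applies it.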

As greater emphasis is placed on discoverability and on the relationships between authors, reviewers, and ideas, we will only be able to create greater consistency in the conduct of peer review, and greater transparency in disclosing it, by ensuring that our metadata include information about what forms of review were conducted on a given published object (and, where appropriate, links to the reviews themselves). Existing systems for the persistent identification of scholarly objects, notably digital object identifiers (DOIs), and of researchers, such as ORCID iDs, should be seen as critical to enabling greater transparency in peer review, and the organizations behind them should be tapped as participants in designing the systems to achieve it.
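To make the idea concrete, a review-metadata record of the kind described might look something like the following; this is a hypothetical sketch with invented field names and values, not a reproduction of any existing schema (Crossref, ORCID, or otherwise):

```python
# Illustrative sketch only: a hypothetical metadata record tying a published
# object's DOI to the form of review it underwent and, optionally, to the
# reviews themselves. All field names and values are invented for illustration.
import json

record = {
    "doi": "10.0000/example.12345",            # hypothetical DOI for the title
    "object_type": "monograph",                # the kind of published object
    "review": {
        "type": "double-anonymous",            # drawn from a shared taxonomy
        "object_reviewed": "full manuscript",  # vs. a proposal, dataset, etc.
        "review_dois": [],                     # links to published reviews, if any
        "reviewer_orcids": [],                 # disclosed only under open review
    },
}

# Serialized this way, cataloging and discovery systems could index not just
# the title but the form of evaluation it underwent.
print(json.dumps(record, indent=2))
```

The design choice worth noting is that the review description travels with the object's persistent identifier, so the disclosure survives aggregation and reuse rather than living only on the publisher's page.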

We are under no illusions that our work offers a fully developed or implementable solution. But by sharing our efforts and the insights of our colleagues, we hope that other stakeholders will now take leadership roles in creating conversations among their own constituents. We know there are many important questions our suggestions do not address: whether and how reviews of preprints could use such a scheme, how it would relate to the work publishers do (since they would not, by and large, want to provide warrants for a review system they did not in some way oversee), and how to ensure that such a system would not be exploited by unscrupulous actors.

Having found so many colleagues who agreed with our sense that the time has come to share with readers a clearer account of what we do as scholarly publishers, and who had so many good ideas for how to achieve this, we now want to share our report more widely. We hope it will catalyze conversation and collaboration among all who share an interest in ensuring that the distinctive qualities of scholarly publishing continue to offer clear and meaningful contributions to the exchange of ideas, even in the midst of what seems a moment of discourse ungrounded in facts and thought.