We are entering a new age of transparency and openness in science. New scientific practices that would have been unthinkable to most of us even a decade ago are now becoming commonplace. One of my recently completed projects was fully preregistered on the Open Science Framework website, complete with predictions, reasons for possible exclusion of data, the analytic techniques to be used, and so forth. Well, yes, I am fourth author on the project and one of my recent PhD students, Adam Putnam, did all the work, but I will still bask in being part of the new wave in science.

Even though I have not been at the forefront of writing about all the new practices in science, I followed along from my perch as chair of the APS Publications Committee. (I stepped down from that position a year ago, once Advances in Methods and Practices in Psychological Science (AMPPS) had been established.) I was edified by the various articles and e-mails I received, and then by the collection of blog posts and tweets forwarded by others, about the pros and cons of the new practices. I think the concept of “open science” and its transparent practices have a strong toehold in our field, at least, and are gaining momentum in all of science. The Center for Open Science (and its Open Science Framework) is one of many exciting developments. Transparent practices seem here to stay.

With one glaring exception: Transparency in publication practices. Some journals, such as the Journal of Educational Psychology, have initiated a “masked review policy, which means that the identities of both authors and reviewers are masked. Authors should make every effort to see that the manuscript itself contains no clues to their identities” (from the website). Other journals do the same. This procedure can present a problem for those people with a sustained record of research on the topic of the manuscript. Do you leave out self-citations from the references? I have seen that happen with a citation of “Author, 2011,” but of course that can itself be a clue to identity. Also, this practice of masking the authors conflicts with the idea from the open science movement of posting one’s paper for comments (free reviews) on a website before submission to a journal. Other journals permit authors to submit anonymously but do not require it, and other models are possible. I am not sure if the practice of anonymous submission is increasing, and I cannot seem to find data on the issue.

Should Reviews Be Signed? What About Action Letters?

Once a paper arrives in the editor’s office, it is either triaged (see, especially, Psychological Science in our field) or sent out for review. Most reviewers choose to be anonymous. I don’t, and I know other cognitive psychologists who sign their reviews, too, but I have been told that the practice is rare in other disciplines.

Why did I change? I edited a journal in the 1980s and became used to signing my action letters, so I saw no reason to change that practice for reviewing. I thought, and still think, that signing encourages me to write more thoughtful and respectful reviews. Of course, the practice leaves me open to receiving critical responses from recipients of my reviews. A year ago, I reviewed a paper on an old issue in the psychology of memory that did not cite relevant research, so I took a few paragraphs to provide a tutorial review that I thought might be helpful. One of the authors wrote to me and the action editor to say that he found the tone of the review offensive; in particular, he found my review “condescending.” I wrote back an apology and said I thought I was being “educational.” But I went back to my review and, sure enough, the author had a point regarding the tone of the review. In my defense, I was annoyed at reviewing a paper on an issue (not even one that I studied) by authors who showed little appreciation of the literature. The hazard of signing reviews is having your reviews reviewed, but that’s fine with me. Transparency. Why snipe at others from behind a rock?

I recently was asked to serve as an editor for two papers for the Proceedings of the National Academy of Sciences (PNAS). Authors are identified to the editor when they submit papers. The editor-in-chief (or perhaps a senior staff person) assigns the paper to a more specialized associate editor. If the paper is not triaged at these early stages (50% are), the associate editor asks someone more specialized (me, in this case) to serve as action editor for the paper. In the most recent case, I chose several reviewers, and rather quickly the reviews came back. PNAS does not permit identification of reviewers to authors, but reviewers are put on a tight deadline — 10 days — for submission of reviews. I had read the paper, so when the reviews came in, I read them a couple of times, read the paper again, and wrote an action letter.

I asked to see how the eventual package looked when it was returned to the submitting author. I found what I had been told to expect: The entire set of information came from PNAS, but neither the reviewers nor I were identified. From the authors’ perspective, some shadowy presence emerging from PNAS had made pronouncements about the publishability of their paper. In my experience, this takes anonymity to a new level, but perhaps this practice is common in some fields of science. If the paper is eventually accepted and appears in PNAS, I will be identified in a footnote as the action editor who handled it.

I wondered why there has been so little discussion of anonymity in submission and reviewing in the new transparency movement, so I wrote to several friends who have been more deeply involved in the open science movement, and I asked them. Had I just missed the relevant articles? I was told that their entire community is having heated debates about the merits and demerits of transparency in submission and reviewing, but mostly on Twitter, in blog posts, and in other venues that I don’t read. Let me consider some of the issues, even if briefly.

Anonymous Submissions

Concerning submissions, the argument is that anonymous submission (assuming it works) aids researchers who are starting out, who are not at the most well-known universities, who may be from another country, and so on. Making submissions anonymous may give such investigators a shot at a fairer process than they might otherwise receive. I think this is a reasonable argument, but there are counterarguments. For one, many reviewers really bend over backward to help young researchers or ones who are not native English speakers, especially if they see a reasonably good paper that needs some reshaping. If the reviewer does not know who submitted the paper, she or he might just write a short negative review without trying to be particularly helpful. Also, sometimes knowing the author might make a difference. Suppose a paper arrives in the editor’s inbox and its message is that several experiments have provided a devastating rebuttal of Snerdley’s important theory of something-or-other that he has been pushing for years. It might be worth knowing whether Snerdley, rather than Snerdley’s long-time critic, is the author.

Yet the bias can go in the other direction. A famous researcher may get a mediocre paper accepted simply based on reputation, as if the logic is, “Oh, it’s a paper by X, so it must be a good paper.” This may be less likely to occur with anonymous review — except that, of course, the editor knows who the author is and is the one making the decision about publishability. I have heard of cases in which, when a paper was triaged, the editor got a note that essentially said, “Don’t you know who I am?” And the answer is yes, and I just desk-rejected your paper.

Another issue, raised by a commentator on this column, is that anonymous submission may encourage authors to submit rough drafts of their papers, thinking that because the reviewers will not know who they are, why go through those extra two revisions to comb out all the small problems? The reviewers will do that. That is not fair to the reviewers or the editor.

At any rate, I can see the issue of anonymous submission either way. Pros and cons exist, and as usual it depends on how one weights them. Researchers can vote with their feet (as it were) by choosing to submit or not submit to journals requiring them to make their papers anonymous.

Signing Reviews

I used to encourage people to sign reviews, but after numerous discussions, I’ve backed off. Good counterarguments exist. Signing poses a danger to young scholars who might be advising rejection of a paper by someone senior who will later be asked to write a reference letter for the reviewer’s tenure case. Or that senior person may later be an editor and get even when the young scholar submits a paper. (Yes, we would like to think these things do not happen, but we know better.) That problem exists at the senior level, too. I do think signing reviews makes the reviewer read more carefully, think harder, and be more civil. Yes, when reviewers sign, perhaps they become too polite. One problem noted by editors is that a reviewer will write a lukewarm-to-warm review, but then in the checklist of recommendations and the private note to the editor, will say the paper should definitely be rejected. This makes the editor look like a jerk for rejecting the paper despite slightly positive reviews. I try never to do that in writing reviews, and I usually do not write private comments to the editor; my review says what the editor needs to know. At any rate, I still always sign my reviews unless the journal prevents it, which some do. They take my name off, which is odd. One of my friends who also signs told me that he refuses to review any longer for a journal that follows this practice.

In discussing the issue of signing reviews over the years, I have found some people who always sign, and some who at some point went from not signing to signing. However, I also discovered other people who used to sign reviews but now do not, and they give good reasons. I have come to the conclusion that it is simply an individual choice. I wrote an earlier column about reviewing in which I provided 12 tips. Perhaps the most critical one is to review a paper using the same tone as if you were going to sign it and be identified. Also, never, ever choose to sign your positive reviews and not your negative ones!

The Editor’s Role

What about the editor? Is there any reason for an editor not to sign his or her name, other than not wanting to get pushback? Not that I know of. Psychological Science has begun publishing the name of the action editor who accepted the paper along with the article, which I think is a good practice. AMPPS will do the same. Other journals should follow suit, in my view. Some journals publish reviewers’ names, too, but that can be a fraught practice. If someone writes a negative review and the paper is accepted because of other positive reviews, that person’s name appears with the paper as if he or she endorsed it, too.

One interesting model comes from the BMJ, formerly the British Medical Journal, which has the most open publication practices I have found. Briefly, each article not triaged is considered by peer reviewers and several editors. Reviews are signed and are made public (with the authors’ responses to reviews) when the paper is published. Everyone in the process, editors and reviewers alike, is identified. This takes transparency to a new level, one at the opposite end of the spectrum from PNAS.

The editor has a critical role in the whole process. The obvious part is that the editor makes the decision about publishability. The less obvious role is that the editor selects the reviewers. When I was associate editor and then editor of the Journal of Experimental Psychology: Learning, Memory, and Cognition in the 1980s, I felt as if I could strongly bias the eventual decision on a paper just by my selection of reviewers. Editors come to know that some reviewers dislike almost every submission, while others have a positivity bias. Selection of fair reviewers is a critical step, and editors tell me that it is getting harder to find good reviewers (perhaps due to the proliferation of journals).

A Thought Experiment Realized

Years ago, around 1990, Endel Tulving and I were chatting in my office at Rice University, discussing the issue of anonymity in science and the desire to make scientific submission and review anonymous “for protection.” Endel proposed a thought experiment involving two types of journals: one set operating as journals do now, and an alternate set in which authors would identify themselves to reviewers, reviewers would identify themselves to authors, and editors would of course identify themselves. The latter would be the journals with open, transparent editorial processes (although we may not have used those terms in 1990). He wondered whether scientific progress might be greater if we had this kind of transparency in science. The thought experiment was to set up journals of both types and see which one researchers would elect to use — which would win in submissions and in the discovery of new knowledge. But we agreed at the time that we would never know the outcome.

Now I think we might. Journals in our field and across science are experimenting with various degrees of transparency in the editorial process. In consulting people while writing this column, I learned about various journals in numerous disciplines. At one end, there is the BMJ model, though not yet employed by any psychology journals that I know of. (Collabra, the journal published by the Society for the Improvement of Psychological Science, has some of these features.) At the other end, there is the PNAS model. And we see (and will continue to see) journals experimenting with other kinds of practices, such as requiring that all submissions be vetted by being posted on a website. Some journals now forbid such posting, whereas others might encourage or even require it. In due course, over the decades, such experimentation may lead to new models of journal publishing. Which journals will receive the best submissions? What forms of publication will survive? I would like to bet on more open practices, but I am often wrong in my bets.