As an editor of two decent journals (IRFA and FRL), I have the unenviable task of rejecting many academic papers. Some I reject at my own level – these are what are called desk rejects. Others I reject after peer review. Why?

This is not the place to debate the many issues around peer review. Let us accept that, for the immediate and medium-term future, this process is how it is, and will remain, for the upper echelon of journals (and yes, that is itself a fuzzy and inchoate classification).

Between editing my own journals, chairing a medium-sized conference, and serving as an associate editor at other journals, I see over a thousand academic papers per annum. Very few are pure rubbish, at least on the surface. By that I mean almost all show evidence of considerable work: of organisation and marshalling of arguments, of data analysis or theoretical development, of thought and effort. And yet we reject many; in the journals, most.

Rejection stings. The more papers you publish – and I have 100 or more – the more used you get to rejection. But still it stings. How dare those bozos at X decline to publish! The correct reaction is to take the rejection, fume and whine FOR A SHORT WHILE, and then put it away. In a few days, when your emotions have cooled and the bruises begun to fade, look again. In every case there will be something you can take from the rejection: a data concern to address, a theoretical weakness to shore up, a model to consider, a literature and argument to incorporate. You might not agree with them all, but in the context of a discourse – and that is what a paper is, part of a communication process – you need to be aware of and address the counterarguments.

So why do editors do this? We are conscious of careers and incomes riding on our decisions to accept or reject; we know that we sometimes accept poor papers and reject good ones (hopefully those errors are few, but they do exist); and we know that we want to tread a careful line between formative and summative judgement.

In my reading, experience, and thinking on this, I have come across a set of common factors. These, along with my own perspectives, I outline below, in absolutely no order of importance.

Clarity: the paper needs to tell us what it is doing. If a host of good ideas are all crowding each other out, then within the confines of a modern journal article this is going to present a problem. Without getting into salami slicing, where a host of papers are created from one base, each differing only minutely from the others, the rubric of “one for one” – one major idea per paper – is one to live by. That way you can present a tightly argued, clear, organised paper. The other ideas go in other papers. Ask yourself: is this tightly coherent?

Fit: it still astonishes me how often I see papers that are simply not within the aims and scope of the journal. How hard can it be to check whether similar papers have been published in the last few years, to read the journal homepage, or perhaps even to email the editor or an associate editor? Again, this is not to say that journals shouldn’t go outside the box a little – perhaps they even have a responsibility to – but sending a theory paper to an empirical journal, or a paper on international trade to one focusing on corporate finance, suggests sloppy preparation and a lack of clarity. Check that it fits.

Contribution: I mentioned salami slicing. In empirical papers this most often appears where one or two variables or approaches are changed and a new paper produced: one paper uses one methodology and another a similar one, with essentially the same set of explanatory variables. Catching this is really down to referees and editors, which is where the dreaded “robustness checks” come in. Let the reader feel they have learned something.

Triviality: some things, if not known (can anything really be known in social science?), are well accepted. A paper that demonstrates already well-established findings in another setting is hard to publish. In my area this usually manifests as a paper that takes a concept or finding from developed or, increasingly, emerging markets, applies it to a frontier market, and finds the same results. Salami slicing works this way too – or rather, it doesn’t. Give the reader a solid reason for reading the paper.

Coherence: some papers are a mess. There is a good reason for the conventional layout – again, this is in my area: introduction, previous literature, data and methodology, findings, robustness checks, conclusions and recommendations. It aids the writer and, more importantly, the reader in understanding the flow of the paper. Too many or too few sections, lack of integration across them, a sense of multiple voices rather than one – all these make a paper hard to read and hard to understand. Remember, this is a discourse, a communication. Make the paper clear.

Completeness: some papers are simply not complete. For most publishers now there is a technical screening before a paper hits the editor: are the manuscript, tables, figures, data, etc. all in the submission? Has it passed the plagiarism screening? Is it legible? Sometimes people simply forget to include material. It is uncommon, but not unknown, to see papers containing <to be added – Jim> or something similar. If it’s not complete, it’s not going anywhere. Complete the paper.

Legibility: at times I feel like channelling Samuel L. Jackson discussing linguistics with Brett in Pulp Fiction. English is overwhelmingly the language of academic publishing. If the language is poorly structured, riddled with errors syntactical and lexical, then the paper is going to be rejected. Get it proofread, even if you are a native English speaker.

Correctness: a paper needs to be very well constructed, especially if it is going to challenge established wisdom, and to leave the reader feeling that yes, there is a solid challenge here. If the paper misses a whole pile of literature, has bad statistics, draws overambitious conclusions from fuzzy data, or is in general riddled with poor science, then it’s going to go down. Alternative perspectives are great, but being wrong is an alternative to being right. Check your science.

Strength: this is often an issue when papers are from junior researchers or are driving forward a new area. At the end we want to know: so now what do we do, or where do we go? If the paper can’t tell us that – perhaps because of some of the other issues noted here, or because it took too long or too rambling a route to get to the point – then it is not going to prosper. Make it strong but grounded.

Replicability: data integrity and replicability are becoming key concerns of journal editors. Some journals have adopted a policy of having data and commands deposited alongside the paper. In general, however, the paper should be complete enough in its descriptions that someone with the same or similar data can reproduce the flow. Explain what data you used, where it was sourced, what cleaning was done, and so on; outline the theoretical steps; explain the experiments. Many of these explanations, which can be quite long, can now be placed in supplemental appendices, and should be. That way the paper itself can be short and pointed, and the interested replicator can go to the appendices for detail. If there is a sense that the work can’t be replicated, then the paper is incomplete and poorly written, and it will crash. Make it reproducible.

Courtesy: the academy is quite small once you get into paper writing and reviewing. I have had occasion to reject a paper from a journal knowing – because I had been the reviewer just two weeks before – that the authors had made no effort to address my previous concerns. That doesn’t mean agreeing with them; it does mean addressing them. Sending a literally identical paper to one journal after another as it is sequentially rejected will get you a bad reputation, and you WILL meet, as editors or other gatekeepers, people whose views you have blown off. Address the concerns.

Bad Luck: ideas and topics go in and out of vogue. It is not uncommon to see two or more similar papers addressing similar areas submitted at much the same time. In that case there is an element of luck. Generally I will try to track back, via working paper dates, to see who has some claim on priority. This, by the way, is another reason why working papers and conference presentations are useful: they show intellectual priority. At any rate, Solomonic judgements are sometimes required. Be swift, but sure, I suggest.

I found the following sites useful in preparing the above:

http://thebjps.typepad.com/my-blog/2015/01/deskrejectionfrench.html

http://patthomson.net/2013/05/20/seven-reasons-why-paper-are-rejected-by-journals/

http://patthomson.net/2011/09/11/one-reason-why-journal-articles-get-rejected/

http://www.elsevier.com/connect/8-reasons-i-rejected-your-article

http://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Doctoral_Resources/Daft_Why_I_recommended_your_manuscript_be_rejected.pdf

http://robjhyndman.com/hyndsight/quick-rejection/

http://dailynous.com/2015/01/22/reasons-you-rejected-a-paper/

http://www.theenglishedition.com/wordpress/?p=138

http://www.deakin.edu.au/__data/assets/pdf_file/0011/269831/reasons_papers_rejected-_24.08.pdf

http://www.editage.com/insights/most-common-reasons-for-journal-rejections

http://robjhyndman.com/hyndsight/ijf-rejections/

http://www.rcjournal.com/contents/10.04/10.04.1246.pdf

http://www.sfedit.net/rejection.pdf