Should this 'story' be one publication? Or can the work be broken into distinct publications? Should it be broken up? In a perfect world, what additional information would be obtained before publication? Which claims (conclusions, hypotheses) in the story have strong support and which are new ideas, perhaps with little direct support? The purpose is to have students consider what constitutes a timely publishable unit of information in a particular field and how the ongoing contribution of new ideas and partial proofs stimulates work in the field.

Who will be an author? If there is just one publication, whose dataset merits first authorship? The purpose is to discuss the realities of authorship, the need for both students and postdocs to have 'rights' to their own work, and the impact on careers of a single publication in which most of the participants are reduced to et al.

Ask the lab to provide a timeline of when particular projects were started, what tools or new information became available during the project, and whether these were incorporated into the study. The purpose of this exercise is to teach realism when reviewing: were the questions posed and the methods used timely, and were they updated appropriately within a reasonable span before the manuscript was submitted?

As a class exercise, discuss how the project would be formulated today given the 'best techniques' and information now available. Compare the reality with a design that takes full advantage of all new information and techniques.

Compare the costs of the actual path to the information with those of the best possible approach, in terms of both human effort and materials. Would the best approach require a genome project or another large-scale effort outside the scope of most labs?

Consider the possibilities of partnerships to conduct the best possible study versus individual lab efforts (even the efforts of individual researchers). Would the field be best served by waiting for funding for the 'best' project? Would training be better served by individual or large group projects?

If those submitting manuscripts are honest – and most of us are our own best critics – about the timeliness and completeness of the work (given constraints of time, effort, and funding), and share the intent to make a solid contribution on an important question, then what we ask of reviewers is that they consider this context in writing the review. Sure, it's easy to trash a manuscript for missing a paper published online this week, or for failing to spend a million dollars to obtain a proteome of the cell types in question – but is this realistic?

The trend toward reading manuscripts in PDF format on a screen also means that it's tempting to start typing comments without first considering the manuscript as a whole – perhaps the issue so bothering you 'right now' is actually addressed in a subsequent section, perhaps even in the Materials and methods, now shuttled to the end of nearly every manuscript. With paper manuscripts, most reviewers read the entire thing – perhaps dragging it around town for days – and then sat down and composed a review with the perspective of a complete reading. Those old enough to remember paper manuscripts arriving in bulky packages in the mail may have learned better habits of scholarship, imposed by the medium. Now it's up to all of us to teach 'best reviewing practices' to our students and postdocs, and to use them ourselves.