So far, we have considered the review times of a single reviewer. However, editors usually need more than one review to judge whether to publish an article. In the case of our data from JSCS, the sub-editor aimed for two reviews per article and sent invitations to five reviewers on average: one known and four other reviewers. While this strategy indeed resulted in two reviews per article on average (2.34 to be exact), 9 articles were published after receiving only one review, 24 after 2 reviews, 21 after 3 reviews and 4 after 4 reviews. This discrepancy between the target number of reviews and the number of reviews actually received stems from the difference between known and other reviewers in the probability of finishing a report. We will call this probability the completion rate.

Using partial distributions, we can easily simulate the effects of any editorial strategy and find the number of reviewers needed to achieve a given number of reviews per article. We use the average time of receiving two reviews as a measure of the effectiveness of each strategy. Figure 9 shows these average times, as a function of the number of invited reviewers, under the assumption that an invited reviewer always writes the report (i.e., the completion rate equals 1 for both known and other reviewers). The average time decreases as the number of reviewers increases, and the results for known and other reviewers are very similar. This is intuitive and consistent with our prediction made in "Review time" section.
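The quantity plotted in Fig. 9 amounts to the expected second order statistic of the review-time distribution across the invited reviewers. A minimal Monte Carlo sketch of this idea (not the study's actual code; the log-normal distribution and its parameters are purely illustrative assumptions, not values fitted to the JSCS data) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def time_to_two_reviews(n_reviewers, n_trials=10_000, mu=3.4, sigma=0.8):
    """Average time until the second review arrives when all of the
    n invited reviewers finish their reports.  Review times are drawn
    from a log-normal distribution; mu and sigma are illustrative
    values, not parameters fitted to the JSCS data."""
    times = rng.lognormal(mu, sigma, size=(n_trials, n_reviewers))
    second = np.sort(times, axis=1)[:, 1]  # second-fastest review per trial
    return second.mean()

for n in (2, 3, 5, 10):
    print(n, round(time_to_two_reviews(n), 1))
```

Under any such distribution, the average waiting time for the second review decreases monotonically with the number of invited reviewers, reproducing the qualitative shape of Fig. 9.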

Fig. 9 Average time of acquiring two reviews for known (empty circles) and other (filled black circles) reviewers when all reviewers finish their reviews

The assumption that an invitation always results in a report is not realistic. To take into account the fact that the actual completion rate of a single reviewer is smaller than 1, especially for other reviewers, some additional strategy is needed to handle cases in which two reviews are never received. In our simulations, we adopted a simple strategy: if two reviews are not received, invitations are resent to the same number of reviewers, and this procedure is repeated until two reports are obtained in total. While this is not the most time-efficient strategy we would suggest to editors, it allows us to study the consequences of the difference between the completion rates of known and other reviewers.
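The resend policy described above can be sketched as follows. This is a simplified reading of the procedure, not the study's code: the log-normal review times and the fixed delay before a new round of invitations (`resend_after`) are hypothetical modelling choices introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_time_two_reviews(n_reviewers, completion_rate, resend_after=90.0,
                         n_trials=5_000, mu=3.4, sigma=0.8):
    """Average time to collect two reviews when each invited reviewer
    finishes only with probability `completion_rate`.  If a round of
    invitations yields fewer than two reviews in total, invitations
    are resent to the same number of reviewers.  The log-normal
    review times and the fixed delay between rounds (`resend_after`)
    are hypothetical assumptions, not details taken from the study."""
    totals = np.empty(n_trials)
    for t in range(n_trials):
        arrivals, offset = [], 0.0
        while len(arrivals) < 2:
            finished = rng.random(n_reviewers) < completion_rate
            times = rng.lognormal(mu, sigma, n_reviewers)[finished]
            arrivals.extend(offset + times)  # shift by the round's start time
            offset += resend_after
        totals[t] = np.sort(arrivals)[1]  # arrival of the second review
    return totals.mean()

# same number of invitations, sample completion rates for the two groups
for rate, label in ((0.89, "known"), (0.31, "other")):
    print(label, round(avg_time_two_reviews(2, rate), 1))
```

With a low completion rate, most of the waiting time is spent on repeated rounds of invitations, which is what drives the gap between the two reviewer types in the next figure.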

Figure 10 is analogous to Fig. 9 in that it shows the average time of receiving two reviews, but this time we used the actual completion rates taken from the sample (89 % for known and 31 % for other reviewers) and employed the resend policy described in the previous paragraph. As can be clearly seen, the difference in completion rates between known and other reviewers results in completely different dynamics. Other reviewers are far less effective, and their average review time is much higher: for example, two reviews can be received from 2 known reviewers after 32 days, whereas 2 other reviewers deliver the set of two reviews only after 70 days. Even as the number of reviewers increases, this difference remains significant.

Fig. 10 Average time of acquiring two reviews for known (empty circles) and other (filled black circles) reviewers with completion rates taken into account. The filled polygon represents the standard deviation

However, in "Review time" section we have shown that the distributions of review time for known and other reviewers are very similar, which suggests that the completion rate is the leading factor in the review process. This claim is partially supported by the results presented in Fig. 9. If the claim is indeed valid, then one known reviewer should be "worth" 89/31 ≈ 2.9 other reviewers and, conversely, one other reviewer should be "worth" 31/89 ≈ 0.35 known reviewers. By "worth" we mean that proportionally substituting one type of reviewer for the other should yield the same results. Figure 11, in which the X axis for one type of reviewer was rescaled to match their worth in the other type, confirms this prediction. The average numbers of days after which two reviews are acquired are similar, and the standard deviations, while not exactly the same (which is to be expected), are comparable.

Fig. 11 Same as Fig. 10 but with the X axis rescaled for other reviewers

So far, we have studied known and other reviewers separately. However, as explained in "Review process and initial data analysis" section, the group of reviewers invited to review an article usually contains reviewers of both kinds. Figure 12 shows the average time of acquiring two reviews when reviewer types are mixed in different proportions. As one could expect, the average time decreases with the increasing total number of reviewers, and known reviewers are far more effective than other reviewers. Still, by rescaling the X axis, that is, by expressing the worth of one kind of reviewer in terms of the other, we obtain similar results (Fig. 13).
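Extending the single-type simulation to mixed groups only requires assigning each invited reviewer a completion rate. In the sketch below, the completion rates (0.89 for known, 0.31 for other reviewers) come from the sample; the log-normal review times and the fixed resend delay remain illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_time_mixed(n_known, n_other, resend_after=90.0,
                   n_trials=5_000, mu=3.4, sigma=0.8):
    """Average time to collect two reviews from a mixed group of
    reviewers.  Completion rates (0.89 known, 0.31 other) are taken
    from the sample; the log-normal review-time parameters and the
    fixed resend delay are illustrative assumptions."""
    rates = np.array([0.89] * n_known + [0.31] * n_other)
    totals = np.empty(n_trials)
    for t in range(n_trials):
        arrivals, offset = [], 0.0
        while len(arrivals) < 2:
            finished = rng.random(rates.size) < rates
            arrivals.extend(offset + rng.lognormal(mu, sigma, rates.size)[finished])
            offset += resend_after
        totals[t] = np.sort(arrivals)[1]  # arrival of the second review
    return totals.mean()

# e.g. the JSCS sub-editor's strategy of 1 known + 4 other reviewers
print(round(avg_time_mixed(1, 4), 1))
```

Sweeping `n_known` and `n_other` over a grid reproduces the family of curves in Fig. 12, with the known-only configuration as the lower envelope.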

Fig. 12 Average time of acquiring two reviews for a group of mixed reviewers. The X axis shows the total number of reviewers. Curves correspond to various numbers of known reviewers: 0 known (top curve) to 10 known (bottom curve)

Fig. 13 Same as Fig. 12 but with rescaled X axis

Information about average times in groups of mixed reviewers, expressed in a slightly different way in Fig. 14 and summarised in Table 1, can be of great importance to editors and act as a guide in determining the optimal number of reviewers. For example, in order to receive two reviews after about 30 days, one needs to invite 7 other reviewers, 2 known reviewers, or a mixed group of 4 other and 1 known reviewer. This last option is consistent with the choice made by the sub-editor of JSCS who provided us with the data.

Fig. 14 Average time of acquiring two reviews for a group of mixed reviewers

Table 1 Average number of days needed to receive two reviews from a group of reviewers with a given number of known (columns) and other (rows) reviewers

It is important to note that editors may be tempted to invite only known reviewers, which would lead to shorter review times. However, such a policy would be not only unrealistic but also inadvisable. The pool of potential known reviewers is limited, and editors would be forced to invite the same reviewers several times within a short time frame. This, in turn, could discourage those reviewers and make them more likely to decline invitations, further reducing the pool. This suggests that the process of selecting reviewers could be modelled as an optimisation problem within an agent-based simulation framework (in which other factors, e.g. the quality of reviewers (Ausloos et al. 2015), could be taken into account); however, we leave this to future studies.