1. Introduction

Academic publishing is full of challenges and is in a dynamic state of change. One of the most challenging of these is the issue of “predatory” publishing, a concept that came to the fore of the discussion on academic publishing when a US librarian, Jeffrey Beall, started a blog that denounced what he perceived to be unscholarly activities by select open access (OA) journals and publishers. Beall referred to such entities as “predatory” OA journals and publishers, hereafter POAJs and POAPs, respectively. As Beall recorded more and more cases, his blog rapidly gained popularity. Beall created two blacklists that would prove to be the subject of great controversy, and of both praise and criticism: one for POAJs and a separate one for POAPs. Using a set of established criteria, which were broad and qualitative in nature, Beall began to list hundreds of new POAJs and POAPs annually, and his list of POAPs exceeded 1000 entries at the beginning of 2017. The publication of those blacklists may have been one partial reason for the shuttering of Beall’s blog, as was suggested by Beall himself [1]. Before then, Beall had called for a ban on such POAJs and POAPs [2], but was cautious in referring to them widely and loosely as “potentially, possible or probable predatory”. However, entities sensu lato profiled on that blog or associated with either blacklist may have suffered reputational damage.
Rather than weighing the merit or demerit of each individual paper, select academics started a new trend in which papers were judged by the venue in which they were published, using Beall’s blacklists of POAJs and POAPs as the guiding measure. Reputational damage caused by the existence and misuse of blacklists is one of their caveats [3].

Beall developed a first, second and third edition of criteria to determine “potential, possible, or probable” POAJs and POAPs. Each edition was presumably created to have increased power to detect a truly predatory journal. This power is analogous to the sensitivity rate of screening tests for diseases in the field of medicine [4]. In medicine, the sensitivity rate is the power of a screening test to identify a particular disease in a sample of patients who have the illness (i.e., a true positive). In this framework, it is the power of the criteria to detect predatory behavior in a sample of journals that are in fact predatory. However, even the third edition was not 100% sensitive and could not detect all POAJs and POAPs. In addition, screening tests, and hence criteria, should also have the power to identify truly non-predatory entities and exclude them from a blacklist. This is the specificity rate in medicine [4]: the power of a test to not detect a particular disease in a sample of patients who do not have the disease (i.e., a true negative). In this framework, it is the power of the criteria to identify non-predatory journals and publishers in a sample of journals that are in fact non-predatory. A reduction in the specificity rate increases the rate at which non-predatory journals and publishers are erroneously included on a blacklist based on the criteria (i.e., a false positive). A false positive, i.e., accepting that a journal or publisher is “predatory” (i.e., a POAJ or a POAP) when it is not, is very possible given the general and possibly erroneous nature of Beall’s criteria. Olivarez et al.
[5] stated: “An evaluator might label a scholarly journal as “predatory” when it is not, or may disregard the article as being without merit solely because it is published in a journal found on Beall’s List.” Their study found that, of 81 well-regarded academic journals in the field of library and information science, 45 (i.e., over 50%) were classified as “predatory” under Beall’s criteria by an independent panel of three experts, yet were academically valid according to those same subject experts. This shows a very low specificity rate, and hence a very high false positive rate, when Beall’s criteria are used to assess deceptive practices of publishers and journals. That study confirmed that Beall’s lists were highly flawed and thus misleading, fortifying earlier claims [6–8]. A very recent study [9] concluded that the “common views about predatory journals (e.g., no peer review) may not always be true, and that a grey zone between legitimate and presumed predatory journals exists”.
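To make the screening-test analogy concrete, the rates above can be computed from simple counts. The following is a minimal illustrative sketch in Python; the helper function `screening_rates` is hypothetical (not from any cited study), and the numbers are those reported by Olivarez et al. [5]: 81 legitimate journals, 45 of which were flagged.

```python
def screening_rates(tp, fn, tn, fp):
    """Sensitivity, specificity, and false positive rate from raw counts.

    tp/fn: predatory journals correctly flagged / missed
    tn/fp: legitimate journals correctly cleared / wrongly flagged
    """
    # Sensitivity is undefined when the sample contains no predatory journals.
    sensitivity = tp / (tp + fn) if (tp + fn) else None
    specificity = tn / (tn + fp)          # power to clear legitimate journals
    false_positive_rate = fp / (tn + fp)  # = 1 - specificity
    return sensitivity, specificity, false_positive_rate

# Olivarez et al. [5]: all 81 sampled journals were legitimate, yet 45 were
# flagged by Beall's criteria, so only the specificity side can be estimated.
_, specificity, fpr = screening_rates(tp=0, fn=0, tn=81 - 45, fp=45)
print(f"specificity = {specificity:.1%}, false positive rate = {fpr:.1%}")
# prints: specificity = 44.4%, false positive rate = 55.6%
```

On this sample, the criteria clear only 36 of 81 legitimate journals, which is the low specificity (and high false positive rate) discussed in the text.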

In a 2016 blog entry, Walt Crawford found that Beall’s blacklists were unreliable and flawed [10], a notion supported by Teixeira da Silva [6,7]. Crawford argued that those blacklists were based on circumstantial evidence and on the concept of “just because”, and were constructed on the basis of “trust me”. Crawford critically assessed Beall’s blog from 2012 to January 2016 and, across all posts, could find some discussion for only 230 (53 journals and 177 publishers) of the 1834 journals and publishers on Beall’s 2016 POAJ and POAP blacklists. Hence, no clear criteria or independently verifiable reasons could be found for the inclusion of 87.5% of the journals and publishers on Beall’s blacklists. This massive discrepancy suggests how misleading and erroneous Beall’s blacklists were.

Crawford created a list of these 230 OA journals and publishers and, based on the discussions in Beall’s blog from 2012–2016, classified them as being either “no”, “weak”, “maybe” or “strong” cases of deceptive practices. Crawford concluded that five were “no” cases (2.2%), 69 were “weak” cases (30%), 43 were “maybe” cases (18.7%) and 113 were “strong” cases (49.1%). These findings are important because they show how unreliable Beall’s blacklists were, and that any research based on these lists, such as a 2017 paper by Derek Pyne at Thompson Rivers University in the Journal of Scholarly Publishing (JSP; University of Toronto Press) [11], would likely be extremely faulty. This risk was clearly stated by Crawford on his blog.
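As a quick arithmetic check, Crawford’s four categories can be turned back into shares of the 230 discussed entries, and the undocumented share of the full blacklists recovered. A minimal Python sketch, using only the counts reported above (the dictionary layout is illustrative):

```python
# Crawford's classification of the 230 OA journals and publishers that
# received any discussion on Beall's blog (2012-2016).
cases = {"no": 5, "weak": 69, "maybe": 43, "strong": 113}

total = sum(cases.values())
assert total == 230  # the four categories account for every discussed entry

for label, count in cases.items():
    print(f"{label:>6}: {count:3d} ({count / total:.1%})")

# Of all 1834 blacklisted entries, only these 230 were discussed at all:
undocumented_share = (1834 - 230) / 1834
print(f"undocumented: {undocumented_share:.1%}")
# prints: undocumented: 87.5%
```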

However, the influence of Beall and his blacklists was profound. Beall retired in early 2018. During the several years in which he dominated this topic, Beall was widely cited, whether via his blog or his own publications. Now, after the closure of his blog [12], an important post-publication peer review (PPPR) of the literature that Beall published, as well as of the literature that cited him, his blog, or his blacklists, is needed. In essence, papers that used Beall’s blacklists of POAJs and POAPs may have made a serious methodological error by transferring Beall’s false positives into their research, compounding whatever false positives they faced in their own research. Since Beall’s blacklists carry false positives, any methodology that used or relied on them could be automatically flawed by association [3]. Thus, studies that used Beall’s blacklists for any quantitative analyses may be intrinsically flawed and should be subject to careful scrutiny and, if necessary, correction.

Within the PPPR analysis of Beall-influenced literature, we discuss a paper that was covered by many media outlets. A study (hereafter, “the Study”) by Pyne [11] made the first ever claim that academics were being remunerated, and thereby rewarded, for “predatory publications”, i.e., papers published in Beall-listed POAJs and POAPs. The Study claimed that researchers at a small business school in Canada were being financially rewarded for having published “predatory publications”, a claim that was refuted by Tsigaris in January 2019 [13].

The Study drew several conclusions about the link between research faculty members at the small business school and “predatory publications”. However, in this paper, we focus on the following two:

“The majority of faculty with research responsibilities at a small Canadian business school have publications in predatory journals” (abstract, p. 137).

“Even honest researchers make mistakes and can be fooled into publishing in predatory journals. However, when researchers demonstrate a pattern of publishing in such journals, suspicions increase.” “It can be seen that 75 per cent of traditional faculty who have predatory journal publications have more than one such publication. Moreover, traditional faculty who have predatory publications have, on average, 4.3 predatory publications.” (p. 150).

The Study concluded that many research faculty members at a small Canadian business school published in “predatory” journals. In fact, the Study used the expression “predatory” over 100 times, and never used the term “potentially, possible or probable predatory”, as Beall had suggested. Was this done to create an illusory truth effect [14]? Beall had to be cautious about the way he created and advertised his blacklists to reduce legal liability, so he used the terms “potential, possible, or probable predatory publishers and journals”, which can range from not predatory at all to highly predatory, always implying doubt about any entry on those POAJ and POAP blacklists. Despite this, Beall still encouraged faculty not to publish in these journals. Furthermore, a huge gray area separates poorly managed, start-up, or academically questionable journals and publishers from journals whose only objective is to exploit authors, categories that Beall failed to clearly separate.