John Ioannidis, a professor at Stanford University and one of the most highly cited researchers in the world, has come up with some startling figures about meta-analyses. His new paper, published today in Milbank Quarterly (accompanied by this commentary), suggests that the numbers of systematic reviews and meta-analyses in the literature have each increased by more than 2500% since 1991. We asked Ioannidis — who is perhaps best known for his 2005 paper “Why Most Published Research Findings Are False” (and was featured in a previous Retraction Watch Q&A article) — why such a massive boost in these publication types in the scholarly literature is potentially harmful.

Retraction Watch: You say that the numbers of systematic reviews and meta-analyses have reached “epidemic proportions,” and that there is currently a “massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses.” Indeed, you note the number of each has risen more than 2500% since 1991, often with more than 20 meta-analyses on the same topic. Why the massive increase, and why is it a problem?

John Ioannidis: The increase is a consequence of the higher prestige that systematic reviews and meta-analyses have acquired over the years, since they are (justifiably) considered to represent the highest level of evidence. Many scientists now want to do them, leading journals want to publish them, and sponsors and other conflicted stakeholders want to exploit them to promote their products, beliefs, and agendas. Systematic reviews and meta-analyses that are carefully done and that are done by players who do not have conflicts and pre-determined agendas are not a problem, quite the opposite. The problem is that most of them are not carefully done and/or are done with pre-determined agendas on what to find and report.

RW: According to your paper, these types of papers have become “easily produced publishable units or marketing tools.” What do you mean by that? How should systematic reviews and meta-analyses be used?

JI: In the past, a company that wanted to promote its products had to get a number of opinion makers — i.e., prestigious academics — to give talks and write editorials and other expert pieces about these products. As expert-based medicine of this sort declined and randomized trials became more influential, a company shifted its preference to trying to manipulate randomized trial results so as to convince the community. Now that systematic reviews and meta-analyses have become even more highly recognized than randomized trials, emphasis is shifting to dominating the results and conclusions of systematic reviews and meta-analyses. So, this has become the latest marketing tool, still serving expert-based medicine in essence. Moreover, given that the methods of performing systematic reviews have become more widespread and easier to apply (actually mis-apply, most of the time), lots of authors see systematic reviews and meta-analyses as a way to build a CV.

RW: Are there better ways to present systematic reviews? What about as living documents on the internet to which new results can be added, as a commentary accompanying your new paper proposes?

JI: In principle, any new study should start from a systematic review of what we already know (to even justify the need for its conduct and its design) and should end with an updating of what we know after this new study, again in the systematic review framework. So, I am in favor of this concept. At the same time, I feel that we have missed a great opportunity, because we keep thinking about systematic reviews and meta-analyses as retrospective assessments of past evidence. We should think of them more as prospective integration of evidence for larger research agendas that are prospectively designed. They should become the main type of primary research, rather than just trying to compile fragments of published, distorted, selectively available information. Fields that have promoted consortia and multi-team collaborative research are already well acquainted with this approach.

RW: What can universities, journal publishers and individual academics do to tackle “unnecessary, misleading, and/or conflicted” reports?

JI: As in any type of research, it is an issue of defending quality and high standards. We see problems of low quality and low standards in almost any type of research, so systematic reviews are not an exception. The reward and incentives system can play a key role in encouraging or discouraging some types of scientific behavior, and universities as well as journals are key gatekeepers. Individual academics can also aim to get better training in proper methods for conducting meta-analyses, and set higher goals in the systematic reviews and meta-analyses that they conduct.
