Quality of systematic reviews is poor, our fault, our responsibility

JBI Database of Systematic Reviews and Implementation Reports
August 2017 – Volume 15 – Issue 8

Campbell, Jared M.
JBI Database of Systematic Reviews and Implementation Reports. 15(8):1977-1978, August 2017.
Meta-research is research that is carried out with existing research as the subject of investigation. As systematic reviews – themselves a form of meta-research – have become more widespread, they in turn have come to the attention of meta-research as available subject matter (meta-meta-research, perhaps?). Researchers’ fascination with their own “meta” may be viewed by some as amusing (meta-meta-meta-research!); however, these meta endeavours have uncovered some worrying findings.

While exceptions exist, chiefly in high impact1,2 and systematic review specific journals,3 the conduct, reporting and publication of systematic reviews of poor quality is prevalent to the point of being the norm rather than the exception.4-7 Worryingly, despite the growing prominence of explicit guidelines (like the PRISMA statement8 and the AMSTAR checklist9) as well as the expanding profile of evidence-based practice organisations that focus on systematic reviews (Cochrane, the Campbell Collaboration and the Joanna Briggs Institute), the average quality of systematic reviews in many areas has not meaningfully improved over time,10,11 or has even worsened.12

Considering this state of affairs, it seems reasonable to suggest that although evidence-based practice organisations have succeeded in evangelising the importance of systematic reviews, they have not been successful at stressing the importance of reviews being conducted and reported in a thorough and rigorous manner. In this way they have inadvertently contributed to the growing number of poor quality and unreliable systematic reviews, despite their direct and persistent attempts to the contrary.

Organisations and individuals that are responsible for spreading the popularity of systematic reviews also hold responsibility for safeguarding their quality. As mentioned, systematic review specific journals do an excellent job of enforcing the rigor of reviews published in their own pages, and high impact journals have likewise succeeded in setting the bar high. These types of publications do not have to be exceptions, however. Those of us who most frequently carry out and publish systematic reviews are the most likely to be invited to act as peer reviewers for them. Peer review therefore gives us both the opportunity and the responsibility to act directly to improve the quality of published systematic reviews. Detailed guidance on the proper conduct and reporting of systematic reviews of diverse types is readily available,3,9,13-15 along with useful review management tools that can be accessed free of charge (RevMan, Covidence). It is therefore indefensible for an article labelled as a systematic review yet lacking basic components of the process (i.e. a registered protocol, critical appraisal, or a detailed and comprehensive search) to be considered a serious candidate for publication.

In our capacity as peer reviewers, editors or authors, the quality of systematic reviews is not an area where compromise should be viewed as acceptable. Standards have been agreed upon and set. If systematic reviews are to deserve their status as the preferred resource for informing evidence-based care, those standards must be upheld.