Systematic Literature Review or Literature Review?


As a researcher, you may be required to conduct a literature review. But what kind of review do you need to complete? Is it a systematic literature review or a standard literature review? In this article, we’ll outline the purpose of a systematic literature review, the difference between a literature review and a systematic review, and other important aspects of systematic literature reviews.

What is a Systematic Literature Review?

The purpose of a systematic literature review is simple: to provide a high-level synthesis of the evidence on a particular research question. The question itself is highly focused, and the review of the literature is matched to it; a typical example is a focused question about medical or clinical outcomes.

The components of a systematic literature review are quite different from those of the standard literature review most of us are used to (more on this below). And because of the specificity of the research question, a systematic literature review typically involves more than one primary author. There is more work involved in a systematic literature review, so it makes sense to divide it among two or three (or even more) researchers.

Your systematic literature review will follow very clear and defined protocols that are decided on prior to any review. This involves extensive planning, and a deliberately designed search strategy that is in tune with the specific research question. Every aspect of a systematic literature review, including the research protocols, which databases are used, and dates of each search, must be transparent so that other researchers can be assured that the systematic literature review is comprehensive and focused.

Systematic literature reviews originated in medical science, but they now address any evidence-based research question. In addition to the focus and transparency of these types of reviews, a quality systematic literature review also includes:

  • Clear and concise review and summary
  • Comprehensive coverage of the topic
  • Accessibility and equality of the research reviewed

Systematic Review vs Literature Review

The difference between a literature review and a systematic review comes down to the initial research question. Whereas the systematic review is very specific and focused, the standard literature review is much more general. The components of a literature review, for example, are similar to those of any other research paper: an introduction, a description of the methods used, a discussion and conclusion, and a reference list or bibliography.

A systematic review, however, includes entirely different components that reflect the specificity of its research question, and the requirement for transparency and inclusion. For instance, the systematic review will include:

  • Eligibility criteria for included research
  • A description of the systematic research search strategy
  • An assessment of the validity of reviewed research
  • Interpretations of the results of research included in the review

As you can see, in contrast to a general overview or summary of a topic, a systematic literature review requires much more detail and work to compile than a standard literature review. Indeed, it can take years to conduct and write. But the information that practitioners and other researchers can glean from a systematic literature review is, by its very nature, exceptionally valuable.

This is not to diminish the value of the standard literature review. The importance of literature reviews in research writing is discussed in this article. It’s just that the two types of review answer different questions and, therefore, have different purposes and roles in the world of research and evidence-based writing.

Systematic Literature Review vs Meta-Analysis

It would be understandable to think that a systematic literature review is similar to a meta-analysis. But whereas a systematic review synthesizes several research studies to answer a specific question, a meta-analysis statistically combines the results of different studies to identify any inconsistencies or discrepancies between them. For more about this topic, check out our Systematic Review vs Meta-Analysis article.




How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

Affiliations

  • 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected].
  • 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
  • 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected].
  • PMID: 30089228
  • DOI: 10.1146/annurev-psych-010418-102803

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.




Introduction to Systematic Reviews

  • Reference work entry
  • First Online: 20 July 2022
  • pp 2159–2177


  • Tianjing Li,
  • Ian J. Saldanha &
  • Karen A. Robinson


A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter. We discuss the motivation for and importance of performing systematic reviews and their relevance to trialists. We introduce the key steps in completing a systematic review, including framing the question, searching for and selecting studies, collecting data, assessing risk of bias in included studies, conducting a qualitative synthesis and a quantitative synthesis (i.e., meta-analysis), grading the certainty of evidence, and writing the systematic review report. We also describe how to identify systematic reviews and how to assess their methodological rigor. We discuss the challenges and criticisms of systematic reviews, and how technology and innovations, combined with a closer partnership between trialists and systematic reviewers, can help identify effective and safe evidence-based practices more quickly.




Author information

Authors and affiliations

Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, CO, USA

Tianjing Li

Department of Health Services, Policy, and Practice and Department of Epidemiology, Brown University School of Public Health, Providence, RI, USA

Ian J. Saldanha

Department of Medicine, Johns Hopkins University, Baltimore, MD, USA

Karen A. Robinson


Corresponding author

Correspondence to Tianjing Li .

Editor information

Editors and affiliations

Department of Surgery, Division of Surgical Oncology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA

Steven Piantadosi

Department of Epidemiology, School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Curtis L. Meinert

Section Editor information

Department of Epidemiology, University of Colorado Denver Anschutz Medical Campus, Aurora, CO, USA

The Johns Hopkins Center for Clinical Trials and Evidence Synthesis, Johns Hopkins University, Baltimore, MD, USA

Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA


Copyright information

© 2022 Springer Nature Switzerland AG

About this entry

Cite this entry

Li, T., Saldanha, I.J., Robinson, K.A. (2022). Introduction to Systematic Reviews. In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52636-2_194


DOI: https://doi.org/10.1007/978-3-319-52636-2_194

Published: 20 July 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-52635-5

Online ISBN: 978-3-319-52636-2

eBook Packages: Mathematics and Statistics, Reference Module Computer Science and Engineering




How to write a systematic literature review [9 steps]


What is a systematic literature review?


A systematic literature review is a summary, analysis, and evaluation of all the existing research on a well-formulated and specific question.

Put simply, a systematic review is a study of studies that is popular in medical and healthcare research. In this guide, we will cover:

  • the definition of a systematic literature review
  • the purpose of a systematic literature review
  • the different types of systematic reviews
  • how to write a systematic literature review

➡️ Visit our guide to the best research databases for medicine and health to find resources for your systematic review.

Where are systematic literature reviews used?

Systematic literature reviews can be utilized in various contexts, but they’re often relied on in clinical or healthcare settings.

Medical professionals read systematic literature reviews to stay up-to-date in their field, and granting agencies sometimes need them to make sure there’s justification for further research in an area. They can even be used as the starting point for developing clinical practice guidelines.

What types of systematic literature reviews are there?

A classic systematic literature review can take different approaches:

  • Effectiveness reviews assess the extent to which a medical intervention or therapy achieves its intended effect. They’re the most common type of systematic literature review.
  • Diagnostic test accuracy reviews produce a summary of diagnostic test performance so that their accuracy can be determined before use by healthcare professionals.
  • Experiential (qualitative) reviews analyze human experiences in a cultural or social context. They can be used to assess the effectiveness of an intervention from a person-centric perspective.
  • Costs/economics evaluation reviews look at the cost implications of an intervention or procedure, to assess the resources needed to implement it.
  • Etiology/risk reviews usually try to determine to what degree a relationship exists between an exposure and a health outcome. This can be used to better inform healthcare planning and resource allocation.
  • Psychometric reviews assess the quality of health measurement tools so that the best instrument can be selected for use.
  • Prevalence/incidence reviews measure both the proportion of a population who have a disease (prevalence) and how often new cases of the disease occur (incidence).
  • Prognostic reviews examine the course of a disease and its potential outcomes.
  • Expert opinion/policy reviews are based around expert narrative or policy. They’re often used to complement, or in the absence of, quantitative data.
  • Methodology systematic reviews can be carried out to analyze any methodological issues in the design, conduct, or review of research studies.

How to write a systematic literature review

Writing a systematic literature review can feel like an overwhelming undertaking. After all, one can often take 6 to 18 months to complete. Below we’ve prepared a step-by-step guide on how to write a systematic literature review.

  • Decide on your team.
  • Formulate your question.
  • Plan your research protocol.
  • Search for the literature.
  • Screen the literature.
  • Assess the quality of the studies.
  • Extract the data.
  • Analyze the results.
  • Interpret and present the results.

1. Decide on your team

When carrying out a systematic literature review, you should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.

You may also need to team up with a librarian to help with the search, literature screeners, a statistician to analyze the data, and the relevant subject experts.
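The reviewer setup described above (two screeners, with a third to break ties) can be sketched as a small decision rule. This is a hypothetical illustration; the function name and shape are ours, not part of any screening tool:

```python
def screening_decision(reviewer_a, reviewer_b, tiebreaker):
    """Include a study when both reviewers agree; otherwise defer to the tiebreaker."""
    if reviewer_a == reviewer_b:
        return reviewer_a  # unanimous decision stands
    return tiebreaker      # disagreement goes to the third reviewer

# Reviewers disagree on one study, so the tiebreaker's vote decides.
print(screening_decision(True, False, tiebreaker=True))   # True: study is included
print(screening_decision(False, False, tiebreaker=True))  # False: both excluded it
```

In practice, teams also record these disagreements, since the level of inter-rater agreement is often reported in the finished review.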

Define your answerable question. Then ask yourself, “has someone written a systematic literature review on my question already?” If so, yours may not be needed. A librarian can help you answer this.

You should formulate a “well-built clinical question.” This is the process of generating a good search question. To do this, run through PICO:

  • Patient, Population, or Problem/Disease: who or what is the question about? Are there factors about them (e.g. age, race) that could be relevant to the question you’re trying to answer?
  • Intervention: which main intervention or treatment are you considering for assessment?
  • Comparison(s) or Control: is there an alternative intervention or treatment you’re considering? Your systematic literature review doesn’t have to contain a comparison, but you’ll want to decide either way at this stage.
  • Outcome(s): what are you trying to measure or achieve? What’s the wider goal for the work you’ll be doing?
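Before you finalize the wording, the four PICO components can be drafted as a simple data structure. The sketch below is illustrative only; the class and field names are our own invention, not part of any standard tool.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Hypothetical container for the four PICO components."""
    population: str
    intervention: str
    comparison: str
    outcome: str

    def as_question(self) -> str:
        # Standard PICO template: effectiveness of I versus C for O in P.
        return (f"What is the effectiveness of {self.intervention} "
                f"versus {self.comparison} for {self.outcome} "
                f"in {self.population}?")

q = PICOQuestion(
    population="patients with eczema",
    intervention="probiotics",
    comparison="placebo",
    outcome="reducing eczema symptoms",
)
question = q.as_question()
```

Writing the components down separately like this makes it easy to spot a missing comparison or a vague outcome before the search begins.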

Now you need a detailed strategy for how you’re going to search for and evaluate the studies relating to your question.

The protocol for your systematic literature review should include:

  • the objectives of your project
  • the specific methods and processes that you’ll use
  • the eligibility criteria of the individual studies
  • how you plan to extract data from individual studies
  • which analyses you’re going to carry out

For a full guide on how to systematically develop your protocol, take a look at the PRISMA checklist. PRISMA has been designed primarily to improve the reporting of systematic literature reviews and meta-analyses.

When writing a systematic literature review, your goal is to find all of the relevant studies relating to your question, so you need to search thoroughly.

This is where your librarian will come in handy again. They should be able to help you formulate a detailed search strategy, and point you to all of the best databases for your topic.

➡️ Read more on how to efficiently search research databases.

The places to consider in your search are electronic scientific databases (the most popular are PubMed, MEDLINE, and Embase), controlled clinical trial registers, non-English literature, raw data from published trials, references listed in primary sources, and unpublished sources known to experts in the field.

➡️ Take a look at our list of the top academic research databases.

Don’t miss out on “gray literature” sources: those sources outside of the usual academic publishing environment. They include:

  • non-peer-reviewed journals
  • pharmaceutical industry files
  • conference proceedings
  • pharmaceutical company websites
  • internal reports

Gray literature sources are more likely to contain negative conclusions, so you’ll improve the reliability of your findings by including them. You should document details such as:

  • The databases you search and which years they cover
  • The dates you first run the searches, and when they’re updated
  • Which strategies you use, including search terms
  • The numbers of results obtained
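A minimal sketch of what such a search log might look like in practice, written here as a small Python script; the field names and example numbers are assumptions chosen to mirror the documentation checklist above.

```python
import csv
import io
from datetime import date

# Field names are assumptions mirroring the details a protocol should document.
FIELDS = ["database", "coverage", "date_run", "search_terms", "n_results"]

searches = [
    {"database": "PubMed", "coverage": "1966-2024",
     "date_run": date(2024, 1, 15).isoformat(),
     "search_terms": "probiotic* AND eczema", "n_results": 412},
    {"database": "Embase", "coverage": "1974-2024",
     "date_run": date(2024, 1, 15).isoformat(),
     "search_terms": "probiotic* AND eczema", "n_results": 530},
]

# Write the log as CSV so it can be shared alongside the protocol.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(searches)
log_csv = buf.getvalue()
```

Keeping the log in a plain, machine-readable format makes it trivial to re-run and update the searches later.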

➡️ Read more about gray literature.

This should be performed by your two reviewers, using the criteria documented in your research protocol. The screening is done in two phases:

  • Pre-screening of all titles and abstracts, and selecting those appropriate
  • Screening of the full-text articles of the selected studies

Make sure reviewers keep a log of which studies they exclude, with reasons why.
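The two screening phases and the exclusion log can be sketched as a small data structure. Everything below (the record fields, the example exclusion reasons) is hypothetical, chosen only to illustrate the workflow.

```python
# Each record tracks both screening phases; fields and reasons are invented.
records = [
    {"id": 1, "title_abstract_pass": True, "fulltext_pass": True,
     "exclusion_reason": None},
    {"id": 2, "title_abstract_pass": False, "fulltext_pass": None,
     "exclusion_reason": "wrong population"},
    {"id": 3, "title_abstract_pass": True, "fulltext_pass": False,
     "exclusion_reason": "no control group"},
]

# Studies that survive both phases are included in the review.
included = [r["id"] for r in records if r["fulltext_pass"]]

# The exclusion log keeps the reason for every rejected study.
excluded = {r["id"]: r["exclusion_reason"]
            for r in records if r["exclusion_reason"]}
```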

➡️ Visit our guide: what is an abstract?

Your reviewers should evaluate the methodological quality of your chosen full-text articles. Make an assessment checklist that closely aligns with your research protocol, including a consistent scoring system, calculations of the quality of each study, and sensitivity analysis.

The kinds of questions you’ll ask include:

  • Were the participants really randomly allocated to their groups?
  • Were the groups similar in terms of prognostic factors?
  • Could the conclusions of the study have been influenced by bias?

Every step of the data extraction must be documented for transparency and replicability. Create a data extraction form and set your reviewers to work extracting data from the qualified studies.

Here’s a free detailed template for recording data extraction, from Dalhousie University. It should be adapted to your specific question.

Establish a standard measure of outcome which can be applied to each study on the basis of its effect size.

Measures of outcome for studies with:

  • Binary outcomes (e.g. cured/not cured) are odds ratio and risk ratio
  • Continuous outcomes (e.g. blood pressure) are means, difference in means, and standardized difference in means
  • Survival or time-to-event data are hazard ratios
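The binary-outcome measures named above are simple ratios computed from a 2×2 table of counts (events and non-events in each arm). The sketch below shows the standard formulas, with made-up trial numbers.

```python
def risk_ratio(a, b, c, d):
    """a = events, b = non-events in the treatment arm; c, d in the control arm."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """Odds of the event in the treatment arm divided by odds in the control arm."""
    return (a / b) / (c / d)

# Hypothetical trial: 10 of 50 cured on treatment, 5 of 50 cured on control.
rr = risk_ratio(10, 40, 5, 45)   # risk 0.20 vs 0.10
or_ = odds_ratio(10, 40, 5, 45)  # odds 0.25 vs 0.111...
```

Note that for the same data the odds ratio (2.25) is further from 1 than the risk ratio (2.0); the two measures should not be used interchangeably.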

Design a table and populate it with your data results. Draw this out into a forest plot, which provides a simple visual representation of variation between the studies.

Then analyze the data for issues. These can include heterogeneity, which shows up when studies’ lines (their confidence intervals) within the forest plot don’t overlap with one another, indicating variation between studies beyond what chance alone would explain. Again, record any excluded studies here for reference.
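One standard way to quantify heterogeneity beyond eyeballing the forest plot is the I² statistic, derived from Cochran’s Q. The sketch below implements the usual formula with invented effect sizes and variances.

```python
def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic (as a percentage)."""
    weights = [1 / v for v in variances]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    if q <= 0:
        return 0.0
    # I^2 is the share of variability in Q beyond its expected value df.
    return max(0.0, (q - df) / q) * 100

# Widely spread (invented) effects -> substantial heterogeneity.
high = i_squared([0.2, 0.5, 0.8], [0.01, 0.01, 0.01])

# Identical effects -> no heterogeneity at all.
none = i_squared([0.5, 0.5], [0.25, 0.5])
```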

Consider different factors when interpreting your results. These include limitations, strength of evidence, biases, applicability, economic effects, and implications for future practice or research.

Apply appropriate grading of your evidence and consider the strength of your recommendations.

It’s best to formulate a detailed plan for how you’ll present your systematic review results. Take a look at these guidelines for interpreting results from Cochrane.

Before writing your systematic literature review, you can register it with OSF for additional guidance along the way. You could also register your protocol with PROSPERO.

Systematic literature reviews are often found in clinical or healthcare settings. Medical professionals read them to stay up to date in their field, and granting agencies sometimes require them to confirm that further research in an area is justified.

A literature review simply provides a summary of the literature available on a topic. A systematic review, on the other hand, is more than just a summary. It also includes an analysis and evaluation of existing research. Put simply, it's a study of studies.

Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

Throughout this guide, the running example is a systematic review by Boyle and colleagues, who answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is not a type of review but a statistical technique that combines the results of two or more studies, usually to estimate an effect size.

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • A team , ideally of at least three people, to reduce bias. If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained below with a running example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the eczema example, the components of the research question were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, a type of study design

Boyle and colleagues’ research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
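A thorough database search typically combines synonyms of each concept with OR and joins the concepts with AND. The helper below is a hypothetical sketch of that pattern; the function name and the search terms are illustrative, not tied to any particular database’s syntax.

```python
def build_query(concepts):
    """OR together synonyms within a concept, AND between concepts."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concepts]
    return " AND ".join(groups)

# Illustrative terms for the eczema/probiotics example.
query = build_query([
    ["probiotic*", "lactobacillus"],   # intervention synonyms
    ["eczema", "atopic dermatitis"],   # condition synonyms
])
```

Generating the query string programmatically also gives you an exact record of the search terms for your protocol documentation.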

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as EndNote.

Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .

Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
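The quantitative approach above can be made concrete: the most common meta-analytic model, the fixed-effect (inverse-variance) method, weights each study by the inverse of its variance. A minimal sketch with made-up effect sizes follows; real analyses would use dedicated statistical software.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    weights = [1 / v for v in variances]
    estimate = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))  # standard error of the pooled estimate
    return estimate, se

# Two invented studies: the more precise one pulls the pooled estimate toward it.
est, se = pooled_effect([0.30, 0.50], [0.04, 0.01])
```

Because the second study has a quarter of the variance of the first, it receives four times the weight, so the pooled estimate lands much closer to 0.50 than to 0.30.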

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/systematic-review/

Annual Review of Psychology

How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

Volume 70, 2019, Review Article

  • Andy P. Siddaway 1 , Alex M. Wood 2 , and Larry V. Hedges 3
  • Affiliations: 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected] 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected]
  • Vol. 70:747-770 (Volume publication date January 2019) https://doi.org/10.1146/annurev-psych-010418-102803
  • First published as a Review in Advance on August 08, 2018
  • Copyright © 2019 by Annual Reviews. All rights reserved

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

  • Open Sci. Collab. 2015 . Estimating the reproducibility of psychological science. Science 349 : 943 [Google Scholar]
  • Paterson BL , Thorne SE , Canam C , Jillings C 2001 . Meta-Study of Qualitative Health Research: A Practical Guide to Meta-Analysis and Meta-Synthesis Thousand Oaks, CA: Sage [Google Scholar]
  • Patil P , Peng RD , Leek JT 2016 . What should researchers expect when they replicate studies? A statistical view of replicability in psychological science. Perspect. Psychol. Sci. 11 : 539– 44 [Google Scholar]
  • Rosenthal R 1979 . The “file drawer problem” and tolerance for null results. Psychol. Bull. 86 : 638– 41 [Google Scholar]
  • Rosnow RL , Rosenthal R 1989 . Statistical procedures and the justification of knowledge in psychological science. Am. Psychol. 44 : 1276– 84 [Google Scholar]
  • Sanderson S , Tatt ID , Higgins JP 2007 . Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int. J. Epidemiol. 36 : 666– 76 [Google Scholar]
  • Schreiber R , Crooks D , Stern PN 1997 . Qualitative meta-analysis. Completing a Qualitative Project: Details and Dialogue JM Morse 311– 26 Thousand Oaks, CA: Sage [Google Scholar]
  • Shrout PE , Rodgers JL 2018 . Psychology, science, and knowledge construction: broadening perspectives from the replication crisis. Annu. Rev. Psychol. 69 : 487– 510 [Google Scholar]
  • Stroebe W , Strack F 2014 . The alleged crisis and the illusion of exact replication. Perspect. Psychol. Sci. 9 : 59– 71 [Google Scholar]
  • Stroup DF , Berlin JA , Morton SC , Olkin I , Williamson GD et al. 2000 . Meta-analysis of observational studies in epidemiology (MOOSE): a proposal for reporting. JAMA 283 : 2008– 12 [Google Scholar]
  • Thorne S , Jensen L , Kearney MH , Noblit G , Sandelowski M 2004 . Qualitative meta-synthesis: reflections on methodological orientation and ideological agenda. Qual. Health Res. 14 : 1342– 65 [Google Scholar]
  • Tong A , Flemming K , McInnes E , Oliver S , Craig J 2012 . Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med. Res. Methodol. 12 : 181– 88 [Google Scholar]
  • Trickey D , Siddaway AP , Meiser-Stedman R , Serpell L , Field AP 2012 . A meta-analysis of risk factors for post-traumatic stress disorder in children and adolescents. Clin. Psychol. Rev. 32 : 122– 38 [Google Scholar]
  • Valentine JC , Biglan A , Boruch RF , Castro FG , Collins LM et al. 2011 . Replication in prevention science. Prev. Sci. 12 : 103– 17 [Google Scholar]

Systematic Reviews

  • Types of Literature Reviews

What Makes a Systematic Review Different from Other Types of Reviews?


Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91–108. doi:10.1111/j.1471-1842.2009.00848.x

  • Critical review: Aims to demonstrate that the writer has extensively researched the literature and critically evaluated its quality. Goes beyond mere description to include a degree of analysis and conceptual innovation; typically results in a hypothesis or model. Search: seeks to identify the most significant items in the field. Appraisal: no formal quality assessment; attempts to evaluate according to contribution. Synthesis: typically narrative, perhaps conceptual or chronological. Analysis: significant component; seeks to identify conceptual contribution to embody existing or derive new theory.
  • Literature review: Generic term for published materials that provide an examination of recent or current literature. Can cover a wide range of subjects at various levels of completeness and comprehensiveness, and may include research findings. Search: may or may not be comprehensive. Appraisal: may or may not include quality assessment. Synthesis: typically narrative. Analysis: may be chronological, conceptual, thematic, etc.
  • Mapping review/systematic map: Maps out and categorizes existing literature, from which to commission further reviews and/or primary research by identifying gaps in the research literature. Search: completeness determined by time/scope constraints. Appraisal: no formal quality assessment. Synthesis: may be graphical and tabular. Analysis: characterizes quantity and quality of literature, perhaps by study design and other key features; may identify the need for primary or secondary research.
  • Meta-analysis: Technique that statistically combines the results of quantitative studies to provide a more precise estimate of effect. Search: aims for exhaustive, comprehensive searching; may use a funnel plot to assess completeness. Appraisal: quality assessment may determine inclusion/exclusion and/or sensitivity analyses. Synthesis: graphical and tabular with narrative commentary. Analysis: numerical analysis of measures of effect, assuming absence of heterogeneity.
  • Mixed studies review/mixed methods review: Refers to any combination of methods where one significant component is a literature review (usually systematic); within a review context it refers to a combination of review approaches, for example combining quantitative with qualitative research, or outcome with process studies. Search: requires either a very sensitive search to retrieve all studies, or separately conceived quantitative and qualitative strategies. Appraisal: requires either a generic appraisal instrument or separate appraisal processes with corresponding checklists. Synthesis: typically both components presented as narrative and in tables; may also employ graphical means of integrating quantitative and qualitative studies. Analysis: may characterise both literatures and look for correlations between characteristics, or use gap analysis to identify aspects absent in one literature but present in the other.
  • Overview: Generic term for a summary of the [medical] literature that attempts to survey the literature and describe its characteristics. Search: may or may not be comprehensive (depends whether the overview is systematic or not). Appraisal: may or may not include quality assessment (likewise). Synthesis: depends on whether systematic or not; typically narrative but may include tabular features. Analysis: may be chronological, conceptual, thematic, etc.
  • Qualitative systematic review/qualitative evidence synthesis: Method for integrating or comparing the findings from qualitative studies; looks for 'themes' or 'constructs' that lie in or across individual qualitative studies. Search: may employ selective or purposive sampling. Appraisal: quality assessment typically used to mediate messages, not for inclusion/exclusion. Synthesis: qualitative, narrative synthesis. Analysis: thematic analysis; may include conceptual models.
  • Rapid review: Assessment of what is already known about a policy or practice issue, using systematic review methods to search and critically appraise existing research. Search: completeness determined by time constraints. Appraisal: time-limited formal quality assessment. Synthesis: typically narrative and tabular. Analysis: quantities of literature and overall quality/direction of effect of literature.
  • Scoping review: Preliminary assessment of the potential size and scope of available research literature; aims to identify the nature and extent of research evidence (usually including ongoing research). Search: completeness determined by time/scope constraints; may include research in progress. Appraisal: no formal quality assessment. Synthesis: typically tabular with some narrative commentary. Analysis: characterizes quantity and quality of literature, perhaps by study design and other key features; attempts to specify a viable review.
  • State-of-the-art review: Tends to address more current matters, in contrast to other combined retrospective and current approaches; may offer new perspectives. Search: aims for comprehensive searching of current literature. Appraisal: no formal quality assessment. Synthesis: typically narrative, may have tabular accompaniment. Analysis: current state of knowledge and priorities for future investigation and research.
  • Systematic review: Seeks to systematically search for, appraise and synthesise research evidence, often adhering to guidelines on the conduct of a review. Search: aims for exhaustive, comprehensive searching. Appraisal: quality assessment may determine inclusion/exclusion. Synthesis: typically narrative with tabular accompaniment. Analysis: what is known, with recommendations for practice; what remains unknown, uncertainty around findings, recommendations for future research.
  • Systematic search and review: Combines the strengths of a critical review with a comprehensive search process; typically addresses broad questions to produce a 'best evidence synthesis'. Search: aims for exhaustive, comprehensive searching. Appraisal: may or may not include quality assessment. Synthesis: minimal narrative, tabular summary of studies. Analysis: what is known, recommendations for practice, limitations.
  • Systematized review: Attempts to include elements of the systematic review process while stopping short of a systematic review; typically conducted as a postgraduate student assignment. Search: may or may not be comprehensive. Appraisal: may or may not include quality assessment. Synthesis: typically narrative with tabular accompaniment. Analysis: what is known, uncertainty around findings, limitations of methodology.
  • Umbrella review: Specifically refers to a review compiling evidence from multiple reviews into one accessible and usable document. Focuses on a broad condition or problem for which there are competing interventions, highlighting reviews that address these interventions and their results. Search: identification of component reviews, but no search for primary studies. Appraisal: quality assessment of studies within component reviews and/or of the reviews themselves. Synthesis: graphical and tabular with narrative commentary. Analysis: what is known, with recommendations for practice; what remains unknown, with recommendations for future research.
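
The statistical combination at the heart of a meta-analysis (described above) is, in its simplest fixed-effect form, an inverse-variance weighted average: each study's effect estimate is weighted by the reciprocal of its variance, so more precise studies count for more. A minimal sketch, using made-up effect sizes and variances:

```python
import math

def fixed_effect_pool(effects, variances):
    """Fixed-effect pooled estimate: weight each study by 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Hypothetical effect sizes and variances from three studies; the third,
# most precise study dominates the weighted average.
pooled, se = fixed_effect_pool([0.3, 0.5, 0.2], [0.04, 0.09, 0.01])
```

As the typology entry notes, this fixed-effect form assumes absence of heterogeneity; real meta-analyses typically test that assumption and fall back to a random-effects model when it fails.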
  • URL: https://guides.library.ucla.edu/systematicreviews

Penn State University Libraries


Know the Difference! Systematic Review vs. Literature Review

It is common to confuse systematic reviews and literature reviews, as both summarize the existing literature or research on a specific topic. Despite this common ground, the two types differ significantly. The following chart (and the corresponding poster linked below) explains each type and the differences between them.

Systematic vs. Literature Review

Definition
  • Systematic review: A high-level overview of primary research on a focused question that identifies, selects, synthesizes, and appraises all high-quality research evidence relevant to that question.
  • Literature review: Qualitatively summarizes evidence on a topic using informal or subjective methods to collect and interpret studies.

Goals
  • Systematic review: Answers a focused clinical question; aims to eliminate bias.
  • Literature review: Provides a summary or overview of a topic.

Question
  • Systematic review: A clearly defined and answerable clinical question; PICO is recommended as a guide.
  • Literature review: Can be a general topic or a specific question.

Components
  • Systematic review: Pre-specified eligibility criteria; systematic search strategy; assessment of the validity of findings; interpretation and presentation of results; reference list.
  • Literature review: Introduction; methods; discussion; conclusion; reference list.

Number of authors
  • Systematic review: Three or more.
  • Literature review: One or more.

Timeline
  • Systematic review: Months to years (eighteen months on average).
  • Literature review: Weeks to months.

Requirements
  • Systematic review: Thorough knowledge of the topic; searches of all relevant databases; statistical analysis resources (for meta-analysis).
  • Literature review: Understanding of the topic; searches of one or more databases.

Value
  • Systematic review: Connects practicing clinicians to high-quality evidence; supports evidence-based practice.
  • Literature review: Provides a summary of the literature on the topic.
  • What's in a name? The difference between a Systematic Review and a Literature Review, and why it matters by Lynn Kysh, MLIS, University of Southern California - Norris Medical Library
  • URL: https://guides.libraries.psu.edu/nursing


What is the difference between a systematic review and a systematic literature review?

By Carol Hollier on 07-Jan-2020 12:42:03


For those not immersed in systematic reviews, understanding the difference between a systematic review and a systematic literature review can be confusing.  It helps to realise that a “systematic review” is a clearly defined thing, but ambiguity creeps in around the phrase “systematic literature review” because people can and do use it in a variety of ways. 

A systematic review is a research study of research studies.  To qualify as a systematic review, a review needs to adhere to standards of transparency and reproducibility.  It will use explicit methods to identify, select, appraise, and synthesise empirical results from different but similar studies.  The study will be done in stages:  

  • Stage one: frame an answerable question
  • Stage two: conduct a comprehensive literature search to identify relevant studies
  • Stage three: scrutinise the quality of the identified literature and decide whether or not to include each article in the review
  • Stage four: summarise the evidence and, if the review includes a meta-analysis, extract the data
  • Stage five: interpret the findings [1]

Some reviews also state what degree of confidence can be placed on that answer, using the GRADE approach. By going through these steps, a systematic review provides a broad evidence base on which to make decisions about medical interventions, regulatory policy, safety, or whatever question is analysed. By documenting each step explicitly, the review is not only reproducible, but can be updated as more evidence on the question is generated.

Sometimes when people talk about a “systematic literature review”, they are using the phrase interchangeably with “systematic review”.  However, people can also use the phrase systematic literature review to refer to a literature review that is done in a fairly systematic way, but without the full rigor of a systematic review. 

For instance, for a systematic review, reviewers would strive to locate relevant unpublished studies in grey literature and possibly by contacting researchers directly.  Doing this is important for combatting publication bias, which is the tendency for studies with positive results to be published at a higher rate than studies with null results.  It is easy to understand how this well-documented tendency can skew a review’s findings, but someone conducting a systematic literature review in the loose sense of the phrase might, for lack of resource or capacity, forgo that step. 
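
One classic, admittedly crude, way to gauge how vulnerable a review's conclusion is to this file-drawer effect is Rosenthal's fail-safe N: the number of unpublished null-result studies that would have to exist to drag a combined one-tailed result below significance. A simplified sketch, with hypothetical z-scores:

```python
def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unpublished, averaged-null studies
    would pull the combined one-tailed z below the significance cutoff."""
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (z_crit ** 2) - k

# Hypothetical z-scores from three published studies
n_fs = failsafe_n([2.0, 2.5, 1.8])  # roughly 11.7 hidden null studies
```

Modern practice favours funnel-plot and regression-based diagnostics over the fail-safe N, but it remains a quick way to build intuition about publication bias.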

Another difference might be in who is doing the research for the review. A systematic review is generally conducted by a team including an information professional for searches and a statistician for meta-analysis, along with subject experts.  Team members independently evaluate the studies being considered for inclusion in the review and compare results, adjudicating any differences of opinion.   In contrast, a systematic literature review might be conducted by one person. 

Overall, while a systematic review must comply with set standards, you would expect any review called a systematic literature review to strive to be quite comprehensive.  A systematic literature review would contrast with what is sometimes called a narrative or journalistic literature review, where the reviewer’s search strategy is not made explicit, and evidence may be cherry-picked to support an argument.

FSTA is a key tool for systematic reviews and systematic literature reviews in the sciences of food and health.


The patents indexed in FSTA help researchers find results of research that is not otherwise publicly available because it was done for commercial purposes.

The FSTA thesaurus will surface results that would be missed with keyword searching alone. Since the thesaurus is designed for the sciences of food and health, it is the most comprehensive for the field. 

All indexing and abstracting in FSTA is in English, so you can do your searching in English yet pick up non-English language results, and get those results translated if they meet the criteria for inclusion in a systematic review.

FSTA includes grey literature (conference proceedings) which can be difficult to find, but is important to include in comprehensive searches.

FSTA content has a deep archive. It goes back to 1969 for farm to fork research, and back to the late 1990s for food-related human nutrition literature—systematic reviews (and any literature review) should include not just the latest research but all relevant research on a question. 

You can also use FSTA to find literature reviews.

FSTA allows you to easily search for review articles (both narrative and systematic reviews) by using the subject heading or thesaurus term "REVIEWS" and an appropriate free-text keyword.

On the Web of Science or EBSCO platform, an FSTA search for reviews about cassava would look like this: DE "REVIEWS" AND cassava.

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND cassava.af.

In 2011 FSTA introduced the descriptor META-ANALYSIS, making it easy to search specifically for systematic reviews that include a meta-analysis published from that year onwards.

On the EBSCO or Web of Science platform, an FSTA search for systematic reviews with meta-analyses about Staphylococcus aureus would look like this: DE "META-ANALYSIS" AND staphylococcus aureus.

On the Ovid platform using the multi-field search option, the search would look like this: meta-analysis.sh. AND staphylococcus aureus.af.

Systematic reviews with meta-analyses published before 2011 are included in the REVIEWS controlled vocabulary term in the thesaurus.

An easy way to locate pre-2011 systematic reviews with meta-analyses is to search the subject heading or thesaurus term "REVIEWS" AND meta-analysis as a free-text keyword AND another appropriate free-text keyword.

On the Web of Science or EBSCO platform, the FSTA search would look like this: DE "REVIEWS" AND meta-analysis AND carbohydrate*

On the Ovid platform using the multi-field search option, the search would look like this: reviews.sh. AND meta-analysis.af. AND carbohydrate*.af.

Related resources:

  • Literature Searching Best Practise Guide
  • Predatory publishing: Investigating researchers’ knowledge & attitudes
  • The IFIS Expert Guide to Journal Publishing



Systematic Reviews

A PowerPoint presentation from the Texas Medical Center Library provides a brief overview of systematic reviews.

What is a systematic review?

A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a research question. 1 Systematic reviews are research projects that provide new insight on a topic and are designed to minimize bias. The project creates accessible research that examines relevant literature, which aids decision makers by aggregating information in a systematic way. Methodological transparency, along with its systematic approach and project reproducibility, are key elements of a systematic review.

1. Taken from Lasserson TJ, Thomas J, Higgins JPT. Chapter 1: Starting a review. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors).  Cochrane Handbook for Systematic Reviews of Interventions  version 6.1 (updated September 2020). Cochrane, 2020. Available from  www.training.cochrane.org/handbook

Components of a Systematic Review

Key elements of a systematic review include:

  • A specific and well-formulated question
  • A reproducible methodology intended to avoid bias 
  • Multiple databases searched for the review's data
  • Specified and predefined inclusion and exclusion criteria 
  • Multiple reviewers of the literature 
  • Study assessments conducted in a standardized way with definitive methodology
  • Adherence to a standardized reporting guideline, such as PRISMA
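
The centrepiece of PRISMA reporting is a flow diagram that accounts for every record from identification through to inclusion. The underlying bookkeeping is simple arithmetic; the sketch below uses invented counts purely for illustration:

```python
# Hypothetical counts for a PRISMA-style flow diagram; every record must be
# accounted for from identification through to inclusion.
identified = {"databases": 1240, "grey_literature": 85}

duplicates_removed = 310
excluded_title_abstract = 870   # clearly irrelevant on title/abstract screening
excluded_full_text = 98         # failed eligibility criteria on full-text read

records_screened = sum(identified.values()) - duplicates_removed
full_texts_assessed = records_screened - excluded_title_abstract
studies_included = full_texts_assessed - excluded_full_text

print(records_screened, full_texts_assessed, studies_included)  # 1015 145 47
```

Each subtraction corresponds to one box in the flow diagram, and PRISMA expects the reasons for full-text exclusions to be itemized in the published review.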

Systematic reviews can have an impact on the development of public health policies and on resource allocation decisions. They can inform clinical practices and implement evidence-based interventions for diseases and illnesses. Moreover, systematic reviews can compare benefits and harms of treatment options.

The systematic review process has been developed to minimize bias and ensure transparency. Methods should be documented in enough detail that they can be replicated. The integrity of a systematic review rests on the transparency and reproducibility of its methods.

There are many resources on how to conduct, organize, and publish a systematic review. This guide is by no means exhaustive; its aim is to provide a starting place for understanding the core of what a systematic review is and how to conduct one.

What does it take to do a systematic review?

Time: On average, systematic reviews can require up to 18 months of preparation.

A team:  A systematic review can't be done alone! You need to work with subject experts to clarify issues related to the topic; librarians to develop comprehensive search strategies and identify appropriate databases; reviewers to screen abstracts and read the full text; a statistician who can assist with data analysis; and a project leader to coordinate the team and movement of data.

A clearly defined question : A clearly defined research question can help clarify the key concepts of a systematic review and explain the rationale for the review. It is recommended to use a framework (e.g. the PICO framework) to identify key concepts of the question.
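
As an illustration of the PICO framework (the clinical content below is invented), a focused question can be assembled from its four components:

```python
# Invented clinical content, purely to illustrate the PICO structure
pico = {
    "population":   "adults with type 2 diabetes",
    "intervention": "a low-carbohydrate diet",
    "comparison":   "standard dietary advice",
    "outcome":      "glycaemic control (HbA1c) at 12 months",
}

question = ("In {population}, does {intervention} compared with "
            "{comparison} improve {outcome}?").format(**pico)
```

Each PICO component later becomes a concept block in the search strategy and a column in the eligibility criteria.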

A written protocol: The protocol should outline the study methodology, including the rationale for the systematic review; the research question broken into PICO components; explicit inclusion/exclusion criteria; relevant known literature on the research question; preliminary search terms and databases to be used; intended data abstraction/data management tools; and any other components needed to register the protocol.

A registered protocol: Recommended registries include PROSPERO (the International Prospective Register of Systematic Reviews), Cochrane, and the Agency for Healthcare Research and Quality. Registering a protocol is important because it reduces duplication of effort and promotes transparency.

Inclusion/exclusion criteria: These help researchers define the terms of the investigation and will include the predefined question; study types; study-analysis criteria (i.e. criteria for reporting bias within studies); and quantitative methods to be used for any statistical analysis.

Comprehensive literature searches :  Identify appropriate databases and conduct comprehensive and detailed literature searches that can be documented and duplicated. Cochrane recommends that 3+ different databases be used to conduct the searches. A strategy must be developed and then translated across the multiple pre-specified databases, preferably by an information specialist.  
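
Translating a strategy across databases usually means keeping the same concept blocks (synonyms OR-ed within a concept, concepts AND-ed together) while adapting each database's field codes and syntax. A hypothetical sketch of the concept-block logic, with invented terms:

```python
def build_query(concepts):
    """Combine synonym lists: OR within a concept, AND across concepts.
    Multi-word terms are quoted for phrase searching."""
    groups = ["(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"
              for terms in concepts]
    return " AND ".join(groups)

# Hypothetical concept blocks for a diet-and-diabetes question
query = build_query([
    ["diabetes", "diabetic"],
    ["low-carbohydrate", "low carb", "ketogenic"],
])
# query == '(diabetes OR diabetic) AND (low-carbohydrate OR "low carb" OR ketogenic)'
```

The documented version of each translated search (database, date, and exact string) is what makes the search reproducible.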

Citation management:  You should have working knowledge of EndNote or another citation management system that will be accessible to the research team to help manage citations retrieved from literature searches.
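
Citation managers handle most deduplication automatically, but the underlying logic is worth understanding: records exported from multiple databases are matched on a normalised key such as DOI or, failing that, title. A simplified sketch with invented records:

```python
def deduplicate(records):
    """Keep the first record for each normalised DOI (or title when DOI is absent)."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or rec["title"]).strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

refs = [
    {"title": "Study A", "doi": "10.1000/xyz123"},
    {"title": "STUDY A", "doi": "10.1000/XYZ123"},  # same paper from another database
    {"title": "Study B", "doi": None},
]
unique = deduplicate(refs)  # 2 records remain
```

Real deduplication is fuzzier (punctuation, page-number and author-order variants), which is why a manual check of near-duplicates is still recommended.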

Follow reporting guidelines :  Use appropriate guidelines for reporting your review for publication.

The suggested timeline for a Cochrane review is:

  • Preparation of protocol: 1–2 months
  • Searches for published and unpublished studies: 3–8 months
  • Pilot test of eligibility criteria: 2–3 months
  • Inclusion assessments: 3–8 months
  • Pilot test of 'Risk of bias' assessment: 3 months
  • Validity assessments: 3–10 months
  • Pilot test of data collection: 3 months
  • Data collection: 3–10 months
  • Data entry: 3–10 months
  • Follow up of missing information: 5–11 months
  • Analysis: 8–10 months
  • Preparation of review report: 1–11 months
  • Keeping the review up-to-date: 12 months
  • URL: https://libguides.sph.uth.tmc.edu/SystematicReviews
  • Published: 08 June 2023

Guidance to best tools and practices for systematic reviews

  • Kat Kolaski 1 ,
  • Lynne Romeiser Logan 2 &
  • John P. A. Ioannidis 3  

Systematic Reviews volume  12 , Article number:  96 ( 2023 ) Cite this article


Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years when EBM first began to gain traction until recent times, when thousands of systematic reviews are published monthly [ 3 ], the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometric increase in the number of evidence syntheses being published, a relatively larger pool of unreliable evidence syntheses is being published today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 , 5 , 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 , 19 , 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 , 24 , 25 , 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortia of EBM experts and national health care organizations currently provide detailed guidance (Table 1 ). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists is required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 , 33 , 34 , 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods rather than copy the citations or substandard practices of previous work [ 38 , 39 ]. Similar cautions may potentially extend to automation tools. These have concentrated on evidence searching [ 40 ] and selection, given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19 [ 2 , 42 ], but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 , 45 , 46 , 47 , 48 ]. At this juncture, the extent to which each of these factors contributes remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or unintentional misrepresentation or “spin” of the research findings [ 49 , 50 , 51 , 52 ]. News and social media outlets also tend to reduce conclusions on a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1 ). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 , 56 , 57 , 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2 A.

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1 ) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2 B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.
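As an illustration, the scheme’s successive distinctions can be expressed as a short decision function. This is only a sketch of the ordering of the distinctions described above; the attribute names and output phrasing are ours and purely hypothetical, and real classification requires reading the full study report, not just its label:

```python
from dataclasses import dataclass

@dataclass
class Study:
    # Hypothetical attributes for illustration only.
    is_primary: bool     # primary study vs secondary study (synthesis of other studies)
    quantitative: bool   # quantitative vs qualitative data
    group_design: bool   # group vs single-case design
    randomized: bool     # randomized vs non-randomized allocation

def classify(s: Study) -> str:
    """Apply the scheme's distinctions in order: primary/secondary,
    then data type, then the two defining design features."""
    if not s.is_primary:
        return "secondary study (e.g., systematic review)"
    if not s.quantitative:
        return "primary qualitative study"
    kind = "group" if s.group_design else "single-case"
    alloc = "randomized" if s.randomized else "non-randomized"
    return f"primary quantitative {kind} study, {alloc}"

# A cohort study, under this scheme:
print(classify(Study(is_primary=True, quantitative=True,
                     group_design=True, randomized=False)))
# prints "primary quantitative group study, non-randomized"
```

Note that the function deliberately stops at design features rather than assigning a named design label (cohort, case control, and so on), mirroring the scheme’s avoidance of ambiguous labels.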

Figure 1. Distinguishing types of research evidence

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of interventions (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to report their summary estimates separately [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with the developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from studies misclassified as case series can potentially increase the confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, which undermines the ability to make an overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 , 85 , 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported evidence synthesis may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1  but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake, impact, and limitations are also discussed.

Evaluation of conduct

Development.

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2 . Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section; this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic, as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].
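The category-based (rather than additive) logic of an overall rating can be sketched in code. The following is our paraphrase of the AMSTAR-2 developers’ published rating scheme, with a hypothetical function name and simplified inputs; it is illustrative only, and an actual appraisal depends on judgments about which specific items fail, not just counts:

```python
def amstar2_overall(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Map counts of failed critical and non-critical AMSTAR-2 items to the
    overall confidence categories (paraphrased; not an official tool).
    The rating hinges on *which kind* of item fails, never on a summed total."""
    if critical_flaws > 1:
        return "Critically low"
    if critical_flaws == 1:
        return "Low"
    if noncritical_weaknesses > 1:
        return "Moderate"
    return "High"  # no critical flaws, at most one non-critical weakness

# Two reviews with the same number of failed items can rate very differently,
# which is why summing responses into a score is a misuse of the tool:
print(amstar2_overall(critical_flaws=2, noncritical_weaknesses=0))  # prints "Critically low"
print(amstar2_overall(critical_flaws=0, noncritical_weaknesses=2))  # prints "Moderate"
```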

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 , 100 , 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines,” “ROBIS AND clinical practice guidelines” 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 , 104 , 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate because the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools imposed by them. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect uptake of AMSTAR-2 to be associated with improved methodological rigor, and uptake of ROBIS with lower RoB, in recent systematic reviews compared with those published before 2017. To our knowledge, this has not yet been demonstrated; however, as with reports about the actual uptake of these tools, time will tell. Additional data on user experience are also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy, and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses, and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require that a PRISMA checklist accompany submissions of systematic review manuscripts. However, the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review, as opposed to merely completing a checklist when submitting to a journal; at that point, the review is finished, with good or bad methodological choices. Moreover, PRISMA checklists evaluate how completely an element of review conduct was reported; they do not evaluate the caliber of conduct or performance of a review. Thus, review authors and readers should not assume that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. Training in these tools can sensitize peer reviewers and editors considering a manuscript for publication to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1 ). These prompt authors to consider the specialized methods involved in developing different types of systematic reviews; however, while inclusion of the suggested elements makes a review compliant with the methods of a particular review type, it does not necessarily make the research question itself appropriate. Table 4.2  lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or develop a runaway scope that allows them to stray from predefined choices relating to key comparisons and outcomes.

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

An association between published prospective protocols and attainment of AMSTAR standards in systematic reviews has been reported [ 134 ]. However, completeness of reporting does not seem to differ between reviews with a protocol and those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. The most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires that authors register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and less biased) information on harms [ 143 ], and data from non-randomized trials may not necessarily be more real-world-oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses nor provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. Searches of the gray literature and of trials registers may also reveal important details about topics that would otherwise be missed [ 149 , 150 , 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in the gray literature and in trials registers [ 41 , 151 , 152 , 153 ].

Authors should make every attempt to complete their review within one year, as that is the likely viable life of a search. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Some research topics may warrant even less of a delay; in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with an appraisal of their RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note that the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB 2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards for RoB assessment. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and if their importance differs in relationship to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis, and synthesis without meta-analysis (Table 4.4 ). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. When conducted carefully, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and estimates of effects. We refer to standard references for a thorough introduction and formal training [ 165 , 166 , 167 ].

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4 ). Aggregate data meta-analysis is the most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4  for details).
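The weighted-average principle behind standard aggregate data meta-analysis can be illustrated with a minimal sketch. The function name and the three hypothetical studies below are ours for illustration only; this is a fixed-effect, inverse-variance pooling of log odds ratios, not the method of any study cited here.

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance weighted (fixed-effect) pooled estimate.

    estimates: per-study effect estimates (e.g., log odds ratios)
    std_errors: their standard errors
    Returns (pooled_estimate, pooled_standard_error).
    """
    weights = [1.0 / se ** 2 for se in std_errors]  # weight = 1 / variance
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios
log_ors = [-0.65, -0.40, -0.55]
ses = [0.30, 0.20, 0.25]
est, se = fixed_effect_meta(log_ors, ses)
ci = (est - 1.96 * se, est + 1.96 * se)  # 95% confidence interval
```

Note how the study with the smallest standard error (0.20) receives the largest weight; the pooled standard error is smaller than any individual study's, which is the gain in precision described above. Random-effects models add a between-study variance term to each weight and are usually more appropriate when heterogeneity is present.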

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for syntheses without meta-analysis involve structured presentations of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4 ) are formally applied; however, it is important to recognize that these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
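To make the acceptable approach concrete, here is a minimal sketch of vote counting by direction of effect, paired with an exact sign test of whether the observed split of directions is compatible with chance. The data and function name are hypothetical; the sketch follows the general idea of direction-based vote counting, not any specific published implementation.

```python
from math import comb

def vote_count_by_direction(effects):
    """Vote count based on direction of effect (not significance or magnitude).

    effects: per-study effect estimates on a scale where > 0 favors the intervention.
    Returns (proportion favoring the intervention, two-sided sign-test p-value
    for the null of no preferred direction); exact zeros are excluded.
    """
    n_pos = sum(1 for e in effects if e > 0)
    n = sum(1 for e in effects if e != 0)
    prop = n_pos / n
    # Exact two-sided binomial (sign) test with p = 0.5
    k = min(n_pos, n - n_pos)
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return prop, min(1.0, 2 * tail)

# Hypothetical example: 9 of 11 studies report effects favoring the intervention
prop, p = vote_count_by_direction(
    [0.4, 0.1, 0.3, -0.2, 0.5, 0.2, 0.6, 0.1, -0.1, 0.3, 0.2]
)
```

Only the sign of each estimate is used, which is exactly why this method cannot speak to the magnitude of an intervention's effect, and why counting by statistical significance instead would be misleading.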

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team’s awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors’ training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ] and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Reporting an assessment of the overall certainty of evidence in a systematic review is an important new standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal of systems used by prominent health care organizations published in 2004 revealed limitations in sensibility, reproducibility, applicability to different questions, and usability to different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 , 185 , 186 ] and misleading interpretations of the evidence related to the “spin” systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine it, first introduced GRADE in 2004 [ 188 ]. Currently, more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, “certainty” of a body of evidence is preferred over the term “quality” [ 191 ]. Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.
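The start-then-adjust logic described above can be caricatured as simple bookkeeping. The sketch below is our illustrative simplification only: real GRADE judgments are qualitative, made per outcome, require explicit rationale, and may fall between levels, so this is not an official algorithm; the domain names echo the criteria in Table 5.1.

```python
# Illustrative simplification of the GRADE up/down rating logic; not an
# official algorithm. Real judgments are qualitative and outcome-specific.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(study_type, downgrades, upgrades):
    """Start at 'high' for RCTs or 'low' for NRSI, then move the rating down
    for each serious concern and up for each applicable upgrade criterion.

    downgrades: dict of domain -> levels subtracted (1 = serious, 2 = very serious);
      domains: risk_of_bias, inconsistency, indirectness, imprecision, publication_bias
    upgrades: dict of domain -> levels added (mainly relevant for NRSI);
      domains: large_effect, dose_response, plausible_confounding
    """
    score = 3 if study_type == "RCT" else 1
    score -= sum(downgrades.values())
    score += sum(upgrades.values())
    return LEVELS[max(0, min(3, score))]

# RCT evidence with serious risk of bias and serious imprecision
rating = grade_certainty("RCT", {"risk_of_bias": 1, "imprecision": 1}, {})
```

Even this toy version shows why ratings are outcome-specific: the same body of studies can yield different downgrade judgments (eg, imprecision) for different outcomes, and thus different certainty levels within one review.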

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric concepts of certainty of evidence in the GRADE framework are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure that GRADE methods were employed by the authors of a systematic review (or the developers of a CPG). Table 5.3 lists the criteria the GRADE Working Group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [191], which is distinct from rating overall certainty across the outcomes considered in the formulation of recommendations [205]. Modifications of standard GRADE methods and terminology are discouraged, as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [206].

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.
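The domain-based logic behind a GRADE certainty rating — a starting level set by study design, then downgraded or upgraded according to the criteria referenced above (Table 5.1) — can be sketched as follows. This is a deliberately simplified illustration of the general scheme, not the complete GRADE method; the function and variable names are our own.

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized, downgrades, upgrades=0):
    """Illustrative GRADE-style rating for one outcome.

    Start from the study design (high for randomized evidence, low for
    observational), subtract downgrade levels assigned for the five
    domains (risk of bias, inconsistency, indirectness, imprecision,
    publication bias), add any upgrade levels (e.g., large effect),
    and clamp the result to the four-level scale.
    """
    start = 3 if randomized else 1
    score = start - sum(downgrades.values()) + upgrades
    return LEVELS[max(0, min(3, score))]

# Randomized evidence downgraded one level each for risk of bias
# and imprecision (hypothetical judgments).
rating = grade_certainty(
    randomized=True,
    downgrades={"risk_of_bias": 1, "inconsistency": 0, "indirectness": 0,
                "imprecision": 1, "publication_bias": 0},
)
print(rating)  # → low
```

The sketch makes the point in the paragraph above concrete: if the risk-of-bias (or any other domain) judgments feeding the downgrades are omitted or performed inadequately, the resulting certainty rating is not a valid GRADE rating, however familiar its label looks.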

Other caveats pertain to the application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5B and included in Table 6.3, which is introduced in the next section.

The expected culmination of a systematic review is a rating of the overall certainty of the body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments in systematic reviews of interventions and can be adapted for other types of reviews; it also represents the initial step in the process of making recommendations based on evidence syntheses. When reviewing any evidence synthesis that reports certainty ratings derived using GRADE, peer reviewers should ensure the authors meet the minimal criteria for use of the GRADE approach. Authors and peer reviewers of evidence syntheses who are unfamiliar with GRADE are encouraged to seek formal training and to take advantage of the resources available on the GRADE website [211, 212].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors, who may themselves disregard current standards. When such a systematic review is published or included in a CPG, clinicians and other stakeholders tend to assume it is trustworthy. This sustains a vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all of these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for the inclusion of Parts 2 through 5 in this guidance document: these sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the development and evaluation of evidence syntheses is problematic for authors, peer reviewers, and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1) defined in the PRISMA 2020 statement [93]. In addition, we have identified several problematic expressions and nomenclature; in Table 6.2, we compile suggestions for preferred terms that are less likely to be misinterpreted.

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the evidence it summarizes is sparse, weak, and/or biased [24]. Many intervention systematic reviews, including those developed by Cochrane [203] and those applying GRADE [202], ultimately find no evidence or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; nevertheless, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may reflect limitations of conventional methods focused on hypothesis testing, which have emphasized the importance of statistical significance in primary research and of effect sizes from aggregate meta-analyses [183]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [130]. Development of the GRADE approach has facilitated a better understanding of the significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [230, 231], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [2], the paradigm shift to living systematic review and NMA platforms [232, 233], and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [234]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined, less duplicative, and, most importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously conducted and reported.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, new technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and synthesis types. All of us must continue to study, teach, and act cooperatively for that to happen.

References

Muka T, Glisic M, Milic J, Verhoog S, Bohlius J, Bramer W, et al. A 24-step guide on how to design, conduct, and successfully publish a systematic review and meta-analysis in medical research. Eur J Epidemiol. 2020;35(1):49–60.

Thomas J, McDonald S, Noel-Storr A, Shemilt I, Elliott J, Mavergames C, et al. Machine learning reduced workload with minimal risk of missing studies: development and evaluation of a randomized controlled trial classifier for Cochrane reviews. J Clin Epidemiol. 2021;133:140–51.

Fontelo P, Liu F. A review of recent publication trends from top publishing countries. Syst Rev. 2018;7(1):147.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:1–7.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358: j4008.

Goldkuhle M, Narayan VM, Weigl A, Dahm P, Skoetz N. A systematic assessment of Cochrane reviews and systematic reviews published in high-impact medical journals related to cancer. BMJ Open. 2018;8(3): e020869.

Ho RS, Wu X, Yuan J, Liu S, Lai X, Wong SY, et al. Methodological quality of meta-analyses on treatments for chronic obstructive pulmonary disease: a cross-sectional study using the AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. NPJ Prim Care Respir Med. 2015;25:14102.

Tsoi AKN, Ho LTF, Wu IXY, Wong CHL, Ho RST, Lim JYY, et al. Methodological quality of systematic reviews on treatments for osteoporosis: a cross-sectional study. Bone. 2020;139(June): 115541.

Arienti C, Lazzarini SG, Pollock A, Negrini S. Rehabilitation interventions for improving balance following stroke: an overview of systematic reviews. PLoS ONE. 2019;14(7):1–23.

Kolaski K, Romeiser Logan L, Goss KD, Butler C. Quality appraisal of systematic reviews of interventions for children with cerebral palsy reveals critically low confidence. Dev Med Child Neurol. 2021;63(11):1316–26.

Almeida MO, Yamato TP, Parreira PCS, do Costa LOP, Kamper S, Saragiotto BT. Overall confidence in the results of systematic reviews on exercise therapy for chronic low back pain: a cross-sectional analysis using the Assessing the Methodological Quality of Systematic Reviews (AMSTAR) 2 tool. Braz J Phys Ther. 2020;24(2):103–17.

Mayo-Wilson E, Ng SM, Chuck RS, Li T. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews. BMC Ophthalmol. 2017;17(1):1–10.

Matthias K, Rissling O, Pieper D, Morche J, Nocon M, Jacobs A, et al. The methodological quality of systematic reviews on the treatment of adult major depression needs improvement according to AMSTAR 2: a cross-sectional study. Heliyon. 2020;6(9): e04776.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Churuangsuk C, Kherouf M, Combet E, Lean M. Low-carbohydrate diets for overweight and obesity: a systematic review of the systematic reviews. Obes Rev. 2018;19(12):1700–18.

Storman M, Storman D, Jasinska KW, Swierz MJ, Bala MM. The quality of systematic reviews/meta-analyses published in the field of bariatrics: a cross-sectional systematic survey using AMSTAR 2 and ROBIS. Obes Rev. 2020;21(5):1–11.

Franco JVA, Arancibia M, Meza N, Madrid E, Kopitowski K. [Clinical practice guidelines: concepts, limitations and challenges]. Medwave. 2020;20(3):e7887 ([Spanish]).

Brito JP, Tsapas A, Griebeler ML, Wang Z, Prutsky GJ, Domecq JP, et al. Systematic reviews supporting practice guideline recommendations lack protection against bias. J Clin Epidemiol. 2013;66(6):633–8.

Zhou Q, Wang Z, Shi Q, Zhao S, Xun Y, Liu H, et al. Clinical epidemiology in China series. Paper 4: the reporting and methodological quality of Chinese clinical practice guidelines published between 2014 and 2018: a systematic review. J Clin Epidemiol. 2021;140:189–99.

Lunny C, Ramasubbu C, Puil L, Liu T, Gerrish S, Salzwedel DM, et al. Over half of clinical practice guidelines use non-systematic methods to inform recommendations: a methods study. PLoS ONE. 2021;16(4):1–21.

Faber T, Ravaud P, Riveros C, Perrodeau E, Dechartres A. Meta-analyses including non-randomized studies of therapeutic interventions: a methodological review. BMC Med Res Methodol. 2016;16(1):1–26.

Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514.

Møller MH, Ioannidis JPA, Darmon M. Are systematic reviews and meta-analyses still useful research? We are not sure. Intensive Care Med. 2018;44(4):518–20.

Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt PMM, Korevaar DA, et al. Increasing value and reducing waste in biomedical research: who’s listening? Lancet. 2016;387(10027):1573–86.

Barnard ND, Willet WC, Ding EL. The misuse of meta-analysis in nutrition research. JAMA. 2017;318(15):1435–6.

Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction - GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.

Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13(5):1–31.

World Health Organization. WHO handbook for guideline development, 2nd edn. WHO; 2014. Available from: https://www.who.int/publications/i/item/9789241548960 . Cited 2022 Jan 20

Higgins J, Lasserson T, Chandler J, Tovey D, Thomas J, Flemying E, et al. Methodological expectations of Cochrane intervention reviews. Cochrane; 2022. Available from: https://community.cochrane.org/mecir-manual/key-points-and-introduction . Cited 2022 Jul 19

Cumpston M, Chandler J. Chapter II: Planning a Cochrane review. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook . Cited 2022 Jan 30

Henderson LK, Craig JC, Willis NS, Tovey D, Webster AC. How to write a Cochrane systematic review. Nephrology. 2010;15(6):617–24.

Page MJ, Altman DG, Shamseer L, McKenzie JE, Ahmadzai N, Wolfe D, et al. Reproducible research practices are underused in systematic reviews of biomedical interventions. J Clin Epidemiol. 2018;94:8–18.

Lorenz RC, Matthias K, Pieper D, Wegewitz U, Morche J, Nocon M, et al. AMSTAR 2 overall confidence rating: lacking discriminating capacity or requirement of high methodological quality? J Clin Epidemiol. 2020;119:142–4.

Posadzki P, Pieper D, Bajpai R, Makaruk H, Könsgen N, Neuhaus AL, et al. Exercise/physical activity and health outcomes: an overview of Cochrane systematic reviews. BMC Public Health. 2020;20(1):1–12.

Wells G, Shea B, O’Connell D, Peterson J, Welch V, Losos M. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomized studies in meta-analyses. The Ottawa Hospital; 2009. Available from: https://www.ohri.ca/programs/clinical_epidemiology/oxford.asp . Cited 2022 Jul 19

Stang A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur J Epidemiol. 2010;25(9):603–5.

Stang A, Jonas S, Poole C. Case study in major quotation errors: a critical commentary on the Newcastle-Ottawa scale. Eur J Epidemiol. 2018;33(11):1025–31.

Ioannidis JPA. Massive citations to misleading methods and research tools: Matthew effect, quotation error and citation copying. Eur J Epidemiol. 2018;33(11):1021–3.

Khalil H, Ameen D, Zarnegar A. Tools to support the automation of systematic reviews: a scoping review. J Clin Epidemiol. 2022;144:22–42.

Crequit P, Boutron I, Meerpohl J, Williams H, Craig J, Ravaud P. Future of evidence ecosystem series: 2. Current opportunities and need for better tools and methods. J Clin Epidemiol. 2020;123:143–52.

Shemilt I, Noel-Storr A, Thomas J, Featherstone R, Mavergames C. Machine learning reduced workload for the Cochrane COVID-19 study register: development and evaluation of the Cochrane COVID-19 study classifier. Syst Rev. 2022;11(1):15.

Nguyen P-Y, Kanukula R, McKenzie J, Alqaidoom Z, Brennan SE, Haddaway N, et al. Changing patterns in reporting and sharing of review data in systematic reviews with meta-analysis of the effects of interventions: a meta-research study. medRxiv; 2022 Available from: https://doi.org/10.1101/2022.04.11.22273688v3 . Cited 2022 Nov 18

Afshari A, Møller MH. Broken science and the failure of academics—resignation or reaction? Acta Anaesthesiol Scand. 2018;62(8):1038–40.

Butler E, Granholm A, Aneman A. Trustworthy systematic reviews–can journals do more? Acta Anaesthesiol Scand. 2019;63(4):558–9.

Negrini S, Côté P, Kiekens C. Methodological quality of systematic reviews on interventions for children with cerebral palsy: the evidence pyramid paradox. Dev Med Child Neurol. 2021;63(11):1244–5.

Page MJ, Moher D. Mass production of systematic reviews and meta-analyses: an exercise in mega-silliness? Milbank Q. 2016;94(3):515–9.

Clarke M, Chalmers I. Reflections on the history of systematic reviews. BMJ Evid Based Med. 2018;23(4):121–2.

Alnemer A, Khalid M, Alhuzaim W, Alnemer A, Ahmed B, Alharbi B, et al. Are health-related tweets evidence based? Review and analysis of health-related tweets on twitter. J Med Internet Res. 2015;17(10): e246.

Haber N, Smith ER, Moscoe E, Andrews K, Audy R, Bell W, et al. Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): a systematic review. PLoS ONE. 2018;13(5): e196346.

Swetland SB, Rothrock AN, Andris H, Davis B, Nguyen L, Davis P, et al. Accuracy of health-related information regarding COVID-19 on Twitter during a global pandemic. World Med Heal Policy. 2021;13(3):503–17.

Nascimento DP, Almeida MO, Scola LFC, Vanin AA, Oliveira LA, Costa LCM, et al. Letter to the editor – not even the top general medical journals are free of spin: a wake-up call based on an overview of reviews. J Clin Epidemiol. 2021;139:232–4.

Ioannidis JPA, Fanelli D, Dunne DD, Goodman SN. Meta-research: evaluation and improvement of research methods and practices. PLoS Biol. 2015;13(10):1–7.

Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol. 2018;18(1):1–9.

Pollock M, Fernandez R, Becker LA, Pieper D, Hartling L. Chapter V: overviews of reviews. Cochrane handbook for systematic reviews of interventions. In:  Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane; 2022. Available from: https://training.cochrane.org/handbook/current/chapter-v . Cited 2022 Mar 7

Tricco AC, Lillie E, Zarin W, O’Brien K, Colquhoun H, Kastner M, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16(1):1–10.

Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, et al. Cochrane rapid reviews methods group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13–22.

Elliott JH, Synnot A, Turner T, Simmonds M, Akl EA, McDonald S, et al. Living systematic review: 1. Introduction—the why, what, when, and how. J Clin Epidemiol. 2017;91:23–30.

Higgins JPT, Thomas J, Chandler J. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook . Cited 2022 Jan 25

Aromataris E, Munn Z. JBI Manual for Evidence Synthesis [internet]. JBI; 2020 [cited 2022 Jan 15]. Available from: https://synthesismanual.jbi.global .

Tufanaru C, Munn Z, Aromartaris E, Campbell J, Hopp L. Chapter 3: Systematic reviews of effectiveness. In Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. JBI; 2020 [cited 2022 Jan 25]. Available from: https://synthesismanual.jbi.global .

Leeflang MMG, Davenport C, Bossuyt PM. Defining the review question. In: Deeks JJ, Bossuyt PM, Leeflang MMG, Takwoingi Y, editors. Cochrane handbook for systematic reviews of diagnostic test accuracy [internet]. Cochrane; 2022 [cited 2022 Mar 30]. Available from: https://training.cochrane.org/6-defining-review-question .

Noyes J, Booth A, Cargo M, Flemming K, Harden A, Harris J, et al. Qualitative evidence. In: Higgins J, Tomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions [internet]. Cochrane; 2022 [cited 2022 Mar 30]. Available from: https://training.cochrane.org/handbook/current/chapter-21#section-21-5 .

Lockwood C, Porritt K, Munn Z, Rittenmeyer L, Salmond S, Bjerrum M, et al. Chapter 2: Systematic reviews of qualitative evidence. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. JBI; 2020 [cited 2022 Jul 11]. Available from: https://synthesismanual.jbi.global .

Debray TPA, Damen JAAG, Snell KIE, Ensor J, Hooft L, Reitsma JB, et al. A guide to systematic review and meta-analysis of prediction model performance. BMJ. 2017;356:i6460.

Moola S, Munn Z, Tufanaru C, Aromartaris E, Sears K, Sfetcu R, et al. Systematic reviews of etiology and risk. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. JBI; 2020 [cited 2022 Mar 30]. Available from: https://synthesismanual.jbi.global/ .

Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res. 2010;19(4):539–49.

Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57.

Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Chapter 5: Systematic reviews of prevalence and incidence. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. JBI; 2020 [cited 2022 Mar 30]. Available from: https://synthesismanual.jbi.global/ .

Centre for Evidence-Based Medicine. Study designs. CEBM; 2016. Available from: https://www.cebm.ox.ac.uk/resources/ebm-tools/study-designs . Cited 2022 Aug 30

Hartling L, Bond K, Santaguida PL, Viswanathan M, Dryden DM. Testing a tool for the classification of study designs in systematic reviews of interventions and exposures showed moderate reliability and low accuracy. J Clin Epidemiol. 2011;64(8):861–71.

Crowe M, Sheppard L, Campbell A. Reliability analysis for a proposed critical appraisal tool demonstrated value for diverse research designs. J Clin Epidemiol. 2012;65(4):375–83.

Reeves BC, Wells GA, Waddington H. Quasi-experimental study designs series—paper 5: a checklist for classifying studies evaluating the effects on health interventions—a taxonomy without labels. J Clin Epidemiol. 2017;89:30–42.

Reeves BC, Deeks JJ, Higgins JPT, Shea B, Tugwell P, Wells GA. Chapter 24: including non-randomized studies on intervention effects.  In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook/current/chapter-24 . Cited 2022 Mar 1

Reeves B. A framework for classifying study designs to evaluate health care interventions. Forsch Komplementarmed Kl Naturheilkd. 2004;11(Suppl 1):13–7.

Rockers PC, Røttingen J, Shemilt I. Inclusion of quasi-experimental studies in systematic reviews of health systems research. Health Policy. 2015;119(4):511–21.

Mathes T, Pieper D. Clarifying the distinction between case series and cohort studies in systematic reviews of comparative studies: potential impact on body of evidence and workload. BMC Med Res Methodol. 2017;17(1):8–13.

Jhangiani R, Cuttler C, Leighton D. Single subject research. In: Jhangiani R, Cuttler C, Leighton D, editors. Research methods in psychology, 4th edn. Pressbooks KPU; 2019. Available from: https://kpu.pressbooks.pub/psychmethods4e/part/single-subject-research/ . Cited 2022 Aug 15

Higgins JP, Ramsay C, Reeves BC, Deeks JJ, Shea B, Valentine JC, et al. Issues relating to study design and risk of bias when including non-randomized studies in systematic reviews on the effects of interventions. Res Synth Methods. 2013;4(1):12–25.

Cumpston M, Lasserson T, Chandler J, Page M. 3.4.1 Criteria for considering studies for this review, Chapter III: Reporting the review. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook/current/chapter-iii#section-iii-3-4-1 . Cited 2022 Oct 12

Kooistra B, Dijkman B, Einhorn TA, Bhandari M. How to design a good case series. J Bone Jt Surg. 2009;91(Suppl 3):21–6.

Murad MH, Sultan S, Haffar S, Bazerbachi F. Methodological quality and synthesis of case series and case reports. Evid Based Med. 2018;23(2):60–3.

Robinson K, Chou R, Berkman N, Newberry S, FU R, Hartling L, et al. Methods guide for comparative effectiveness reviews integrating bodies of evidence: existing systematic reviews and primary studies. AHRQ; 2015. Available from: https://archive.org/details/integrating-evidence-report-150226 . Cited 2022 Aug 7

Tugwell P, Welch VA, Karunananthan S, Maxwell LJ, Akl EA, Avey MT, et al. When to replicate systematic reviews of interventions: consensus checklist. BMJ. 2020;370: m2864.

Tsertsvadze A, Maglione M, Chou R, Garritty C, Coleman C, Lux L, et al. Updating comparative effectiveness reviews: current efforts in AHRQ’s effective health care program. J Clin Epidemiol. 2011;64(11):1208–15.

Cumpston M, Chandler J. Chapter IV: Updating a review. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook . Cited 2022 Aug 2

Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. A decision tool to help researchers make decisions about including systematic reviews in overviews of reviews of healthcare interventions. Syst Rev. 2019;8(1):1–8.

Pussegoda K, Turner L, Garritty C, Mayhew A, Skidmore B, Stevens A, et al. Identifying approaches for assessing methodological and reporting quality of systematic reviews: a descriptive study. Syst Rev. 2017;6(1):1–12.

Bhaumik S. Use of evidence for clinical practice guideline development. Trop Parasitol. 2017;7(2):65–71.

Moher D, Eastwood S, Olkin I, Drummond R, Stroup D. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Lancet. 1999;354:1896–900.

Stroup D, Berlin J, Morton S, Olkin I, Williamson G, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA. 2000;283(15):2008–12.

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009;62(10):1006–12.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372: n71.

Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44(11):1271–8.

Centre for Evidence-Based Medicine. Critical appraisal tools. CEBM; 2015. Available from: https://www.cebm.ox.ac.uk/resources/ebm-tools/critical-appraisal-tools . Cited 2022 Apr 10

Page MJ, McKenzie JE, Higgins JPT. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review. BMJ Open. 2018;8(3):1–16.

Ma LL, Wang YY, Yang ZH, Huang D, Weng H, Zeng XT. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better? Mil Med Res. 2020;7(1):1–11.

Banzi R, Cinquini M, Gonzalez-Lorenzo M, Pecoraro V, Capobussi M, Minozzi S. Quality assessment versus risk of bias in systematic reviews: AMSTAR and ROBIS had similar reliability but differed in their construct and applicability. J Clin Epidemiol. 2018;99:24–32.

Swierz MJ, Storman D, Zajac J, Koperny M, Weglarz P, Staskiewicz W, et al. Similarities, reliability and gaps in assessing the quality of conduct of systematic reviews using AMSTAR-2 and ROBIS: systematic survey of nutrition reviews. BMC Med Res Methodol. 2021;21(1):1–10.

Pieper D, Puljak L, González-Lorenzo M, Minozzi S. Minor differences were found between AMSTAR 2 and ROBIS in the assessment of systematic reviews including both randomized and nonrandomized studies. J Clin Epidemiol. 2019;108:26–33.

Lorenz RC, Matthias K, Pieper D, Wegewitz U, Morche J, Nocon M, et al. A psychometric study found AMSTAR 2 to be a valid and moderately reliable appraisal tool. J Clin Epidemiol. 2019;114:133–40.

Leclercq V, Hiligsmann M, Parisi G, Beaudart C, Tirelli E, Bruyère O. Best-worst scaling identified adequate statistical methods and literature search as the most important items of AMSTAR2 (A measurement tool to assess systematic reviews). J Clin Epidemiol. 2020;128:74–82.

Bühn S, Mathes T, Prengel P, Wegewitz U, Ostermann T, Robens S, et al. The risk of bias in systematic reviews tool showed fair reliability and good construct validity. J Clin Epidemiol. 2017;91:121–8.

Gates M, Gates A, Duarte G, Cary M, Becker M, Prediger B, et al. Quality and risk of bias appraisals of systematic reviews are inconsistent across reviewers and centers. J Clin Epidemiol. 2020;125:9–15.

Perry R, Whitmarsh A, Leach V, Davies P. A comparison of two assessment tools used in overviews of systematic reviews: ROBIS versus AMSTAR-2. Syst Rev. 2021;10(1):273.

Gates M, Gates A, Guitard S, Pollock M, Hartling L. Guidance for overviews of reviews continues to accumulate, but important challenges remain: a scoping review. Syst Rev. 2020;9(1):1–19.

Aromataris E, Fernandez R, Godfrey C, Holly C, Khalil H, Tungpunkom P. Chapter 10: umbrella reviews. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. Available from: https://synthesismanual.jbi.global . Cited 2022 Jul 11

Pieper D, Lorenz RC, Rombey T, Jacobs A, Rissling O, Freitag S, et al. Authors should clearly report how they derived the overall rating when applying AMSTAR 2—a cross-sectional study. J Clin Epidemiol. 2021;129:97–103.

Franco JVA, Meza N. Authors should also report the support for judgment when applying AMSTAR 2. J Clin Epidemiol. 2021;138:240.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7): e1000100.

Page MJ, Moher D. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement and extensions: a scoping review. Syst Rev. 2017;6(1):263.

Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372: n160.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. Updating guidance for reporting systematic reviews: development of the PRISMA 2020 statement. J Clin Epidemiol. 2021;134:103–12.

Welch V, Petticrew M, Petkovic J, Moher D, Waters E, White H, et al. Extending the PRISMA statement to equity-focused systematic reviews (PRISMA-E 2012): explanation and elaboration. J Clin Epidemiol. 2016;70:68–89.

Beller EM, Glasziou PP, Altman DG, Hopewell S, Bastian H, Chalmers I, et al. PRISMA for abstracts: reporting systematic reviews in journal and conference abstracts. PLoS Med. 2013;10(4): e1001419.

Moher D, Shamseer L, Clarke M. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777–84.

Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, et al. Preferred reporting items for a systematic review and meta-analysis of individual participant data: The PRISMA-IPD statement. JAMA. 2015;313(16):1657–65.

Zorzela L, Loke YK, Ioannidis JP, Golder S, Santaguida P, Altman DG, et al. PRISMA harms checklist: Improving harms reporting in systematic reviews. BMJ. 2016;352: i157.

McInnes MDF, Moher D, Thombs BD, McGrath TA, Bossuyt PM, Clifford T, et al. Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy studies The PRISMA-DTA statement. JAMA. 2018;319(4):388–96.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Wang X, Chen Y, Liu Y, Yao L, Estill J, Bian Z, et al. Reporting items for systematic reviews and meta-analyses of acupuncture: the PRISMA for acupuncture checklist. BMC Complement Altern Med. 2019;19(1):1–10.

Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, et al. PRISMA-S: An extension to the PRISMA statement for reporting literature searches in systematic reviews. J Med Libr Assoc. 2021;109(2):174–200.

Blanco D, Altman D, Moher D, Boutron I, Kirkham JJ, Cobo E. Scoping review on interventions to improve adherence to reporting guidelines in health research. BMJ Open. 2019;9(5): e26589.

Koster TM, Wetterslev J, Gluud C, Keus F, van der Horst ICC. Systematic overview and critical appraisal of meta-analyses of interventions in intensive care medicine. Acta Anaesthesiol Scand. 2018;62(8):1041–9.

Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51.

Pollock A, Berge E. How to do a systematic review. Int J Stroke. 2018;13(2):138–56.

Gagnier JJ, Kellam PJ. Reporting and methodological quality of systematic reviews in the orthopaedic literature. J Bone Jt Surg. 2013;95(11):1–7.

Martinez-Monedero R, Danielian A, Angajala V, Dinalo JE, Kezirian EJ. Methodological quality of systematic reviews and meta-analyses published in high-impact otolaryngology journals. Otolaryngol Head Neck Surg. 2020;163(5):892–905.

Boutron I, Crequit P, Williams H, Meerpohl J, Craig J, Ravaud P. Future of evidence ecosystem series 1. Introduction-evidence synthesis ecosystem needs dramatic change. J Clin Epidemiol. 2020;123:135–42.

Ioannidis JPA, Bhattacharya S, Evers JLH, Der Veen F, Van SE, Barratt CLR, et al. Protect us from poor-quality medical research. Hum Reprod. 2018;33(5):770–6.

Lasserson T, Thomas J, Higgins J. Section 1.5 Protocol development, Chapter 1: Starting a review. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook/archive/v6/chapter-01#section-1-5 . Cited 2022 Mar 20

Stewart L, Moher D, Shekelle P. Why prospective registration of systematic reviews makes sense. Syst Rev. 2012;1(1):7–10.

Allers K, Hoffmann F, Mathes T, Pieper D. Systematic reviews with published protocols compared to those without: more effort, older search. J Clin Epidemiol. 2018;95:102–10.

Ge L, Tian J, Li Y, Pan J, Li G, Wei D, et al. Association between prospective registration and overall reporting and methodological quality of systematic reviews: a meta-epidemiological study. J Clin Epidemiol. 2018;93:45–55.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;350: g7647.

Pieper D, Rombey T. Where to prospectively register a systematic review. Syst Rev. 2022;11(1):8.

PROSPERO. PROSPERO will require earlier registration. NIHR; 2022. Available from: https://www.crd.york.ac.uk/prospero/ . Cited 2022 Mar 20

Kirkham JJ, Altman DG, Williamson PR. Bias due to changes in specified outcomes during the systematic review process. PLoS ONE. 2010;5(3):3–7.

Victora CG, Habicht JP, Bryce J. Evidence-based public health: moving beyond randomized trials. Am J Public Health. 2004;94(3):400–5.

Peinemann F, Kleijnen J. Development of an algorithm to provide awareness in choosing study designs for inclusion in systematic reviews of healthcare interventions: a method study. BMJ Open. 2015;5(8): e007540.

Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350: h2147.

Junqueira DR, Phillips R, Zorzela L, Golder S, Loke Y, Moher D, et al. Time to improve the reporting of harms in randomized controlled trials. J Clin Epidemiol. 2021;136:216–20.

Hemkens LG, Contopoulos-Ioannidis DG, Ioannidis JPA. Routinely collected data and comparative effectiveness evidence: promises and limitations. CMAJ. 2016;188(8):E158–64.

Murad MH. Clinical practice guidelines: a primer on development and dissemination. Mayo Clin Proc. 2017;92(3):423–33.

Abdelhamid AS, Loke YK, Parekh-Bhurke S, Chen Y-F, Sutton A, Eastwood A, et al. Use of indirect comparison methods in systematic reviews: a survey of cochrane review authors. Res Synth Methods. 2012;3(2):71–9.

Jüni P, Holenstein F, Sterne J, Bartlett C, Egger M. Direction and impact of language bias in meta-analyses of controlled trials: empirical study. Int J Epidemiol. 2002;31(1):115–23.

Vickers A, Goyal N, Harland R, Rees R. Do certain countries produce only positive results? A systematic review of controlled trials. Control Clin Trials. 1998;19(2):159–66.

Jones CW, Keil LG, Weaver MA, Platts-Mills TF. Clinical trials registries are under-utilized in the conduct of systematic reviews: a cross-sectional analysis. Syst Rev. 2014;3(1):1–7.

Baudard M, Yavchitz A, Ravaud P, Perrodeau E, Boutron I. Impact of searching clinical trial registries in systematic reviews of pharmaceutical treatments: methodological systematic review and reanalysis of meta-analyses. BMJ. 2017;356: j448.

Fanelli D, Costas R, Ioannidis JPA. Meta-assessment of bias in science. Proc Natl Acad Sci USA. 2017;114(14):3714–9.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. BMC Med Res Methodol. 2017;17(1):64.

Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007;2:MR000010.

Shojania K, Sampson M, Ansari MT, Ji J, Garritty C, Radar T, et al. Updating systematic reviews. AHRQ Technical Reviews. 2007: Report 07–0087.

Tate RL, Perdices M, Rosenkoetter U, Wakim D, Godbee K, Togher L, et al. Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: The 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychol Rehabil. 2013;23(5):619–38.

Tate RL, Perdices M, McDonald S, Togher L, Rosenkoetter U. The design, conduct and report of single-case research: Resources to improve the quality of the neurorehabilitation literature. Neuropsychol Rehabil. 2014;24(3–4):315–31.

Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366: l4894.

Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355: i4919.

Igelström E, Campbell M, Craig P, Katikireddi SV. Cochrane’s risk of bias tool for non-randomized studies (ROBINS-I) is frequently misapplied: a methodological systematic review. J Clin Epidemiol. 2021;140:22–32.

McKenzie JE, Brennan SE. Chapter 12: Synthesizing and presenting findings using other methods. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook/current/chapter-12 . Cited 2022 Apr 10

Ioannidis J, Patsopoulos N, Rothstein H. Reasons or excuses for avoiding meta-analysis in forest plots. BMJ. 2008;336(7658):1413–5.

Stewart LA, Tierney JF. To IPD or not to IPD? Eval Health Prof. 2002;25(1):76–97.

Tierney JF, Stewart LA, Clarke M. Chapter 26: Individual participant data. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook/current/chapter-26 . Cited 2022 Oct 12

Chaimani A, Caldwell D, Li T, Higgins J, Salanti G. Chapter 11: Undertaking network meta-analyses. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook . Cited 2022 Oct 12.

Cooper H, Hedges L, Valentine J. The handbook of research synthesis and meta-analysis. 3rd ed. Russell Sage Foundation; 2019.

Sutton AJ, Abrams KR, Jones DR, Sheldon T, Song F. Methods for meta-analysis in medical research. Methods for meta-analysis in medical research; 2000.

Deeks J, Higgins JPT, Altman DG. Chapter 10: Analysing data and undertaking meta-analyses. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic review of interventions. Cochrane; 2022. Available from: http://www.training.cochrane.org/handbook . Cited 2022 Mar 20.

Clarke MJ. Individual patient data meta-analyses. Best Pract Res Clin Obstet Gynaecol. 2005;19(1):47–55.

Catalá-López F, Tobías A, Cameron C, Moher D, Hutton B. Network meta-analysis for comparing treatment effects of multiple interventions: an introduction. Rheumatol Int. 2014;34(11):1489–96.

Debray T, Schuit E, Efthimiou O, Reitsma J, Ioannidis J, Salanti G, et al. An overview of methods for network meta-analysis using individual participant data: when do benefits arise? Stat Methods Med Res. 2016;27(5):1351–64.

Tonin FS, Rotta I, Mendes AM, Pontarolo R. Network meta-analysis : a technique to gather evidence from direct and indirect comparisons. Pharm Pract (Granada). 2017;15(1):943.

Tierney JF, Vale C, Riley R, Smith CT, Stewart L, Clarke M, et al. Individual participant data (IPD) metaanalyses of randomised controlled trials: guidance on their use. PLoS Med. 2015;12(7): e1001855.

Rouse B, Chaimani A, Li T. Network meta-analysis: an introduction for clinicians. Intern Emerg Med. 2017;12(1):103–11.

Cochrane Training. Review Manager RevMan Web. Cochrane; 2022. Available from: https://training.cochrane.org/online-learning/core-software/revman . Cited 2022 Jun 24

MetaXL. MetalXL. Epi Gear; 2016. Available from: http://epigear.com/index_files/metaxl.html . Cited 2022 Jun 24.

JBI. JBI SUMARI. JBI; 2019. Available from: https://sumari.jbi.global/ . Cited 2022 Jun 24.

Ryan R. Cochrane Consumers and Communication Review Group: data synthesis and analysis. Cochrane Consumers and Communication Review Group; 2013. Available from: http://cccrg.cochrane.org . Cited 2022 Jun 24

McKenzie JE, Beller EM, Forbes AB. Introduction to systematic reviews and meta-analysis. Respirology. 2016;21(4):626–37.

Campbell M, Katikireddi SV, Sowden A, Thomson H. Lack of transparency in reporting narrative synthesis of quantitative data: a methodological assessment of systematic reviews. J Clin Epidemiol. 2019;105:1–9.

Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368: l6890.

McKenzie JE, Brennan S, Ryan R. Summarizing study characteristics and preparing for synthesis. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook . Cited 2022 Oct 12

AHRQ. Systems to rate the strength of scientific evidence. Evidence report/technology assessment no. 47. AHRQ; 2002. Available from: https://archive.ahrq.gov/clinic/epcsums/strengthsum.htm . Cited 2022 Apr 10.

Atkins D, Eccles M, Flottorp S, Guyatt GH, Henry D, Hill S, et al. Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. BMC Health Serv Res. 2004;4(1):38.

Ioannidis JPA. Meta-research: the art of getting it wrong.  Res Synth Methods. 2010;1(3–4):169–84.

Lai NM, Teng CL, Lee ML. Interpreting systematic reviews:  are we ready to make our own conclusions? A cross sectional study. BMC Med. 2011;9(1):30.

Glenton C, Santesso N, Rosenbaum S, Nilsen ES, Rader T, Ciapponi A, et al. Presenting the results of Cochrane systematic reviews to a consumer audience: a qualitative study. Med Decis Making. 2010;30(5):566–77.

Yavchitz A, Ravaud P, Altman DG, Moher D, HrobjartssonA, Lasserson T, et al. A new classification of spin in systematic reviews and meta-analyses was developed and ranked according to the severity. J Clin Epidemiol. 2016;75:56–65.

Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ. 2004;328:7454.

GRADE Working Group. Organizations. GRADE; 2022 [cited 2023 May 2].  Available from: www.gradeworkinggroup.org .

Hartling L, Fernandes RM, Seida J, Vandermeer B, Dryden DM. From the trenches: a cross-sectional study applying the grade tool in systematic reviews of healthcare interventions.  PLoS One. 2012;7(4):e34697.

Hultcrantz M, Rind D, Akl EA, Treweek S, Mustafa RA, Iorio A, et al. The GRADE working group clarifies the construct of certainty of evidence. J Clin Epidemiol. 2017;87:4–13.

Schünemann H, Brozek J, Guyatt G, Oxman AD, Editors. Section 6.3.2. Symbolic representation. GRADE Handbook [internet].  GRADE; 2013 [cited 2022 Jan 27]. Available from: https://gdt.gradepro.org/app/handbook/handbook.html#h.lr8e9vq954 .

Siemieniuk R, Guyatt G What is GRADE? [internet] BMJ Best Practice; 2017 [cited 2022 Jul 20]. Available from: https://bestpractice.bmj.com/info/toolkit/learn-ebm/what-is-grade/ .

Guyatt G, Oxman AD, Sultan S, Brozek J, Glasziou P, Alonso-Coello P, et al. GRADE guidelines: 11. Making an overall rating of confidence in effect estimates for a single outcome and for all outcomes. J Clin Epidemiol. 2013;66(2):151–7.

Guyatt GH, Oxman AD, Sultan S, Glasziou P, Akl EA, Alonso-Coello P, et al. GRADE guidelines: 9. Rating up the quality of evidence. J Clin Epidemiol. 2011;64(12):1311–6.

Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, et al. GRADE guidelines: 4. Rating the quality of evidence - Study limitations (risk of bias). J Clin Epidemiol. 2011;64(4):407–15.

Guyatt GH, Oxman AD, Kunz R, Brozek J, Alonso-Coello P, Rind D, et al. GRADE guidelines 6. Rating the quality of evidence - Imprecision. J Clin Epidemiol. 2011;64(12):1283–93.

Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, et al. GRADE guidelines: 7. Rating the quality of evidence - Inconsistency. J Clin Epidemiol. 2011;64(12):1294–302.

Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, et al. GRADE guidelines: 8. Rating the quality of evidence - Indirectness. J Clin Epidemiol. 2011;64(12):1303–10.

Guyatt GH, Oxman AD, Montori V, Vist G, Kunz R, Brozek J, et al. GRADE guidelines: 5. Rating the quality of evidence - Publication bias. J Clin Epidemiol. 2011;64(12):1277–82.

Andrews JC, Schünemann HJ, Oxman AD, Pottie K, Meerpohl JJ, Coello PA, et al. GRADE guidelines: 15. Going from evidence to recommendation - Determinants of a recommendation’s direction and strength. J Clin Epidemiol. 2013;66(7):726–35.

Fleming PS, Koletsi D, Ioannidis JPA, Pandis N. High quality of the evidence for medical and other health-related interventions was uncommon in Cochrane systematic reviews. J Clin Epidemiol. 2016;78:34–42.

Howick J, Koletsi D, Pandis N, Fleming PS, Loef M, Walach H, et al. The quality of evidence for medical interventions does not improve or worsen: a metaepidemiological study of Cochrane reviews. J Clin Epidemiol. 2020;126:154–9.

Mustafa RA, Santesso N, Brozek J, Akl EA, Walter SD, Norman G, et al. The GRADE approach is reproducible in assessing the quality of evidence of quantitative evidence syntheses. J Clin Epidemiol. 2013;66(7):736-742.e5.

Schünemann H, Brozek J, Guyatt G, Oxman A, editors. Section 5.4: Overall quality of evidence. GRADE Handbook. GRADE; 2013. Available from: https://gdt.gradepro.org/app/handbook/handbook.html#h.lr8e9vq954a . Cited 2022 Mar 25.

GRADE Working Group. Criteria for using GRADE. GRADE; 2016. Available from: https://www.gradeworkinggroup.org/docs/Criteria_for_using_GRADE_2016-04-05.pdf . Cited 2022 Jan 26

Werner SS, Binder N, Toews I, Schünemann HJ, Meerpohl JJ, Schwingshackl L. Use of GRADE in evidence syntheses published in high-impact-factor nutrition journals: a methodological survey. J Clin Epidemiol. 2021;135:54–69.

Zhang S, Wu QJ, Liu SX. A methodologic survey on use of the GRADE approach in evidence syntheses published in high-impact factor urology and nephrology journals. BMC Med Res Methodol. 2022;22(1):220.

Li L, Tian J, Tian H, Sun R, Liu Y, Yang K. Quality and transparency of overviews of systematic reviews. J Evid Based Med. 2012;5(3):166–73.

Pieper D, Buechter R, Jerinic P, Eikermann M. Overviews of reviews often have limited rigor: a systematic review. J Clin Epidemiol. 2012;65(12):1267–73.

Cochrane Editorial Unit. Appendix 1: Checklist for auditing GRADE and SoF tables in protocols of intervention reviews. Cochrane Training; 2022. Available from: https://training.cochrane.org/gomo/modules/522/resources/8307/Checklist for GRADE and SoF methods in Protocols for Gomo.pdf. Cited 2022 Mar 12

Ryan R, Hill S. How to GRADE the quality of the evidence. Cochrane Consumers and Communication Group. Cochrane; 2016. Available from: https://cccrg.cochrane.org/author-resources .

Cunningham M, France EF, Ring N, Uny I, Duncan EA, Roberts RJ, et al. Developing a reporting guideline to improve meta-ethnography in health research: the eMERGe mixed-methods study. Heal Serv Deliv Res. 2019;7(4):1–116.

Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181.

Gates M, Gates G, Pieper D, Fernandes R, Tricco A, Moher D, et al. Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement. BMJ. 2022;378:e070849.

Whiting PF, Reitsma JB, Leeflang MMG, Sterne JAC, Bossuyt PMM, Rutjes AWSS, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(4):529–36.

Hayden JA, van der Windt DA, Cartwright JL, Co P. Research and reporting methods assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.

Critical Appraisal Skills Programme. CASP qualitative checklist. CASP; 2018. Available from: https://casp-uk.net/images/checklist/documents/CASP-Qualitative-Studies-Checklist/CASP-Qualitative-Checklist-2018_fillable_form.pdf . Cited 2022 Apr 26

Hannes K, Lockwood C, Pearson A. A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qual Health Res. 2010;20(12):1736–43.

Munn Z, Moola S, Riitano D, Lisy K. The development of a critical appraisal tool for use in systematic reviews addressing questions of prevalence. Int J Heal Policy Manag. 2014;3(3):123–8.

Lewin S, Bohren M, Rashidian A, Munthe-Kaas H, Glenton C, Colvin CJ, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings-paper 2: how to make an overall CERQual assessment of confidence and create a Summary of Qualitative Findings table. Implement Sci. 2018;13(suppl 1):10.

Munn Z, Porritt K, Lockwood C, Aromataris E, Pearson A.  Establishing confidence in the output of qualitative research synthesis: the ConQual approach. BMC Med Res Methodol. 2014;14(1):108.

Flemming K, Booth A, Hannes K, Cargo M, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series—paper 6: reporting guidelines for qualitative, implementation, and process evaluation evidence syntheses. J Clin Epidemiol. 2018;97:79–85.

Lockwood C, Munn Z, Porritt K. Qualitative research synthesis:  methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based Health. 2015;13(3):179–87.

Schünemann HJ, Mustafa RA, Brozek J, Steingart KR, Leeflang M, Murad MH, et al. GRADE guidelines: 21 part 1.  Study design, risk of bias, and indirectness in rating the certainty across a body of evidence for test accuracy. J Clin Epidemiol. 2020;122:129–41.

Schünemann HJ, Mustafa RA, Brozek J, Steingart KR, Leeflang M, Murad MH, et al. GRADE guidelines: 21 part 2. Test accuracy: inconsistency, imprecision, publication bias, and other domains for rating the certainty of evidence and presenting it in evidence profiles and summary of findings tables. J Clin Epidemiol. 2020;122:142–52.

Foroutan F, Guyatt G, Zuk V, Vandvik PO, Alba AC, Mustafa R, et al. GRADE Guidelines 28: use of GRADE for the assessment of evidence about prognostic factors:  rating certainty in identification of groups of patients with different absolute risks. J Clin Epidemiol. 2020;121:62–70.

Janiaud P, Agarwal A, Belbasis L, Tzoulaki I. An umbrella review of umbrella reviews for non-randomized observational evidence on putative risk and protective factors [internet]. OSF protocol; 2021 [cited 2022 May 28]. Available from: https://osf.io/xj5cf/ .

Mokkink LB, Prinsen CA, Patrick DL, Alonso J, Bouter LM, et al. COSMIN methodology for systematic reviews of Patient-Reported Outcome Measures (PROMs) - user manual. COSMIN; 2018 [cited 2022 Feb 15]. Available from:  http://www.cosmin.nl/ .

Thomas J, M P, Noyes J, Chandler J, Rehfuess E, Tugwell P, et al. Chapter 17: Intervention complexity. In: Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al., editors. Cochrane handbook for systematic reviews of interventions. Cochrane; 2022. Available from: https://training.cochrane.org/handbook/current/chapter-17 . Cited 2022 Oct 12

Guise JM, Chang C, Butler M, Viswanathan M, Tugwell P. AHRQ series on complex intervention systematic reviews—paper 1: an introduction to a series of articles that provide guidance and tools for reviews of complex interventions. J Clin Epidemiol. 2017;90:6–10.

Riaz IB, He H, Ryu AJ, Siddiqi R, Naqvi SAA, Yao Y, et al. A living, interactive systematic review and network meta-analysis of first-line treatment of metastatic renal cell carcinoma [formula presented]. Eur Urol. 2021;80(6):712–23.

Créquit P, Trinquart L, Ravaud P. Live cumulative network meta-analysis: protocol for second-line treatments in advanced non-small-cell lung cancer with wild-type or unknown status for epidermal growth factor receptor. BMJ Open. 2016;6(8):e011841.

Ravaud P, Créquit P, Williams HC, Meerpohl J, Craig JC, Boutron I. Future of evidence ecosystem series: 3. From an evidence synthesis ecosystem to an evidence ecosystem. J Clin Epidemiol. 2020;123:153–61.

Download references

Acknowledgements

Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Author information

Authors and affiliations

Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC, USA

Kat Kolaski

Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA

Lynne Romeiser Logan

Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA, USA

John P. A. Ioannidis


Contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Kat Kolaski .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article has been published simultaneously in BMC Systematic Reviews, Acta Anaesthesiologica Scandinavica, BMC Infectious Diseases, British Journal of Pharmacology, JBI Evidence Synthesis, the Journal of Bone and Joint Surgery Reviews, and the Journal of Pediatric Rehabilitation Medicine.

Supplementary Information

Additional file 2A.

Overviews, scoping reviews, rapid reviews and living reviews.

Additional file 2B.

Practical scheme for distinguishing types of research evidence.

Additional file 4.

Presentation of forest plots.

Additional file 5A.

Illustrations of the GRADE approach.

Additional file 5B.

Adaptations of GRADE for evidence syntheses.

Additional file 6.

Links to Concise Guide online resources.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Kolaski, K., Logan, L.R. & Ioannidis, J.P.A. Guidance to best tools and practices for systematic reviews. Syst Rev 12, 96 (2023). https://doi.org/10.1186/s13643-023-02255-9


Received: 03 October 2022

Accepted: 19 February 2023

Published: 08 June 2023

DOI: https://doi.org/10.1186/s13643-023-02255-9


  • Certainty of evidence
  • Critical appraisal
  • Methodological quality
  • Risk of bias
  • Systematic review

Systematic Reviews

ISSN: 2046-4053


A systematic literature review of the clinical and socioeconomic burden of bronchiectasis

  • For correspondence: [email protected]
  • ORCID record for Marcus A. Mall
  • ORCID record for Michal Shteinberg
  • ORCID record for Sanjay H. Chotirmall

Background The overall burden of bronchiectasis on patients and healthcare systems has not been comprehensively described. Here, we present the findings of a systematic literature review that assessed the clinical and socioeconomic burden of bronchiectasis with subanalyses by aetiology (PROSPERO registration: CRD42023404162).

Methods Embase, MEDLINE and the Cochrane Library were searched for publications relating to bronchiectasis disease burden (December 2017–December 2022). Journal articles and congress abstracts reporting on observational studies, randomised controlled trials and registry studies were included. Editorials, narrative reviews and systematic literature reviews were included to identify primary studies. PRISMA guidelines were followed.

Results 1585 unique publications were identified, of which 587 full texts were screened and 149 were included. A further 189 citations were included from the reference lists of editorials and reviews, resulting in 338 total publications. Commonly reported symptoms and complications included dyspnoea, cough, wheezing, sputum production, haemoptysis and exacerbations. Disease severity across several indices and increased mortality compared with the general population were reported. Bronchiectasis impacted quality of life across several patient-reported outcomes, with patients experiencing fatigue, anxiety and depression. Healthcare resource utilisation was considerable, and substantial medical costs related to hospitalisations, treatments and emergency department and outpatient visits were accrued. Indirect costs included sick pay and lost income.
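The screening counts reported above can be cross-checked with a quick tally. This is a sketch only: the figures are taken directly from the abstract, and the dictionary keys are illustrative names, not from the study.

```python
# Tally of the screening flow reported in the Results (PRISMA-style).
# Figures are from the abstract; the key names are illustrative.
flow = {
    "records_identified": 1585,
    "full_texts_screened": 587,
    "included_from_screening": 149,
    "included_from_reference_lists": 189,
}

total_included = (flow["included_from_screening"]
                  + flow["included_from_reference_lists"])
assert total_included == 338  # matches the 338 total publications reported

# Roughly a quarter of the screened full texts were ultimately included.
inclusion_rate = flow["included_from_screening"] / flow["full_texts_screened"]
print(f"{total_included} publications included; "
      f"full-text inclusion rate of {inclusion_rate:.0%}")
```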

Conclusions Bronchiectasis causes significant clinical and socioeconomic burden. Disease-modifying therapies that reduce symptoms, improve quality of life and reduce both healthcare resource utilisation and overall costs are needed. Further systematic analyses of specific aetiologies and paediatric disease may provide more insight into unmet therapeutic needs.

  • Shareable abstract

Bronchiectasis imposes a significant clinical and socioeconomic burden on patients, their families and employers, and on healthcare systems. Therapies that reduce symptoms, improve quality of life and reduce resource use and overall costs are needed. https://bit.ly/4bPCHlp

  • Introduction

Bronchiectasis is a heterogeneous chronic respiratory disease clinically characterised by chronic cough, excessive sputum production and recurrent pulmonary exacerbations [ 1 ], and radiologically characterised by the abnormal widening of the bronchi [ 2 ]. Bronchiectasis is associated with several genetic, autoimmune, airway and infectious disorders [ 3 ]. Regardless of the underlying cause, the defining features of bronchiectasis are chronic airway inflammation and infection, regionally impaired mucociliary clearance, mucus hypersecretion and mucus obstruction, as well as progressive structural lung damage [ 4 , 5 ]. These features perpetuate one another in a “vicious vortex” leading to a decline in lung function, pulmonary exacerbations and associated morbidity, mortality and worsened quality of life [ 4 , 5 ]. Bronchiectasis can be further categorised into several infective and inflammatory endotypes and is associated with multiple comorbidities and underlying aetiologies [ 6 ].

Bronchiectasis has been described as an emerging global epidemic [ 7 ], with prevalence and incidence rates increasing worldwide [ 8 – 12 ]. The prevalence of bronchiectasis, as well as of the individual aetiologies, varies widely across geographic regions [ 13 ]. In Europe, the reported prevalence ranges from 39.1 (females) and 33.3 (males) cases per 100 000 inhabitants in Spain, and 68 (females) and 65 (males) cases per 100 000 inhabitants in Germany, to as high as 566 (females) and 486 (males) cases per 100 000 inhabitants in the UK [ 10 – 12 ]. In the US, the average overall prevalence was reported to be 139 cases per 100 000 [ 14 ], in Israel 234 cases per 100 000 [ 15 ] and in China 174 cases per 100 000 [ 8 ]. Studies show that bronchiectasis prevalence increases with age [ 14 ], which may amplify the socioeconomic impact of bronchiectasis in countries with a disproportionately higher number of older citizens. Large registry studies in patients with bronchiectasis have been published from the US (Bronchiectasis Research Registry) [ 16 ], Europe and Israel (European Multicentre Bronchiectasis Audit and Research Collaboration (EMBARC), the largest and most comprehensive report available to date) [ 17 ], India (EMBARC-India) [ 18 , 19 ], Korea (Korean Multicentre Bronchiectasis Audit and Research Collaboration) [ 20 ] and Australia (Australian Bronchiectasis Registry) [ 21 ].

Although there are currently no approved disease-modifying therapies for bronchiectasis [ 4 ], comprehensive clinical care recommendations for the management of patients with bronchiectasis have been published [ 22 , 23 ]. However, the burden that bronchiectasis imposes on patients and their families, as well as on healthcare systems, payers and employers, remains poorly understood. No review to date has used a systematic method to evaluate the overall disease burden of bronchiectasis. This is the first systematic literature review aimed at investigating and synthesising the clinical and socioeconomic burden of bronchiectasis. A better understanding of the overarching burden of bronchiectasis, both overall and by individual aetiologies and associated diseases, will highlight the need for new therapies and assist healthcare systems in planning care and required resources.

  • Methods

The protocol of this systematic review was registered on PROSPERO (reference number: CRD42023404162).

Search strategy

This systematic literature review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 24 ]. Embase, MEDLINE and the Cochrane Library were searched for studies related to the clinical and socioeconomic burden of bronchiectasis (noncystic fibrosis bronchiectasis (NCFBE) and cystic fibrosis bronchiectasis (CFBE)) using the search terms available in supplementary table S1 . Articles written in English and published over a 5-year period (December 2017–December 2022) were included.

Selection criteria

The following article types reporting on prospective and retrospective observational studies, registry studies and randomised controlled trials (only baseline data extracted) were included: journal articles, preprints, research letters, conference proceedings, conference papers, conference abstracts, meeting abstracts and meeting posters. Reviews, literature reviews, systematic reviews and meta-analyses, as well as editorials, commentaries, letters and letters to the editor, were included for the purpose of identifying primary studies. A manual search of references cited in selected articles was performed and references were only included if they were published within the 5 years prior to the primary article being published.

Screening and data extraction

A reviewer screened all titles and abstracts to identify publications for full-text review. These publications then underwent full-text screening by the same reviewer for potential inclusion. A second reviewer independently verified the results of both the title/abstract screen and the full-text screen. Any discrepancies were resolved by a third independent reviewer. Data relating to aetiology, symptoms, disease severity, exacerbations, lung function, infection, comorbidities, patient-reported outcomes (PROs), exercise capacity, mortality, impact on family and caregivers, healthcare resource utilisation (HCRU), treatment burden, medical costs, and indirect impacts and costs, as well as data relating to the patient population, study design, sample size and country/countries of origin, were extracted from the final set of publications into a standardised Excel spreadsheet by one reviewer. Studies were grouped based on the burden measure, and aggregate data (range of reported values) were summarised in table or figure format. For the economic burden section, costs extracted from studies reporting in currencies other than the euro were converted to euros based on the average exchange rate for the year in which the study was conducted.
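
The currency conversion step described above can be sketched as follows. This is a minimal illustration only; the rate table and values below are hypothetical placeholders, not the exchange rates used in the review.

```python
# Convert study costs to euros using the average exchange rate for the
# year in which each study was conducted (illustrative approach only).
AVG_RATE_TO_EUR = {
    # Hypothetical example rates: 1 unit of source currency -> EUR.
    ("USD", 2020): 0.88,
    ("GBP", 2021): 1.16,
}

def to_euros(amount: float, currency: str, study_year: int) -> float:
    """Return the cost in euros; euro amounts pass through unchanged."""
    if currency == "EUR":
        return amount
    return amount * AVG_RATE_TO_EUR[(currency, study_year)]

print(round(to_euros(1000.0, "USD", 2020), 2))  # 880.0
```

Using a single average annual rate (rather than the rate on a study's publication date) keeps the converted figures comparable across studies conducted in the same year.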

Data from patients with specific bronchiectasis aetiologies and in children (age limits varied from study to study and included upper age limits of 15, 18, 19 and 20 years) were reported separately, where available. As literature relating to NCFBE and CFBE is generally distinct, any data related to CFBE are reported separately in the tables and text. We conducted subanalyses of key disease burden indicators, in which we extracted data from multicentre studies or those with a sample size >1000 subjects, to try to identify estimates from the most representative datasets. These data from larger and multicentre studies are reported in square brackets in tables 1 – 3 and supplementary tables S2–S7 , where available.

TABLE 1 Prevalence and severity of bronchiectasis symptoms overall, in children, during exacerbations and in individual bronchiectasis aetiologies

TABLE 2 Patient-reported outcome scores in patients with bronchiectasis overall and in individual bronchiectasis aetiologies

TABLE 3 Healthcare resource utilisation (HCRU) in patients with bronchiectasis overall and in individual bronchiectasis aetiologies

Given the nature of the data included in this systematic literature review (that is, a broad range of patient clinical and socioeconomic characteristics rather than the outcome(s) of an intervention), in addition to the broad range of study types included, meta-analyses to statistically combine data of similar studies were not deemed appropriate and therefore not performed.

  • Results

Summary of included studies

A total of 1834 citations were retrieved from the Embase, MEDLINE and Cochrane Library databases, of which 1585 unique citations were identified. Abstract/title screening led to the inclusion of 587 citations for full-text screening. Following full-text screening, 149 primary citations and 110 literature reviews, systematic reviews and meta-analyses as well as editorials and letters to the editor remained. From the reference lists of these 110 citations, a further 189 primary citations were identified. These articles were only included if 1) the primary articles contained data relating to the burden of bronchiectasis and 2) the primary articles were published within the 5 years prior to the original article's publication date. In total, 338 publications were considered eligible and included in this review ( supplementary figure S1 ). This included 279 journal articles, 46 congress abstracts and 13 letters to the editor or scientific/research letters. The results are summarised in the sections below. For the results from individual studies, including a description of the patient population, study design, sample size and country/countries of origin, please see the supplemental Excel file .
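
The screening counts reported above are internally consistent; as a quick arithmetic check:

```python
# Tally of the PRISMA-style screening flow described in the text.
retrieved = 1834                # citations retrieved from the three databases
unique = 1585                   # after deduplication
full_text_screened = 587        # passed title/abstract screening
primary_from_screen = 149       # primary citations retained after full-text screen
from_reference_lists = 189      # primary citations added from 110 reviews/editorials

total_included = primary_from_screen + from_reference_lists
assert total_included == 338    # matches the 338 eligible publications reported

by_type = {"journal articles": 279, "congress abstracts": 46, "letters": 13}
assert sum(by_type.values()) == 338  # publication types also sum to 338

print(total_included)  # 338
```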

The most frequently reported aetiologies included post-infectious, genetic (primary ciliary dyskinesia (PCD), alpha-1 antitrypsin deficiency (AATD) and cystic fibrosis (CF)), airway diseases (COPD and asthma), allergic bronchopulmonary aspergillosis (ABPA), aspiration and reflux-related, immunodeficiency and autoimmune aetiologies ( supplementary figure S2 ). However, in up to 80.7% of adult cases and 53.3% of paediatric cases, the aetiology was not determined (referred to as “idiopathic bronchiectasis”) ( supplementary figure S2 ). When limited to larger or multicentre studies, the frequency of idiopathic bronchiectasis ranged from 11.5 to 66.0% in adults and from 16.5 to 29.4% in children. Further details and additional aetiologies can be seen in the supplemental Excel file .

Clinical burden

Symptom burden and severity

Commonly reported symptoms in patients with bronchiectasis included cough, sputum production, dyspnoea, wheezing and haemoptysis, with these symptoms more prevalent in adults compared with children ( table 1 ). Other reported symptoms included chest discomfort, pain or tightness (both generally and during an exacerbation), fever and weight loss in both adults and children, and fatigue, tiredness or asthenia, appetite loss, and sweating in adults. In children, respiratory distress, hypoxia during an exacerbation, sneezing, nasal and ear discharge, thriving poorly including poor growth and weight loss, exercise intolerance, malaise, night sweats, abdominal pain, recurrent vomiting, and diarrhoea were reported ( supplemental Excel file ). Classic bronchiectasis symptoms such as sputum production (range of patients reporting sputum production across all studies: 22.0–92.7%) and cough (range of patients reporting cough across all studies: 24.0–98.5%) were not universally reported ( table 1 ).

In a study comparing bronchiectasis (excluding CFBE) in different age groups (younger adults (18–65 years), older adults (66–75 years) and elderly adults (≥76 years) [ 63 ]), no significant differences across age groups were reported for the presence of cough (younger adults: 73.9%; older adults: 72.8%; elderly adults: 72.9%; p=0.90), sputum production (younger adults: 57.8%; older adults: 63.8%; elderly adults: 6.0%; p=0.16) or haemoptysis (younger adults: 16.5%; older adults: 19.3%; elderly adults: 16.3%; p=0.47).

Disease severity

Disease severity was reported according to several measures including the bronchiectasis severity index (BSI), the forced expiratory volume in 1 s (FEV 1 ), Age, Chronic Colonisation, Extension, Dyspnoea (FACED) score and the Exacerbations-FACED (E-FACED) score, all of which are known to be associated with future exacerbations, hospitalisations and mortality ( supplementary table S2 and the supplemental Excel file ). Up to 78.7, 41.8 and 40.8% of patients with bronchiectasis reported severe disease according to the BSI, FACED score and E-FACED score, respectively ( supplementary table S2 ). In most studies, severity scores were greater among people with bronchiectasis secondary to COPD or post-tuberculosis (TB) than idiopathic bronchiectasis ( supplementary table S2 ). No data relating to disease severity were reported for CFBE specifically.

Exacerbations

The number of exacerbations experienced by patients with bronchiectasis in the previous year, per year and during follow-up are presented in figure 1 . For further details, please see the supplemental Excel file . Two studies reported exacerbation length in patients with bronchiectasis; this ranged from 11 to 16 days (both small studies; sample sizes of 191 and 32, respectively) [ 25 , 64 ]. A study in children with NCFBE reported a median of one exacerbation in the previous year. Additionally, the same study reported that 31.1% of children with bronchiectasis experienced ≥3 exacerbations per year [ 65 ].


FIGURE 1 Range of bronchiectasis exacerbations in the previous year, per year and in the first and second years of follow-up. # : Two studies reported significant differences in the number of exacerbations experienced in the previous year across individual aetiologies. Study 1 [ 90 ]: patients with idiopathic bronchiectasis had significantly fewer exacerbations in the previous year compared with other aetiologies (primary ciliary dyskinesia (PCD), COPD and post-infectious) (p<0.021). Study 2 [ 33 ]: significant difference between post-tuberculosis (TB) bronchiectasis (mean: 2.8) and other aetiologies excluding idiopathic bronchiectasis (mean: 1.7) (p<0.05).

Lung function

Reduced lung function was reported across several different measures in adults and children with bronchiectasis overall, including FEV 1 (absolute values and % predicted), forced vital capacity (FVC; absolute values and % pred) and lung clearance index (adults only) ( supplementary table S3 and the supplemental Excel file ). In most studies, lung function was lowest among people with post-TB bronchiectasis and bronchiectasis secondary to COPD or PCD ( supplementary table S2 ). Additional measures of lung function are detailed in the supplemental Excel file . Lung clearance index, considered more sensitive than spirometry to early airway damage, was elevated in two studies in adults with bronchiectasis, with a range of 9.0–12.8 (normal: 6–7 or less) [ 66 , 67 ].

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups, elderly adults (≥76 years) had significantly lower FEV 1 % pred (median: 67) compared with both younger (18–65 years; median: 78) and older adults (66–75 years; median: 75) (p<0.017 for both comparisons) [ 63 ]. FVC % pred was found to be significantly lower in elderly adults (mean: 65) compared with both younger adults (median: 78) and older adults (median: 75) (p<0.017 for both comparisons) [ 63 ].

Infection

Chronic infection with at least one pathogen was reported in 22.3–79.6% of patients with bronchiectasis, although each study defined chronic infection differently (number of studies: 20). When limited to larger or multicentre studies, chronic infection with at least one pathogen was reported in 10.7–54.5% of patients with bronchiectasis (number of studies: 12). In two studies in NCFBE, significant differences in the proportion of patients chronically infected with at least one pathogen were reported across aetiologies (p<0.001 for both studies) [ 68 , 69 ]. Patients with post-infectious (other than TB) bronchiectasis (34.9%) [ 68 ] and patients with PCD-related bronchiectasis (68.3%) [ 69 ] had the highest prevalence of chronic infection.

The most commonly reported bacterial and fungal pathogens are shown in supplementary table S4 . The two most common bacterial pathogens were Pseudomonas ( P .) aeruginosa and Haemophilus ( H. ) influenzae . In several studies, more patients with PCD, TB and COPD as the aetiology of their bronchiectasis reported infection with P. aeruginosa . Additionally, in one study, significantly more children with CFBE had P. aeruginosa infection compared with children with NCFBE [ 70 ]. Further details and additional pathogens are reported in the supplemental Excel file .

Diversity of the sputum microbiome was assessed in two studies. In the first study in people with bronchiectasis (people with CFBE excluded), reduced microbiome alpha diversity (the diversity of microbial species within a sample), particularly associated with Pseudomonas or Proteobacteria dominance, was associated with greater disease severity, increased frequency and severity of exacerbations, and a higher risk of mortality [ 71 ]. In the second study (unknown whether people with CFBE were excluded), a lower Shannon–Wiener diversity index (a measure of species diversity, with lower scores indicating lower diversity) score was associated with multiple markers of disease severity, including a higher BSI score (p=0.0003) and more frequent exacerbations (p=0.008) [ 72 ].
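
For reference, the Shannon–Wiener index is H' = −Σ pᵢ ln pᵢ, where pᵢ is the relative abundance of taxon i in the sample. A minimal sketch with illustrative abundances (not study data) shows how dominance by a single taxon lowers the index:

```python
import math

def shannon_wiener(counts):
    """Shannon–Wiener diversity index H' = -sum(p_i * ln(p_i))."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A Pseudomonas-dominated community (illustrative) versus an even community.
dominated = [90, 5, 3, 2]
even = [25, 25, 25, 25]
print(shannon_wiener(dominated) < shannon_wiener(even))  # True
```

An even community of n taxa attains the maximum value ln(n), which is why dominance by Pseudomonas or Proteobacteria, as described above, corresponds to lower scores.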

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups (younger adults: 18–65 years; older adults: 66–75 years; elderly adults: ≥76 years) [ 63 ], chronic infection with H. influenzae was reported in 18.3% of younger adults, 12.8% of older adults and 8.8% of elderly adults, and chronic infection with Streptococcus ( Str. ) pneumoniae was reported in 5.3% of younger adults, 2.8% of older adults and 1.3% of elderly adults. For both of the above, the prevalence was significantly higher in younger adults compared with elderly adults (p<0.017 for both comparisons). However, no significant differences across age groups were reported for P. aeruginosa , Moraxella catarrhalis or Staphylococcus ( Sta .) aureus chronic infection.

P. aeruginosa infection was significantly associated with reduced FEV 1 [ 73 ], more severe disease [ 74 ], more frequent exacerbations [ 35 , 49 , 75 , 76 ], increased hospital admissions, reduced quality of life based on St. George's Respiratory Questionnaire (SGRQ) and increased 4-year mortality [ 49 , 76 ]. Additionally, in a study reporting healthcare use and costs in the US between 2007 and 2013, healthcare costs and hospitalisation costs were found to be increased in patients infected with P. aeruginosa ($56 499 and $41 972 more than patients not infected with P. aeruginosa , respectively) [ 77 ]. In the same study, HCRU was also higher in patients infected with P. aeruginosa (fivefold increase in the number of hospitalisations and 84% more emergency department (ED) visits compared with patients not infected with P. aeruginosa ) [ 77 ].

Comorbidities

The most frequently reported comorbidities included cardiovascular (including heart failure, cerebrovascular disease and hypertension), respiratory (including asthma, COPD and sinusitis), metabolic (including diabetes and dyslipidaemia), malignancy (including haematological and solid malignancies), bone and joint-related (including osteoporosis and rheumatological disease), neurological (including anxiety and depression), renal, hepatic, and gastrointestinal comorbidities ( supplementary table S5 ). No data relating to comorbidities were reported for CFBE specifically. For further details and additional comorbidities, please see the supplemental Excel file .

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups (younger adults: 18–65 years; older adults: 66–75 years; elderly adults: ≥76 years), younger adults had a significantly lower prevalence of diabetes compared with older adults, a significantly lower prevalence of stroke compared with elderly adults and a significantly lower prevalence of heart failure, solid tumours and renal failure compared with both older and elderly adults (p<0.0017 for all comparisons). Additionally, the prevalence of COPD was significantly lower in both younger and older adults compared with elderly adults (p<0.017) [ 63 ]. In studies reporting in children with bronchiectasis, the prevalence of comorbid asthma ranged from 22.2 to 25.8% [ 65 , 78 ] and the prevalence of sinusitis was reported to be 12.7% in a single study [ 79 ].

Charlson comorbidity index (CCI)

CCI scores can range from 0 to 37, with higher scores indicating a decreased estimate of 10-year survival. In this review, CCI scores ranged from 0.7 to 6.6 in studies reporting means (number of studies: 7). In one study, adults with bronchiectasis (people with CFBE excluded) who experienced ≥2 exacerbations per year were found to have significantly higher CCI scores (3.3) compared with patients who experienced fewer than two exacerbations per year (2.2) (p=0.001) [ 35 ]. In another study in adults with bronchiectasis (people with CFBE excluded), CCI scores increased significantly with increasing disease severity, with patients with mild (FACED score of 0–2), moderate (FACED score of 3–4) and severe (FACED score of 5–7) bronchiectasis reporting mean CCI scores of 3.9, 5.7 and 6.3, respectively [ 80 ]. No CCI scores were reported for CFBE specifically.

Prevalence of comorbidities in patients with bronchiectasis compared with control individuals

Several studies reported a higher prevalence of cardiovascular comorbidities, such as heart failure [ 81 ], stroke [ 82 , 83 ] and hypertension [ 82 – 84 ], in patients with bronchiectasis compared with a matched general population or healthy controls. Conversely, several additional studies reported no significant differences [ 81 , 85 , 86 ]. Two large studies reported an increased prevalence of diabetes in patients with bronchiectasis compared with nonbronchiectasis control groups [ 83 , 84 ]; however, three additional smaller studies reported no significant differences [ 81 , 82 , 86 ]. The prevalence of gastro-oesophageal reflux disease was found to be significantly higher in patients with bronchiectasis compared with matched nonbronchiectasis controls in one study [ 87 ], but no significant difference was reported in a second study [ 85 ]. Both anxiety and depression were found to be significantly more prevalent in patients with bronchiectasis compared with matched healthy controls in one study [ 55 ]. Lastly, two large studies reported an increased prevalence of asthma [ 84 , 87 ] and five studies reported a significantly higher prevalence of COPD [ 81 , 82 , 84 , 85 , 87 ] in patients with bronchiectasis compared with matched nonbronchiectasis controls or the general population. A smaller study reported conflicting evidence whereby no significant difference in the prevalence of asthma in patients with bronchiectasis compared with matched controls was reported [ 85 ].

Socioeconomic burden

Patient-reported outcomes

Health-related quality of life (HRQoL), fatigue, anxiety and depression were reported across several PRO measures and domains. The most frequently reported PROs are discussed in further detail in the sections below ( table 2 ). Further details and additional PROs can be seen in the supplemental Excel file .

In a study comparing bronchiectasis (people with CFBE excluded) in different age groups (younger adults: 18–65 years; older adults: 66–75 years; elderly adults: ≥76 years), the median SGRQ total score was significantly higher in elderly adults (50.8) compared with younger adults (36.1), indicating a higher degree of limitation (p=0.017) [ 63 ].

In a study that reported Leicester Cough Questionnaire (LCQ) scores in men and women with bronchiectasis (people with CFBE excluded) separately, women had significantly lower LCQ total scores (14.9) when compared with men (17.5) (p=0.006), indicating worse quality of life [ 88 ]. Additionally, women had significantly lower scores across all three LCQ domains (p=0.014, p=0.005 and p=0.011 for physical, psychological and social domains, respectively) [ 88 ].

Exercise capacity

Exercise capacity in patients with bronchiectasis was reported using walking tests, namely the 6-minute walk test (6MWT) and the incremental shuttle walk test (ISWT) ( supplementary table S6 ). The 6MWT data from patients with bronchiectasis generally fell within the normal range for healthy people; however, the ISWT data were below the normal range for healthy people ( supplementary table S6 ). Studies also reported on daily physical activity, daily sedentary time and number of steps per day in patients with bronchiectasis, and in children specifically ( supplementary table S6 ). No data relating to exercise capacity were reported for CFBE specifically. Further details can be seen in the supplemental Excel file .

Exercise capacity in patients with bronchiectasis compared with control individuals

In one study, the ISWT distance was reported to be significantly lower in patients with NCFBE compared with healthy controls (592.6 m versus 882.9 m; difference of ∼290 m; p<0.001) [ 89 ]. Additionally, patients with bronchiectasis spent significantly less time on activities of moderate and vigorous intensity compared with healthy controls (p=0.030 and 0.044, respectively) [ 89 ]. Lastly, a study reported that patients with NCFBE had a significantly lower step count per day compared with healthy controls (p<0.001) [ 89 ].

Mortality rate during study period

Mortality ranged from 0.24 to 67.6%; however, it should be noted that the study duration differed across studies. When limited to larger or multicentre studies, the mortality rate ranged from 0.24 to 28.1%. One study reported more deaths in patients with NCFBE (9.1%; 5.9-year mean follow-up period) compared with patients without bronchiectasis (0.8%; 5.4-year mean follow-up period) [ 84 ]. In one study, significantly more patients with COPD-related bronchiectasis died (37.5%) compared with other aetiologies (19.0%) (3.4-year mean follow-up period; p<0.001). After adjusting for several factors, multivariate analysis showed that the diagnosis of COPD as the primary cause of bronchiectasis increased the risk of death 1.77-fold compared with patients with other aetiologies [ 41 ]. Similarly, in another study, COPD-associated bronchiectasis was associated with higher mortality (55%) in multivariate analysis as compared with other aetiologies (rheumatic disease: 20%; post-infectious: 16%; idiopathic: 14%; ABPA: 13%; immunodeficiency: 11%) (hazard ratio 2.12, 95% CI 1.04–4.30; p=0.038; 5.2-year median follow-up period) [ 90 ].

Mortality rates by year

The 1-, 2-, 3-, 4- and 5-year mortality rates in patients with bronchiectasis (people with CFBE excluded, unless unspecified) ranged from 0.0 to 12.3%, 0.0 to 13.0%, 0.0 to 21.0%, 5.5 to 39.1% and 12.4 to 53.0%, respectively (number of studies: 9, 4, 7, 1 and 4, respectively). When limited to larger or multicentre studies, the 1-, 2-, 3- and 5-year mortality rates ranged from 0.4 to 7.9%, 3.9 to 13.0%, 3.7 to 21.0% and 12.4 to 53.0%, respectively (no 4-year mortality data from larger or multicentre studies). No data relating to mortality rates were reported for CFBE specifically.

Two studies reported mortality rate by bronchiectasis aetiology (people with CFBE excluded). In the first study, no significant difference in the 4-year mortality rate was reported across aetiologies (p=0.7; inflammatory bowel disease: 14.3%; post-TB: 13.4%; rheumatoid arthritis: 11.4%; idiopathic or post-infectious: 10.1%; ABPA: 6.1%; other aetiologies: 6.1%) [ 49 ]. In the second study, patients with post-TB bronchiectasis had a significantly higher 5-year mortality rate (30.0%) compared with patients with idiopathic bronchiectasis (18.0%) and other aetiologies (10.0%) (p<0.05 for both comparisons) [ 32 ].

In-hospital and intensive care unit mortality

In-hospital mortality ranged from 2.9 to 59.3% in patients with bronchiectasis (people with CFBE excluded, unless unspecified) hospitalised for an exacerbation or for other reasons (number of studies: 7). When limited to larger or multicentre studies, in-hospital mortality rate was reported in only one study (33.0%). One study reported mortality in bronchiectasis patients admitted to a tertiary care centre according to aetiology; in-hospital mortality was highest in patients with post-pneumonia bronchiectasis (15.8%), followed by patients with idiopathic (7.1%) and post-TB (2.6%) bronchiectasis. No deaths were reported in patients with COPD, ABPA or PCD aetiologies [ 42 ]. Intensive care unit mortality was reported in two studies and ranged from 24.6 to 36.1% [ 62 , 91 ]. No data relating to mortality rates were reported for CFBE specifically.

Impact on family and caregivers

Only two studies discussed the impact that having a child with bronchiectasis has on parents/caregivers. In the first study, parents of children with bronchiectasis (not specified whether children with CFBE were excluded) were more anxious and more depressed according to both the Hospital Anxiety and Depression Scale (HADS) and the Centre of Epidemiological Studies depression scale, compared with parents of children without any respiratory conditions (both p<0.001; sample size of 29 participants) [ 53 ]. In the second study, parents or carers of children with bronchiectasis (multicentre study with a sample size of 141 participants; children with CFBE excluded) were asked to vote for their top five greatest concerns or worries; the most common worries or concerns that were voted for by over 15% of parents were “impact on his/her adult life in the future, long-term effects, normal life” (29.8%), “ongoing declining health” (25.5%), “the cough” (24.8%), “impact on his/her life now as a child (play, development)” (24.1%), “lack of sleep/being tired” (24.1%), “concerns over aspects of antibiotic use” (22.7%), “missing school or daycare” (17.7%) and “breathing difficulties/shortness of breath” (16.3%) [ 92 ].

Healthcare resource utilisation

HCRU in terms of hospitalisations, ED visits, outpatient visits and length of stay, overall and by bronchiectasis aetiology, is reported in table 3 . No data relating to HCRU were reported for CFBE specifically.

In a study in children with bronchiectasis (children with CFBE excluded), 30.0% of children were hospitalised at least once in the previous year [ 65 ]. The median number of hospitalisations per year was 0 (interquartile range: 0–1) [ 65 ]. In another study, the mean length of hospital stay for children with bronchiectasis was 6.7 days (standard deviation: 4.8 days) [ 93 ]. In a study comparing bronchiectasis (people with CFBE excluded) in different age groups, significantly more elderly adults (≥76 years; 26.0%) were hospitalised at least once during the first year of follow-up compared with younger adults (18–65 years; 17.0%) and older adults (66–75 years; 17.0%) (p<0.017 for both comparisons) [ 63 ]. Additionally, length of stay was found to be significantly longer in male patients (mean: 17.6 days) compared with female patients (mean: 12.5 days) (p=0.03) [ 94 ].

HCRU in patients with bronchiectasis compared with control individuals

Length of stay was found to be 38% higher in patients with bronchiectasis (mean: 15.4 days; people with CFBE excluded) compared with patients with any other respiratory illness (mean: 9.6 days) (p<0.001) [ 94 ]. In a study reporting on HCRU in patients with bronchiectasis (people with CFBE excluded) over a 3-year period (Germany; 2012–2015) [ 85 ], a mean of 24.7 outpatient appointments per patient was reported; there was no significant difference in the number of outpatient appointments between patients with bronchiectasis and matched controls (patients without bronchiectasis matched by age, sex, and distribution and level of comorbidities) (mean: 23.4) (p=0.12). When assessing specific outpatient appointments over the 3-year period, patients with bronchiectasis attended a mean of 9.2 general practitioner appointments, 2.9 radiology appointments, 2.5 chest physician appointments and 0.8 cardiologist appointments. Patients with bronchiectasis had significantly fewer general practitioner appointments compared with matched controls (mean: 9.8) (p=0.002); however, they had significantly more radiology appointments (mean for matched controls: 2.3) and chest physician appointments (mean for matched controls: 1.4) compared with matched controls (p<0.001 for both comparisons).

Hospital admission rates

In England, Wales and Northern Ireland, the crude hospital admission rate in 2013 was 88.4 (95% CI 74.0–105.6) per 100 000 person-years [ 91 ]. In New Zealand (2008–2013), the crude and adjusted hospital admission rates were 25.7 and 20.4 per 100 000 population, respectively [ 95 ]. Lastly, in Australia and New Zealand (2004–2008) the hospital admission rate ranged from 0.7 to 2.9 per person-year [ 96 ]. In all of the abovementioned studies, people with CFBE were excluded.

Treatment burden

In two studies, the percentage of patients with bronchiectasis receiving any respiratory medication at baseline ranged from 60.8% to 85.7% [ 97 , 98 ]. Additionally, in a study comparing healthcare costs in patients with bronchiectasis before and after confirmation of P. aeruginosa infection, a mean of 23.2 pharmacy visits was reported in the year preceding diagnosis; this increased significantly by 56.5% to 36.2 visits in the year post-diagnosis (p<0.0001) [ 99 ]. In another study, patients with bronchiectasis were prescribed a mean of 12 medications for bronchiectasis and other comorbidities [ 100 ]. In all of the abovementioned studies, people with CFBE were excluded. The most frequently reported respiratory treatments are summarised in supplementary table S7 . These included antibiotics (including macrolides), corticosteroids, bronchodilators, mucolytics and oxygen. No treatment data were reported for CFBE specifically. Other respiratory treatments included saline, anticholinergics and leukotriene receptor antagonists ( supplemental Excel file ).

In studies reporting on children with bronchiectasis, 23.9% of children were receiving any bronchodilator at baseline [ 101 ], 9.0–21.7% were receiving inhaled corticosteroids (ICS) at baseline [ 101 , 102 ], 4.3% were receiving oral corticosteroids at baseline [ 101 ] and 12.1% were receiving long-term oxygen therapy [ 103 ].

Medical and nonmedical indirect impacts and costs

Medical costs for bronchiectasis included overall costs, hospitalisation costs, ED visits and outpatient visit costs and costs of treatment; indirect impacts and costs included sick leave and sick pay, missed work and income loss for caregivers, and missed school or childcare for children ( table 4 and the supplemental Excel file ). People with CFBE were excluded from all of the studies in table 4 below. In studies reporting in currencies other than the €, costs were converted to € based on the average exchange rate for the year in which the study was conducted.
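The currency conversion described above can be sketched as follows. This is an illustrative sketch only: the `to_eur` helper and the exchange-rate values shown are hypothetical placeholders for the method (average exchange rate for the study year), not the actual rates used in the review.

```python
# Illustrative sketch of the cost-conversion step described above: reported
# costs are converted to EUR using the average exchange rate for the year in
# which the study was conducted. The rates below are hypothetical placeholder
# values, not the rates used by the authors.

AVG_RATE_TO_EUR = {
    ("USD", 2021): 0.846,  # assumed average USD->EUR rate for 2021 (placeholder)
    ("GBP", 2013): 1.177,  # assumed average GBP->EUR rate for 2013 (placeholder)
}

def to_eur(amount: float, currency: str, year: int) -> float:
    """Convert a reported cost to EUR using the study year's average rate."""
    if currency == "EUR":
        return amount  # already in the target currency
    rate = AVG_RATE_TO_EUR[(currency, year)]
    return round(amount * rate, 2)
```

Using the study year's average rate (rather than a single fixed rate) keeps converted costs comparable across studies conducted in different years.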

Bronchiectasis-related medical costs and indirect impacts and costs (individual studies)

  • Discussion

No review to date has systematically evaluated the overall disease burden of bronchiectasis. Here, we present the first systematic literature review that comprehensively describes the clinical and socioeconomic burden of bronchiectasis overall and across individual aetiologies and associated diseases. A total of 338 publications were included in the final analysis. Together, the results indicate that the burden of clinically significant bronchiectasis on patients and their families, as well as on healthcare systems, is substantial, highlighting the urgent need for new disease-modifying therapies for bronchiectasis.

Bronchiectasis is associated with genetic, autoimmune, airway and infectious disorders. However, in many patients with bronchiectasis, an underlying aetiology cannot be identified (idiopathic bronchiectasis) [ 1 , 3 , 4 ]. This is supported by the results of this systematic literature review, in which up to 80.7% of patients were reported to have idiopathic bronchiectasis. The results are in line with those reported in a systematic literature review of bronchiectasis aetiology conducted by Gao et al. [ 13 ] (studies from Asia, Europe, North and South America, Africa and Oceania included), in which an idiopathic aetiology was reported in approximately 45% of patients with bronchiectasis, with a range of 5–82%. The maximum of 80.7% of patients with idiopathic bronchiectasis identified by this systematic literature review is much higher than in the recent report on the disease characteristics of the EMBARC registry, where idiopathic bronchiectasis was the most common aetiology but was reported in only ∼38% of patients with bronchiectasis [ 17 ]. This highlights the importance of sample size and geographic variation (80.7% reported from a single-country study with a small sample size versus ∼38% reported from a continent-wide study with a large sample size). Nevertheless, identifying the underlying aetiology is a recommendation of bronchiectasis guidelines as this can considerably alter the clinical management and prognosis [ 23 , 110 ]. Specific therapeutic interventions may be required for specific aetiologies, such as ICS for people with asthma-related bronchiectasis, antifungal treatment for those with ABPA-associated bronchiectasis and immunoglobulin replacement therapy for those with common variable immunodeficiency-related bronchiectasis [ 23 , 111 ]. Indeed, an observational study has shown that identification of the underlying aetiology affected management in 37% of people with bronchiectasis [ 112 ]. Future studies to determine the impact of identifying the underlying aetiology on management and prognosis are needed to fully understand its importance.

Patients with bronchiectasis experienced a significant symptom burden, with dyspnoea, cough, wheezing, sputum production and haemoptysis reported most commonly. These symptoms were also reported in children with bronchiectasis, at slightly lower frequencies. Dealing with bronchiectasis symptoms is among the greatest concerns from a patient's perspective. In a study assessing the aspects of bronchiectasis that patients found most difficult to deal with, sputum, dyspnoea and cough were the first, fifth and sixth most common answers, respectively [ 113 ]. Some aetiologies were reported to have a higher prevalence of certain symptoms. For example, in single studies, patients with PCD-related bronchiectasis were found to have a significantly higher prevalence of cough and wheezing [ 39 ], patients with COPD-related bronchiectasis a significantly higher prevalence of sputum production [ 41 ], and patients with post-TB bronchiectasis a higher prevalence of haemoptysis [ 30 ], compared with other aetiologies. Together, these results highlight the need for novel treatments that reduce the symptom burden of bronchiectasis. They also highlight the importance of teaching patients to perform and adhere to regular nonpharmacological interventions, such as airway clearance using physiotherapy techniques, which have been shown to improve cough-related health status and chronic sputum production [ 110 ]. Future studies assessing when airway clearance techniques should be started, and which ones are most effective, are a research priority [ 113 ].

The burden of exacerbations in patients with bronchiectasis was high: up to 73.6% of patients experienced three or more exacerbations in the previous year, up to 55.6% per year and up to 32.4% in the first year of follow-up. Few studies reported significant differences between aetiologies. Importantly, exacerbations are the second most concerning aspect of bronchiectasis from the patient's perspective [ 113 ]. Patients with frequent exacerbations have more frequent hospitalisations and increased 5-year mortality [ 114 ], and exacerbations are also associated with poorer quality of life [ 114 , 115 ]. Therefore, prevention of exacerbations is of great importance in the management of bronchiectasis [ 116 ]. The exact cause of exacerbations in bronchiectasis (believed to be multifactorial) is not fully understood due to a lack of mechanistic studies [ 116 ]. Future studies into the causes of, and risk factors for, exacerbations [ 113 ] may lead to improvements in their prevention.

Many patients with bronchiectasis, including children, experienced chronic infections with bacterial pathogens such as P. aeruginosa , H. influenzae , Sta. aureus and Str. pneumoniae , as well as non-tuberculous mycobacteria. Importantly, P. aeruginosa infection was significantly associated with more severe disease, reduced lung function and quality of life, and increased exacerbations, hospital admissions, mortality, HCRU and healthcare costs. Given the clear and consistent association between P. aeruginosa and poor outcomes, patients with chronic P. aeruginosa colonisation should be considered at higher risk of bronchiectasis-related complications [ 110 ]. Additionally, regular sputum microbiology screening should be performed in people with clinically significant bronchiectasis to detect new isolation of P. aeruginosa [ 110 ], in which case patients should be offered eradication antibiotic treatment [ 23 ]. Eradication of P. aeruginosa is not only of clinical importance but also of economic importance due to the associated HCRU and healthcare costs. As such, a better understanding of the key factors leading to P. aeruginosa infection is a priority for future research [ 113 ].

Bronchiectasis markedly impacted HRQoL across several PROs, including the SGRQ, Quality of Life–Bronchiectasis score, LCQ, COPD Assessment Test and Bronchiectasis Health Questionnaire. In children with bronchiectasis, significantly lower quality of life (according to the Paediatric Quality of Life Inventory score) compared with age-matched controls was reported [ 53 ]. The majority of studies reporting HRQoL in individual aetiologies and associated diseases either reported on a single aetiology, did not perform any statistical analyses to compare aetiologies, or reported no significant differences across aetiologies. Patients also experienced mild-to-moderate anxiety and depression according to the HADS-Anxiety, HADS-Depression and 9-question Patient Health Questionnaire scores, with very limited data reported in individual aetiologies. When compared with healthy controls, anxiety and depression were found to be significantly more prevalent in patients with bronchiectasis [ 55 ]. Additionally, exercise capacity was reduced: patients with bronchiectasis were reported to spend significantly less time on activities of moderate and vigorous intensity and to have a significantly lower step count per day compared with healthy controls [ 89 ]. Improvements in anxiety, depression and exercise capacity are important priorities for people with bronchiectasis; in a study assessing the aspects of bronchiectasis that patients found most difficult to manage, “not feeling fit for daily activities”, anxiety and depression were the fourth, eighth and ninth most common answers, respectively [ 113 ].

The studies relating to HCRU and costs in this review were heterogeneous in terms of methodology, time period, country and currency, making them challenging to compare. Nevertheless, this study found that HCRU was substantial, with patients reporting a maximum of 1.3 hospitalisations, 1.3 ED visits and 21.0 outpatient visits per year. Length of stay was found to be significantly longer in patients with bronchiectasis compared with patients with any other respiratory illness in one study [ 91 ]. In another study, patients with bronchiectasis reported significantly more specialist appointments (radiologist appointments and chest physician appointments) compared with matched controls [ 85 ]. Patients with bronchiectasis also experienced a significant treatment burden, with up to 36.4%, 58.0% and 83.0% of patients receiving long-term inhaled antibiotics, oral antibiotics and macrolides, respectively; up to 80.4% receiving long-term ICS; and up to 61.7% and 81.4% receiving long-term long-acting muscarinic antagonists and long-acting beta agonists, respectively. Wide ranges of treatment use were reported in this study, which may reflect geographic variation in treatment patterns. Heterogeneous treatment patterns across Europe were observed in the EMBARC registry data, with generally higher medication use in the UK and Northern/Western Europe and lower medication use in Eastern Europe (inhaled antibiotics: 1.8–8.9%; macrolides: 0.9–24.4%; ICS: 37.2–58.5%; long-acting beta agonists: 42.7–52.8%; long-acting muscarinic antagonists: 26.5–29.8%) [ 17 ]. Similarly, data from the Indian bronchiectasis registry indicate that the treatment of bronchiectasis in India is also diverse [ 19 ]. Furthermore, in a comparison of the European and Indian registry data, both long-term oral and inhaled antibiotics were more commonly used in Europe than in India [ 19 ].

Costs varied widely across studies. However, patients, payers and healthcare systems generally accrued substantial medical costs due to hospitalisations, ED visits, outpatient visits, hospital-in-the-home and treatment-related costs. Other medical costs incurred included physiotherapy and outpatient remedies (including breathing or drainage techniques), outpatient medical aids (including nebulisers and respiration therapy equipment) and the cost of attending convalescence centres. Only one study compared the medical costs in patients with bronchiectasis and matched controls (age, sex and comorbidities), finding that patients with bronchiectasis had significantly higher total direct medical expenditure, hospitalisation costs, treatment costs for certain medications and costs associated with outpatient remedies and medical aids [ 85 ]. Bronchiectasis was also associated with indirect impacts and costs, including sick leave, sick pay and income lost due to absenteeism and missed work, and lost wages for caregivers of patients with bronchiectasis. Children with bronchiectasis also reported absenteeism from school or childcare.

Our findings regarding HCRU and costs in bronchiectasis are mirrored by a recent systematic literature review by Roberts et al. [ 117 ] estimating the annual economic burden of bronchiectasis in adults and children over the 2001–2022 time period. Roberts et al. [ 117 ] found that annual total healthcare costs per adult patient ranged from €3027 to €69 817 (costs were converted from USD to € based on the average exchange rate in 2021), predominantly driven by hospitalisation costs. Likewise, we report annual costs per patient ranging from €218 to €51 033, with annual hospital costs ranging from €1215 to €27 612 (adults and children included) ( table 4 ). Further, Roberts et al. [ 117 ] report a mean annual hospitalisation rate ranging from 0.11 to 2.9, which is similar to our finding of 0.03–1.3 hospitalisations per year ( table 3 ). With regard to outpatient visits, Roberts et al. [ 117 ] report a mean annual outpatient respiratory physician attendance ranging from 0.83 to 6.8 visits, whereas we report a maximum of 21 visits per year ( table 3 ); it should be noted, however, that our value is not restricted to visits to a respiratory physician. With regard to indirect annual costs per adult patient, Roberts et al. [ 117 ] report a loss of income because of illness of €1109–€2451 (costs were converted from USD to € based on the average exchange rate in 2021), whereas we report a figure of ∼€1410 ( table 4 ). Finally, the burden on children is similarly reported by us and by Roberts et al. [ 117 ], with children missing 12 days of school per year ( table 4 ).

Limitations of this review and the existing literature

Due to the nature of this systematic literature review, no formal statistical analyses or formal risk of bias assessments were performed.

Several limitations within the existing literature were identified. Firstly, the vast majority of studies reported on patients with NCFBE overall, with limited literature available on individual aetiologies and associated diseases. Furthermore, where such literature was available, it was limited to a handful of individual aetiologies and associated diseases, and in many of these studies no statistical analyses were performed to compare the different aetiologies and associated diseases. Additionally, the methods used to determine aetiologies within individual studies may have differed. Literature on NCFBE and CFBE has traditionally been very distinct; as such, most of the studies included in this review excluded people with CF. As the general term “CF lung disease” was not included in our search string in order to limit the number of hits, limited data on CFBE are included in this review. Bronchiectasis remains largely under-recognised and underdiagnosed, further limiting the availability of literature. There is a particular knowledge gap with respect to paediatric NCFBE; however, initiatives such as the Children's Bronchiectasis Education Advocacy and Research Network (Child-BEAR-Net) ( www.improvebe.org ) are aiming to create multinational registries for paediatric bronchiectasis.

The amount of literature available also varied across the individual burdens. While more literature was available on the clinical burden of bronchiectasis, economic data (related to both medical costs and indirect costs) and data on the impact of bronchiectasis on families and caregivers were limited. Additionally, cost comparisons across studies and populations were difficult due to differences in cost definitions, currencies and healthcare systems.

Sample sizes of the studies included in this systematic literature review varied greatly, with the majority of studies reporting on a small number of participants. Furthermore, many of the studies were single-centre, limiting the ability to generalise to the larger bronchiectasis population, and cross-sectional, limiting the ability to assess the clinical and socioeconomic burden of bronchiectasis over a patient's lifetime. There may also be potential sex/gender bias in reporting that has not been considered in this systematic literature review.

Finally, for many of the reported outcomes, data varied greatly across studies, with wide estimates for the frequency of different aetiologies and comorbidities as well as disease characteristics such as exacerbations and healthcare costs noted. This reflects the heterogeneity of both the study designs (including sample size and inclusion and exclusion criteria) and the study populations themselves. Additionally, the use of non-standardised terms across articles posed a limitation for data synthesis. Systematic collection of standardised data across multiple centres, with standardised inclusion and exclusion criteria such as that being applied in international registries, is likely to provide more accurate estimates than those derived from small single-centre studies.

  • Conclusions

Collectively, the evidence identified and presented in this systematic literature review shows that bronchiectasis imposes a significant clinical and socioeconomic burden on patients and their families and employers, as well as on healthcare systems. Disease-modifying therapies that reduce symptoms, improve quality of life, and reduce both HCRU and overall costs are urgently needed. Further systematic analyses of the disease burden of specific bronchiectasis aetiologies and associated diseases (particularly PCD-, COPD- and post-TB-associated bronchiectasis, which appear to impose a greater burden in some respects) and of paediatric bronchiectasis (the majority of data included in this study were obtained from adults) may provide more insight into the unmet therapeutic needs of these specific patient populations.

Questions for future research

Further research into the clinical and socioeconomic burden of bronchiectasis for individual aetiologies and associated diseases is required.

  • Supplementary material

Supplementary Material

Please note: supplementary material is not edited by the Editorial Office, and is uploaded as it has been supplied by the author.

Supplementary figures and tables ERR-0049-2024.SUPPLEMENT

Supplementary Excel file ERR-0049-2024.SUPPLEMENT

  • Acknowledgements

Laura Cottino, PhD, of Nucleus Global, provided writing, editorial support, and formatting assistance, which was contracted and funded by Boehringer Ingelheim.

Provenance: Submitted article, peer reviewed.

Conflict of interest: The authors meet criteria for authorship as recommended by the International Committee of Medical Journal Editors (ICMJE). J.D. Chalmers has received research grants from AstraZeneca, Boehringer Ingelheim, GlaxoSmithKline, Gilead Sciences, Grifols, Novartis, Insmed and Trudell, and received consultancy or speaker fees from Antabio, AstraZeneca, Boehringer Ingelheim, Chiesi, GlaxoSmithKline, Insmed, Janssen, Novartis, Pfizer, Trudell and Zambon. M.A. Mall reports research grants paid to their institution from the German Research Foundation (DFG), German Ministry for Education and Research (BMBF), German Innovation Fund, Vertex Pharmaceuticals and Boehringer Ingelheim; consultancy fees from AbbVie, Antabio, Arrowhead, Boehringer Ingelheim, Enterprise Therapeutics, Kither Biotec, Prieris, Recode, Santhera, Splisense and Vertex Pharmaceuticals; speaker fees from Vertex Pharmaceuticals; and travel support from Boehringer Ingelheim and Vertex Pharmaceuticals. M.A. Mall also reports advisory board participation for AbbVie, Antabio, Arrowhead, Boehringer Ingelheim, Enterprise Therapeutics, Kither Biotec, Pari and Vertex Pharmaceuticals and is a fellow of ERS (unpaid). P.J. McShane is an advisory board member for Boehringer Ingelheim's Airleaf trial and Insmed's Aspen trial. P.J. McShane is also a principal investigator for clinical trials with the following pharmaceutical companies: Insmed (Aspen, 416), Boehringer Ingelheim (Airleaf), Paratek (oral omadacycline), AN2 Therapeutics (epetraborole), Renovian (ARINA-1), Redhill, Spero and Armata. K.G. Nielsen reports advisory board membership for Boehringer Ingelheim. M. Shteinberg reports having received research grants from Novartis, Trudell Pharma and GlaxoSmithKline; travel grants from Novartis, Actelion, Boehringer Ingelheim, GlaxoSmithKline and Rafa; speaker fees from AstraZeneca, Boehringer Ingelheim, GlaxoSmithKline, Insmed, Teva, Novartis, Kamada and Sanofi; and advisory fees (including steering committee membership) from GlaxoSmithKline, Boehringer Ingelheim, Kamada, Syncrony Medical, Zambon and Vertex Pharmaceuticals. M. Shteinberg also reports data and safety monitoring board participation for Bonus Therapeutics, Israel and is an ERS Task Force member on bronchiectasis guideline development. S.D. Sullivan has participated in advisory boards for Boehringer Ingelheim and has research grants from Pfizer, Bayer and GlaxoSmithKline. S.H. Chotirmall is on advisory boards for CSL Behring, Boehringer Ingelheim and Pneumagen Ltd, served on a data and safety monitoring board for Inovio Pharmaceuticals Inc., and has received personal fees from AstraZeneca and Chiesi Farmaceutici.

Support statement: This systematic literature review was funded by Boehringer Ingelheim International GmbH. The authors did not receive payment related to the development of the manuscript. Boehringer Ingelheim was given the opportunity to review the manuscript for medical and scientific accuracy as well as intellectual property considerations. Funding information for this article has been deposited with the Crossref Funder Registry .

  • Received March 8, 2024.
  • Accepted June 4, 2024.
  • Copyright ©The authors 2024

This version is distributed under the terms of the Creative Commons Attribution Licence 4.0.

European Respiratory Review: 33 (173)

  • Table of Contents
  • Index by author

Thank you for your interest in spreading the word on European Respiratory Society .

NOTE: We only request your email address so that the person you are recommending the page to knows that you wanted them to see it, and that it is not junk mail. We do not capture any email address.

Citation Manager Formats

  • EndNote (tagged)
  • EndNote 8 (xml)
  • RefWorks Tagged
  • Ref Manager

del.icio.us logo

  • CF and non-CF bronchiectasis
  • Tweet Widget
  • Facebook Like
  • Google Plus One

More in this TOC Section

  • Adherence-enhancing interventions for pharmacological and oxygen therapy in COPD patients
  • PM 2.5 and microbial pathogenesis in the respiratory tract

Related Articles

Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.

All articles published by MDPI are made immediately available worldwide under an open access license. No special permission is required to reuse all or part of the article published by MDPI, including figures and tables. For articles published under an open access Creative Common CC BY license, any part of the article may be reused without permission provided that the original article is clearly cited. For more information, please refer to https://www.mdpi.com/openaccess .

Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.

Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewers.

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Original Submission Date Received: .

  • Active Journals
  • Find a Journal
  • Proceedings Series
  • For Authors
  • For Reviewers
  • For Editors
  • For Librarians
  • For Publishers
  • For Societies
  • For Conference Organizers
  • Open Access Policy
  • Institutional Open Access Program
  • Special Issues Guidelines
  • Editorial Process
  • Research and Publication Ethics
  • Article Processing Charges
  • Testimonials
  • Preprints.org
  • SciProfiles
  • Encyclopedia

electronics-logo

Article Menu

systematic review and systematic literature review

  • Subscribe SciFeed
  • Recommended Articles
  • Google Scholar
  • on Google Scholar
  • Table of Contents

Find support for a specific problem in the support section of our website.

Please let us know what you think of our products and services.

Visit our dedicated information section to learn more about MDPI.

JSmol Viewer

Blockchain forensics: a systematic literature review of techniques, applications, challenges, and future directions.

systematic review and systematic literature review

Share and Cite

Atlam, H.F.; Ekuri, N.; Azad, M.A.; Lallie, H.S. Blockchain Forensics: A Systematic Literature Review of Techniques, Applications, Challenges, and Future Directions. Electronics 2024 , 13 , 3568. https://doi.org/10.3390/electronics13173568

Atlam HF, Ekuri N, Azad MA, Lallie HS. Blockchain Forensics: A Systematic Literature Review of Techniques, Applications, Challenges, and Future Directions. Electronics . 2024; 13(17):3568. https://doi.org/10.3390/electronics13173568

Atlam, Hany F., Ndifon Ekuri, Muhammad Ajmal Azad, and Harjinder Singh Lallie. 2024. "Blockchain Forensics: A Systematic Literature Review of Techniques, Applications, Challenges, and Future Directions" Electronics 13, no. 17: 3568. https://doi.org/10.3390/electronics13173568

Article Metrics

Article access statistics, further information, mdpi initiatives, follow mdpi.

MDPI

Subscribe to receive issue release notifications and newsletters from MDPI journals

  • Open access
  • Published: 31 August 2024

Impaired glucose metabolism and the risk of vascular events and mortality after ischemic stroke: A systematic review and meta-analysis

  • Nurcennet Kaynak (ORCID: 0000-0002-0637-8421)
  • Valentin Kennel (ORCID: 0009-0000-0354-4167)
  • Torsten Rackoll (ORCID: 0000-0003-2170-5803)
  • Daniel Schulze (ORCID: 0000-0001-9415-2555)
  • Matthias Endres (ORCID: 0000-0001-6520-3720)
  • Alexander H. Nave (ORCID: 0000-0002-0101-4557)

Cardiovascular Diabetology, volume 23, Article number: 323 (2024)


Diabetes mellitus (DM), prediabetes, and insulin resistance are highly prevalent in patients with ischemic stroke (IS). DM is associated with higher risk for poor outcomes after IS.

To investigate the risk of recurrent vascular events and mortality associated with impaired glucose metabolism, compared with normoglycemia, in patients with IS and transient ischemic attack (TIA).

A systematic literature search of PubMed, Embase, and the Cochrane Library was performed on 21 March 2024, supplemented by citation searching. Studies comprising patients with IS or TIA and exposures of impaired glucose metabolism were eligible. The Study Quality Assessment Tool was used for risk-of-bias assessment. Covariate-adjusted outcomes were pooled using random-effects meta-analysis.

Main outcomes

Recurrent stroke, cardiac events, cardiovascular and all-cause mortality, and a composite of vascular outcomes.

Of 10,974 identified studies, 159 were eligible; 67% had low risk of bias. DM was associated with an increased risk of composite events (pooled HR (pHR) including 445,808 patients: 1.58, 95% CI 1.34–1.85, I² = 88%), recurrent stroke (pHR including 1,161,527 patients: 1.42, 1.29–1.56, I² = 92%), cardiac events (pHR including 443,863 patients: 1.55, 1.50–1.61, I² = 0%), and all-cause mortality (pHR including 1,031,472 patients: 1.56, 1.34–1.82, I² = 99%). Prediabetes was associated with an increased risk of composite events (pHR including 8,262 patients: 1.50, 1.15–1.96, I² = 0%) and recurrent stroke (pHR including 10,429 patients: 1.50, 1.18–1.91, I² = 0%), but not with mortality (pHR including 9,378 patients: 1.82, 0.73–4.57, I² = 78%). Insulin resistance was associated with recurrent stroke (pHR including 21,363 patients: 1.56, 1.19–2.05, I² = 55%), but not with mortality (pHR including 21,363 patients: 1.31, 0.66–2.59, I² = 85%).

DM is associated with a 56% increased relative risk of death after IS and TIA. Risk estimates for recurrent events are similarly high for prediabetes and DM, indicating a high cardiovascular risk burden already in the precursor stages of DM. Heterogeneity was high across most outcomes.

Introduction

Ischemic stroke (IS) is associated with high mortality and high risk of recurrent vascular events worldwide [ 1 , 2 , 3 ]. Despite adequate secondary prevention, about 11% of patients suffer a recurrent stroke within the first year [ 4 ]. Diabetes mellitus (DM) is a highly prevalent cardiovascular risk factor and is present in about one-third of IS patients [ 5 , 6 ]. Stroke prevention guidelines recommend screening for unrecognized DM after IS [ 7 ]. Besides DM, other forms of impaired glucose metabolism (IGM), such as prediabetes and insulin resistance (IR) have been gaining importance over the last decades in terms of their association with increased cardiovascular risk [ 8 ]. Prediabetes, comprising impaired fasting glucose and impaired glucose tolerance, represents a hyperglycemic condition of patients not yet within the diabetic range [ 9 ]. In comparison, IR constitutes a pathophysiological mechanism, which usually precedes and coexists with both DM and prediabetes [ 10 ]. Observational studies report that 70% of the patients with IS have either DM (46%) or prediabetes (24%), and 50% of those who have no DM at baseline have IR [ 11 , 12 ].

Considering that the majority of patients with stroke have some form of IGM, it represents an important aspect of secondary stroke prevention. Numerous studies, including systematic reviews, have shown an association of DM and prediabetes with stroke recurrence [13, 14, 15]. However, only a few studies have examined composite vascular events as an outcome. Furthermore, the mortality risk associated with DM after stroke has not been addressed in previous meta-analyses. A comprehensive systematic approach is needed to identify and compare the risks of composite vascular events and mortality after IS and TIA across the different forms of IGM.

Stroke prevention guidelines recommend the use of new-generation antidiabetics based on the finding that these agents demonstrated cardiovascular protective effects in patients with previous cardiovascular disease, including stroke [7]. However, only a minority of those patients had a history of stroke, and subgroup analyses of patients with a previous IS or TIA remained mostly inconclusive [16, 17]. In contrast, the IRIS trial included only patients with IR and a recent IS or TIA [18]. Despite the lower risk of cardiovascular events associated with pioglitazone, a high risk of adverse events restricted the drug's clinical use. Currently, it remains unclear which pharmacological treatments are beneficial for secondary stroke prevention in patients with acute or subacute IS or TIA and different forms of IGM.

Identifying increased cardiovascular risk not only in DM but also other forms of IGM would capture a greater population at risk and eventually prompt implementation of secondary preventive measures. We conducted a systematic literature review and meta-analysis to extend our knowledge on the burden of IGM in patients with IS and TIA in the context of cardiovascular events and mortality.

Methods

This manuscript adheres to the PRISMA guidelines [19]. The study protocol was pre-registered with the Open Science Framework in 2021 [20].

Information sources

We conducted a systematic literature search of MEDLINE via PubMed, Embase via Ovid, and the Cochrane Library, last updated on 21 March 2024. Search terms included "diabetes", "prediabetes", "insulin resistance", "stroke", and "transient ischemic attack", restricted to the English language. The full search strategy is provided in the supplementary methods. Reference lists of previous systematic reviews and of the studies included in our review were searched manually.
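The exact search strategy is given in the supplementary methods. Purely as an illustration of how such Boolean strategies combine term groups (OR within a concept, AND between concepts), the following sketch assembles a query from the terms named above; the grouping is an assumption, not the study's actual string:

```python
# Illustrative sketch only: the study's actual search strategy is given in
# the supplementary methods; these term groups are assumptions based on the
# terms listed in the text.
exposure_terms = ["diabetes", "prediabetes", "insulin resistance"]
event_terms = ["stroke", "transient ischemic attack"]

def build_query(concept_groups):
    """OR the synonyms within each concept, then AND the concepts together."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in concept_groups
    )

query = build_query([exposure_terms, event_terms])
```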

Study selection and data extraction

Screening was performed by two reviewers independently (NK and VK) and consensus was reached with two additional reviewers (TR and AHN) in case of disagreement. Eligible studies were observational studies that included patients within 3 months after an IS or TIA and reported at least one of the following outcomes: composite vascular events, recurrent stroke, cardiovascular and all-cause mortality, cardiac events including but not limited to myocardial infarction, all regardless of follow-up duration (see supplementary Table 1 for the eligibility criteria). Composite events comprised at least stroke, cardiac events, and cardiovascular death. Studies were required to report hazard ratios (HR), odds ratios (OR), or risk ratios using a multivariable model. Exposures of interest were DM, prediabetes and IR, which were included independently of the definition used in the respective study. Additionally, we screened for studies that compared the use of an antidiabetic therapy to placebo or another antidiabetic therapy within the same population and outcomes mentioned above, regardless of study design.

Data extraction and assessment of risk of bias were performed by one reviewer (NK) and the internal validity was checked with a second reviewer (VK) for a random sample of 10% of studies. Interrater reliability was calculated. Authors were contacted via email if substantial outcome data were lacking, unclear or discrepant. Risk of bias assessment was made using the Study Quality Assessment Tool of National Heart, Lung, and Blood Institute [ 21 ]. A detailed methodological description can be found in the methods section of the supplementary material.
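Inter-rater reliability for the dual-reviewed sample can be quantified as simple percent agreement (the 90% figure later reported in the Results reads as percent agreement) or as chance-corrected agreement. A minimal sketch on hypothetical screening decisions, since the actual study data are not reproduced here:

```python
# Hypothetical include/exclude decisions (1 = include, 0 = exclude) from two
# screeners; the actual study data are not reproduced here.
def percent_agreement(a, b):
    """Share of records on which both reviewers made the same call."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two binary raters."""
    n = len(a)
    po = percent_agreement(a, b)            # observed agreement
    pa, pb = sum(a) / n, sum(b) / n         # each rater's include rate
    pe = pa * pb + (1 - pa) * (1 - pb)      # agreement expected by chance
    return (po - pe) / (1 - pe)

reviewer1 = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
reviewer2 = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
```

Cohen's kappa is stricter than raw agreement because it discounts the matches two raters would produce by chance alone.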

Data synthesis

We performed random-effects meta-analyses with the restricted maximum likelihood (REML) estimator after grouping studies by outcome measure (HR or OR) for each study outcome. ORs were pooled using meta-regression with follow-up duration as a moderator, or using random-effects meta-analysis if the moderator showed no significant effect (significance defined as p < 0.05). Studies used different sets of covariates, including sociodemographic and clinical characteristics; we included the effect size from the model with the most adjustment factors available. We calculated 95% confidence intervals (CI) and prediction intervals. Prediction intervals describe the expected range of future study results, while confidence intervals relate to the precision of the aggregated effect. Multi-level meta-analysis was performed if multiple subgroups from a single study were included in the analysis. Furthermore, we performed meta-analyses of absolute risks derived from event numbers for each outcome and exposure group whenever such data were reported. Heterogeneity was assessed using Cochran's Q and I² and was assumed present when p < 0.05 or I² > 50% [22]. Results of the meta-analyses were visualized using forest plots. Subgroup analyses were conducted based on history of previous stroke (first-ever event, yes/no) and type of ischemic event (IS/TIA/both). Subgroup analyses based on sex were not conducted because the included studies analyzed both sexes together and individual patient data were not available. As a sensitivity analysis, we conducted meta-analyses using unadjusted odds ratios. Publication bias was assessed with funnel plots and Egger's regression. Statistical calculations were performed using R version 4.0.2 with the package metafor [23]. Studies investigating the association between antidiabetic therapies and recurrent cardiovascular events after IS or TIA were summarized narratively.
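To make the pooling step concrete, the sketch below pools hypothetical study-level log hazard ratios with the closed-form DerSimonian-Laird estimator. Note the assumptions: the review itself used the REML estimator via the R package metafor, and the normal-approximation confidence interval here also differs from metafor's defaults; DL is shown only because it is easy to write out:

```python
import math

def dl_random_effects(log_hrs, ses):
    """Pool study-level log hazard ratios with the DerSimonian-Laird
    random-effects estimator; returns (pooled HR, CI low, CI high, I^2, tau^2).
    Sketch only: the review used the REML estimator (R package metafor)."""
    k = len(log_hrs)
    w = [1 / s ** 2 for s in ses]                             # fixed-effect weights
    fe = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_hrs))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0        # heterogeneity share
    w_re = [1 / (s ** 2 + tau2) for s in ses]                 # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
    se_mu = math.sqrt(1 / sum(w_re))
    lo, hi = mu - 1.96 * se_mu, mu + 1.96 * se_mu             # normal-approx 95% CI
    return math.exp(mu), math.exp(lo), math.exp(hi), i2, tau2

# Hypothetical study-level log-HRs and standard errors
phr, lo, hi, i2, tau2 = dl_random_effects([0.40, 0.70, 0.10], [0.10, 0.12, 0.15])
```

With deliberately dispersed inputs like these, tau² comes out positive and I² close to 80%, which is the kind of heterogeneity flagged throughout the Results.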

Results

Systematic literature search

The systematic literature search yielded 10,974 records. After screening titles and abstracts, 8,219 records were excluded, and 1,717 records were further screened based on full texts (Fig. 1). Finally, 159 studies met the eligibility criteria (supplementary references). Of those, 26 reported data for the composite outcome, 71 for recurrent stroke, 10 for cardiac events, 104 for all-cause mortality, and five for cardiovascular mortality (Table 1). During data extraction, an inter-rater reliability of 90% was reached. Authors of twenty-six studies were contacted for missing information, and seven of them provided the requested data. Most studies were observational (n = 146); the others were post-hoc analyses of randomized trials (n = 13). Follow-up duration ranged from end of hospital stay to longer than 20 years. The diagnostic criteria used for DM varied widely: medical records or medication history only (n = 61), laboratory biomarkers only (n = 14), or both (n = 50). Twenty-one studies did not report the definition used. Prediabetes was defined according to either American Diabetes Association [24] or World Health Organization criteria [25], although one study defined prediabetes as a non-fasting glucose level of 140–198 mg/dL. IR was quantified using HOMA-IR, the Triglyceride-Glucose Index, the Matsuda Insulin Sensitivity Index, the Glucose/Insulin Ratio, the QUICKI Index, and the estimated glucose disposal rate. Overall, 67% (n = 107) of the included studies were rated as having good quality of evidence, 27% (n = 43) as fair, and 6% (n = 9) as poor (supplementary Fig. 1). Study characteristics are presented in supplementary Table 2.

Figure 1. Flowchart of the screening and selection process of the systematic review.

Association of IGM with cardiovascular events

Composite vascular events

Twenty-four studies were eligible for the exposure DM, three studies for prediabetes, and two studies for IR. Five studies reporting data from the same cohort were excluded, resulting in 19 eligible studies for the exposure DM (16 reported HR, three reported OR; see supplementary Table 3). Except for one study reporting a 3-month follow-up period, all studies reported at least 1 year of follow-up. One study that assessed incident DM during follow-up (as opposed to pre-existing DM) as the exposure was not included in the analysis [26].

Presence of DM was statistically significantly associated with an increased risk of composite vascular events with a pooled HR (pHR) of 1.58 (95% confidence interval (CI) 1.34 to 1.85, I 2  = 88%) including 445,808 patients (Fig.  2 A) and a pooled OR (pOR) of 1.87 (95% CI 0.76 to 4.60, I 2  = 64%) including 1,609 patients. No publication bias was observed (supplementary Fig. 2). The meta-analysis of absolute risks reported in seven studies revealed that during a mean follow-up of three years, 43% (95% CI 23% to 64%) of stroke patients with DM reached a composite endpoint of a recurrent cardiovascular event or death. This rate was 17% (95% CI 3% to 31%) in patients without DM (supplementary Table 4).
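For intuition, the pooled absolute risks above (43% with DM vs. 17% without) translate into an absolute risk difference and a number needed to harm. These pooled absolute risks are unadjusted, so the arithmetic below is illustrative only, not a causal estimate:

```python
# Illustrative arithmetic on the pooled (unadjusted) absolute risks reported
# above: 43% of patients with DM vs. 17% without reached the composite
# endpoint. Not covariate-adjusted, so intuition only.
def risk_difference(p_exposed, p_unexposed):
    """Absolute risk difference between exposure groups."""
    return p_exposed - p_unexposed

def number_needed_to_harm(p_exposed, p_unexposed):
    """Patients followed per one additional event associated with exposure."""
    return 1.0 / (p_exposed - p_unexposed)

rd = risk_difference(0.43, 0.17)
nnh = number_needed_to_harm(0.43, 0.17)
```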

Figure 2. A: Forest plot for the meta-analysis of studies that reported the association of diabetes with the composite outcome. B: Forest plot for the meta-analysis of studies that reported the association of prediabetes with the composite outcome.

Meta-analysis of two studies showed an increased risk of composite events associated with prediabetes with a pHR of 1.50 (95% CI 1.15 to 1.96, I 2  = 0%; Fig.  2 B) in 8,262 patients. An absolute risk of 31% (95% CI 12% to 50%) and 7% (95% CI 5% to 10%) was observed in the group of patients with and without prediabetes, respectively. IR was reported in two studies, which were derived from the same cohort. One of the studies demonstrated no association between high IR and composite vascular events [ 27 ]. In the other study, which only encompassed patients without DM, increased IR based on HOMA-IR was statistically significantly associated with an increased risk for vascular events [ 28 ].

Recurrent stroke

Sixty-three studies reported recurrent stroke outcome data in patients with DM (see supplementary Table 5). Follow-up duration ranged from discharge from hospital to a mean follow-up time of 12.3 years. Studies encompassing the same population were excluded from the analysis, leaving 40 studies reporting HR and 12 studies reporting OR eligible for analysis. The pHR was 1.42 (95% CI 1.29 to 1.56, I² = 92%; Fig. 3A), involving 1,161,527 patients. There was evidence of possible publication bias (supplementary Fig. 3). Studies that reported OR, involving 47,629 patients, showed a similar increase in risk (pOR 1.33, 95% CI 1.13 to 1.56, I² = 48%; supplementary Fig. 4). Follow-up duration was not a statistically significant moderator for the outcome (p = 0.40). Neither the type of baseline event (IS or TIA) nor previous stroke was a statistically significant moderator in subgroup analyses (p = 0.08 and p = 0.90, respectively; see supplementary Fig. 5). Baujat plots revealed that the studies contributing most to heterogeneity were post-hoc analyses of randomized trials. Meta-analysis of absolute risks extracted from 23 studies yielded 13% (95% CI 10% to 16%) for patients with diabetes vs. 9% (95% CI 6% to 11%) without, within a follow-up period of more than a year.

Figure 3. A: Forest plot for the meta-analysis of studies that reported the association of diabetes with recurrent stroke. B: Forest plot for the meta-analysis of studies that reported the association of prediabetes with recurrent stroke. C: Forest plot for the meta-analysis of studies that reported the association of insulin resistance with recurrent stroke.

Patients with prediabetes had an increased risk of recurrent stroke compared to patients with normoglycemia (pHR in 10,429 patients 1.50, 95% CI 1.18 to 1.91, I² = 0%; see Fig. 3B). This was also the case in terms of absolute risk: 10% (95% CI 8% to 12%) vs. 7% (95% CI 7% to 8%), respectively. Of five studies eligible for IR, only three could be included in the meta-analysis, because multiple studies were conducted in the same cohort. The pHR for recurrent stroke associated with IR in 21,363 patients was 1.56 (95% CI 1.19 to 2.05, I² = 55%; Fig. 3C). The absolute risk associated with IR during 10.4 months of follow-up was 10% (95% CI 5% to 15%) vs. 7% (95% CI 6% to 7%) in patients without increased IR.

Cardiac events

All studies eligible for cardiac events comprised DM as the exposure, see supplementary Table 6. The shortest follow-up time was three months, all other studies followed patients for at least one year. One study that investigated new DM during follow-up was not included in the meta-analysis [ 26 ]. Presence of DM was associated with an increased risk of cardiac events with a pHR of 1.55 (95% CI 1.50 to 1.61, I 2  = 0%) involving 443,863 patients. The pOR of two studies with 839,029 patients was 1.47 (95% CI 0.48 to 4.44), I 2  = 89% (supplementary Fig. 6). Meta-analysis of three studies reporting data revealed an absolute risk of 5% (95% CI − 1% to 11%) in patients with DM and 3% (95% CI 0% to 6%) without DM. One study that investigated prediabetes reported a HR of 2.0 (95% CI 1.30 to 3.20) for cardiac events. No study reported IR as an exposure.

Association between IGM and mortality

Cardiovascular mortality

Five studies reported data on cardiovascular mortality in patients with DM (supplementary Table 7). Meta-analysis involving 127,445 patients showed a statistically significant association between DM and cardiovascular mortality (pHR 1.65, 95% CI 1.41 to 1.93, I² = 50%; see supplementary Fig. 7). Pooling the available absolute-risk data from three studies resulted in a pooled risk of 18% (95% CI −10% to 47%) in patients with DM vs. 16% (95% CI −9% to 41%) in patients without DM, during 1 year of follow-up.

All-cause mortality

Ninety-four studies investigated associations between all-cause mortality and DM (see supplementary Table 8). Studies that included patients from the same population were excluded from the analysis (n = 10). Presence of DM was associated with an increased risk of all-cause mortality (pHR 1.56, 95% CI 1.34 to 1.82, I² = 99%; see Fig. 4A), summarizing 42 studies including 1,031,472 patients. Subgroup analyses based on follow-up duration resulted in a pHR of 1.10 (95% CI 0.72 to 1.68) during hospitalization (n = 3 studies), 1.35 (95% CI 1.18 to 1.56) up to one year (n = 12 studies), and 1.74 (95% CI 1.40 to 2.17) beyond one year (n = 27 studies). However, follow-up duration was not a statistically significant moderator (p = 0.15; see supplementary Fig. 8). The Galbraith plot revealed the most influential studies to be the subgroups of the study by Zamir et al. (supplementary Fig. 9). The meta-analysis of forty-two studies involving 3,290,353 patients reporting OR showed a risk estimate of 1.30 (95% CI 1.21 to 1.41; see supplementary Fig. 10). Subgroup analyses based on first-ever vs. recurrent event at baseline and on the type of ischemic event revealed no statistically significant differences between groups. Funnel plots suggested the existence of publication bias (supplementary Fig. 11). During a mean follow-up of 1.8 months, the absolute risk of all-cause mortality was 23% (95% CI 14% to 31%) for patients with DM vs. 17% (95% CI 11% to 23%) without DM.
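Egger's regression, named in the Methods as the formal test behind the funnel-plot assessment, regresses the standardized effect on precision; an intercept far from zero suggests small-study (funnel-plot) asymmetry. A minimal ordinary-least-squares sketch on hypothetical data, not the study's actual effect estimates:

```python
# Egger's regression sketch on hypothetical data: regress the standardized
# effect (effect / SE) on precision (1 / SE); an intercept far from zero
# suggests funnel-plot asymmetry (small-study effects).
def egger_intercept(effects, ses):
    z = [y / s for y, s in zip(effects, ses)]   # standardized effects
    x = [1 / s for s in ses]                    # precisions
    n = len(z)
    mx, mz = sum(x) / n, sum(z) / n
    slope = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
             / sum((xi - mx) ** 2 for xi in x))  # ordinary least squares
    return mz - slope * mx                       # the intercept

symmetric = egger_intercept([2.0, 2.0, 2.0], [0.1, 0.05, 0.025])   # ~0
asymmetric = egger_intercept([2.1, 2.2, 2.4], [0.1, 0.05, 0.025])  # nonzero
```

In practice the intercept's p-value (from its standard error) is what gets reported; this sketch computes only the point estimate.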

Figure 4. A: Forest plot for the meta-analysis of studies that reported the association of diabetes with all-cause mortality. B: Forest plot for the meta-analysis of studies that reported the association of prediabetes with all-cause mortality. C: Forest plot for the meta-analysis of studies that reported the association of insulin resistance with all-cause mortality.

Six studies were eligible for prediabetes and all-cause mortality (three reporting HR, three reporting OR). Prediabetes was not statistically significantly associated with an increased risk of mortality after IS (pHR 1.82, 95% CI 0.73 to 4.57, I² = 78% in 9,378 patients; pOR 1.37, 95% CI 0.54 to 3.43, I² = 71% in 1,969 patients; see Fig. 4B and supplementary Fig. 12). Meta-analysis of absolute risks during a mean follow-up of seven months yielded 8% (95% CI 2% to 15%) for patients with prediabetes vs. 9% (95% CI 0% to 18%) with normoglycemia.

Nine studies reported IR as an exposure. The meta-analyses could not demonstrate an association between increased IR and mortality (pHR 1.31, 95% CI 0.66 to 2.59, I² = 85%, including 21,363 patients across three studies; pOR 1.05, 95% CI 0.76 to 1.45, I² = 16%, including 6,434 patients across two studies). Absolute risks were 6% (95% CI −1% to 12%) for patients with increased IR and 4% (95% CI 2% to 6%) without.

Sensitivity analyses with crude odds ratios

Sensitivity analyses using unadjusted odds ratios, to accommodate the variation in adjustment factors used across studies, revealed similar risk estimates, though often slightly higher than the respective adjusted pooled outcomes (supplementary Fig. 13 and 14).

Antidiabetic therapy and recurrent vascular events

Nine observational studies investigated the association between antidiabetic therapies and cardiovascular events after an IS or TIA in the preceding three months (see Table 2). The drug classes investigated were metformin, sulfonylureas, thiazolidinediones, and incretin-mimetics. We did not identify any studies of SGLT-2 inhibitors or alpha-glucosidase inhibitors. Due to the differences in the exposure and comparator groups, we did not perform a meta-analysis. Studies showed a risk reduction for recurrent stroke, mortality, and composite vascular events associated with the use of pioglitazone and lobeglitazone, as well as a lower risk of mortality associated with metformin use [29, 30, 31, 32]. There were no clear benefits in terms of a decreased risk of cardiovascular events associated with sulfonylureas or incretin-mimetics [33, 34, 35, 36, 37].

Discussion

In this systematic review and meta-analysis, we provide a comprehensive and up-to-date summary of previous studies investigating the association between IGM and residual cardiovascular risk following IS and TIA. To our knowledge, this is the first meta-analysis to investigate the risk of composite vascular events associated with IGM, as well as the risk of mortality associated with DM, in this population. The results indicate that (1) patients with DM have an approximately 1.6-fold (60%) increased risk of both death and recurrent vascular events after IS and TIA, (2) the risk of recurrent vascular events after stroke is already increased in the prediabetic stage and appears just as high as in patients with DM, and (3) the presence of IR is associated with recurrent stroke risk. In contrast, this meta-analysis was unable to demonstrate an increased mortality risk after stroke associated with prediabetes or IR. Overall, there were considerably fewer eligible studies on prediabetes and IR than on DM (Table 1).

DM is a well-known risk factor for cardiovascular disease. The results of our study confirm a robust association between DM and the risk of composite recurrent vascular events after IS and TIA. We could also confirm the risk of recurrent stroke associated with DM previously reported in a meta-analysis by Zhang et al. [14]. The risk of mortality in patients with DM was observed to be 56% higher than in patients without DM. Although mortality risk estimates for patients with DM were greater in studies with longer mean follow-up durations, we could not observe a statistically significant interaction between mortality risk and follow-up duration. This could be because there were only a few studies with short-term follow-up among those reporting HR (supplementary Fig. 8) and only a few with long-term follow-up among those reporting OR (supplementary Fig. 10). Still, inferring from this finding, DM likely remains a relevant risk factor over time and an important target for secondary prevention strategies, given its high prevalence in this population [6].

Our analyses demonstrated a positive relationship between prediabetes and recurrent vascular events, as well as between IR and stroke recurrence. However, no association was detected between either condition and mortality. This difference could have several reasons. First, patients with prediabetes or IR are less likely to have been exposed to the deleterious effects of a dysregulated glucose metabolism for a long time, compared to patients with DM. Second, the shorter follow-up duration of studies investigating prediabetes and IR generally limits the probability of detecting a difference in mortality risk. The association between prediabetes and recurrent stroke is in line with a previous meta-analysis conducted by Pan et al. in 2019 [15]. Despite substantial methodological differences (we avoided pooling ORs and HRs together, excluded studies with hemorrhagic stroke, and identified two additional studies), we, like Pan et al., could not demonstrate a relationship between prediabetes and mortality.

Contrary to DM, prediabetes has only rather recently been regarded as a cardiovascular risk factor [39]. The meta-analysis conducted by Cai et al. showed an increased risk of all-cause mortality and vascular events associated with prediabetes in population-based cohorts as well as in patients with previous atherosclerotic disease [40]. Further, a recent analysis of the UK Biobank cohort, including more than 400,000 individuals, confirmed the excess risk of any cardiovascular disease in patients with IGM compared to normoglycemia [41]. The risk was higher for DM than for prediabetes. Still, after accounting for obesity and the use of antihypertensives and statins, both risks were attenuated, lending support to the modifiability of the excess risk. Together with these previous findings, our results strongly support considering prediabetes as a continuous entity with DM on the spectrum of IGM, with a relevant increase in cardiovascular and mortality risk.

There was a statistically significant association between increased IR and stroke recurrence. However, it should be noted that there were only three studies eligible for the analysis, and both the parameters used to define increased IR and the timing of measurement after stroke (7 days and 14 days) were heterogeneous between studies. IR can be increased during the acute phase of stroke due to the stress reaction and may change during this time [42]. The increased relative risk of recurrent stroke observed in patients with IR compared to patients without IR was higher than the relative risk in patients with DM compared to those without. This might be explained by differences between the patient groups: patients with DM are more likely to receive antidiabetic treatment and have a higher risk of dying before suffering a recurrent stroke. Another difference could lie in the comparator groups, namely that patients without IR could be generally healthier than patients without DM.

Despite the association between increased IR and stroke recurrence, we could not identify many studies with other cardiovascular outcomes. Furthermore, we encountered different parameters and criteria for defining IR across studies. Thus, the prognostic value of increased IR in terms of composite cardiovascular risk, as well as the best biomarker to predict that risk, remains speculative in patients with IS or TIA. Further research is needed to investigate this question.

We observed a significant research gap in the number of large studies with congruent definitions of prediabetes and IR. Uncertainty remains about the different diagnostic criteria for both prediabetes and IR [24, 25, 43, 44], contributing to the lack of adequate implementation of preventive strategies [45]. As the prevalence of prediabetes is expected to rise, the whole spectrum of IGM, rather than DM alone, is likely to gain significance in terms of primary and secondary stroke prevention [46]. Consistent diagnostic criteria would facilitate reliable data synthesis and the development of prevention strategies.

Until the advent of GLP-1 and SGLT-2 therapies, no antidiabetic therapy had improved cardiovascular risk or death despite improvements in glucose control [47]. Both drug classes revolutionized the field after randomized controlled trials showed cardiovascular risk reduction in patients with DM [48, 49, 50, 51]. However, it remains unclear whether these drugs are equally effective at reducing cardiovascular risk in patients with IS [33, 34, 52]. As our systematic review indicates, to date only a few studies have investigated the effectiveness of antidiabetic therapy in preventing recurrent vascular events after an acute or subacute IS. Even though the promising results for pioglitazone in patients with IR from the IRIS trial were limited by side effects [18], recent cohort studies have shown beneficial effects associated with thiazolidinediones [29, 30]. Clinical trials investigating secondary stroke prevention in patients with prediabetes have yet to be undertaken.

Strengths and limitations

The most important strength of our study lies in its comprehensiveness, encompassing over 10,000 records and including more than seven million patients across all exposures and outcomes. This enabled us to investigate all three entities of IGM together. Another strength is the methodology: we included studies with both outcome measures, HR and OR, which allowed us to identify more studies; we used multi-level meta-analysis to account for multiple subgroups of the same cohorts; and we used meta-regression to account for moderators.

There are limitations to this study. First, as in every meta-analysis, the quality of the synthesized evidence depends on the quality of evidence of the individual studies. We assessed the risk of bias of the included studies and could not identify an influence of studies with high risk of bias on the effect estimates. Second, we encountered high heterogeneity between studies. As this systematic review included observational studies, high variability across study populations and diagnostic criteria was expected. Further, the fact that studies used different adjustment factors in their multivariable analyses most likely contributed substantially to the high heterogeneity. To mitigate the differences in adjustment factors, we conducted sensitivity analyses; both crude odds ratios and absolute risks indicated risk estimates similar to the primary analyses, strengthening our results. Another factor contributing to heterogeneity could be methodological differences between studies, such as how competing events were treated; this could not be taken into consideration when determining eligibility, since the information was mostly unavailable. Finally, the severity and duration of DM could not be taken into consideration.

Conclusions

Different types of IGM are associated with increased cardiovascular risk and mortality after IS and TIA. The entities of IGM should be considered a continuous spectrum of increased cardiovascular risk that represents an important target for early cardiovascular prevention programs.

Availability of data and materials

The data extracted from the studies included in this systematic review are available in the supplementary material.

Abbreviations

  • IS: Ischemic stroke
  • DM: Diabetes mellitus
  • IGM: Impaired glucose metabolism
  • IR: Insulin resistance
  • HR: Hazard ratio
  • OR: Odds ratio
  • CI: Confidence interval


Funding

This study was partially funded by the Corona Foundation. The study protocol is available at https://osf.io/jvyhw. The funder had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Center for Stroke Research Berlin (CSB), Charité– Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany

Nurcennet Kaynak, Valentin Kennel, Matthias Endres & Alexander H. Nave

Department of Neurology with Experimental Neurology, Charité– Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Charitéplatz 1, 10117, Berlin, Germany

Nurcennet Kaynak, Valentin Kennel, Torsten Rackoll, Matthias Endres & Alexander H. Nave

Berlin Institute of Health at Charité, Charité– Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany

Nurcennet Kaynak & Alexander H. Nave

German Center for Neurodegenerative Diseases (DZNE), partner site Berlin, Berlin, Germany

Nurcennet Kaynak & Matthias Endres

German Center for Cardiovascular Research (DZHK), partner site Berlin, Berlin, Germany

Nurcennet Kaynak, Matthias Endres & Alexander H. Nave

Berlin Institute of Health (BIH) QUEST Center for Responsible Research, Charité– Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany

Torsten Rackoll

Department of Biometry and Clinical Epidemiology, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin und Humboldt-Universität zu Berlin, Berlin, Germany

Daniel Schulze

German Center for Mental Health (DZPG), partner site Berlin, Berlin, Germany

Matthias Endres


Contributions

NK had full access to study data and is the guarantor of the study, taking full responsibility for the conduct of the study. NK, AHN, TR and ME conceived the study design and contributed to study protocol. NK, VK, and TR acquired data and performed the analysis. DS contributed to statistical methods and analyses. NK drafted the manuscript and all authors contributed to interpretation of the data and critical appraisal of the final work. AHN supervised the study. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Corresponding author

Correspondence to Alexander H. Nave.

Ethics declarations

Ethics approval

Ethics approval was not required.

Consent for publication

Not applicable.

Competing interests

NK, VK, and DS report no conflicts of interest. ME reports grants from Bayer and fees paid to the Charité by Amgen, AstraZeneca, Bayer Healthcare, Boehringer Ingelheim, BMS, Daiichi Sankyo, Sanofi, and Pfizer, all outside the submitted work. AHN receives funding from the Corona Foundation and the German Center for Cardiovascular Research (DZHK); no conflict of interest. TR receives funding from the European Commission; no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1 (DOCX 3212 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Kaynak, N., Kennel, V., Rackoll, T. et al. Impaired glucose metabolism and the risk of vascular events and mortality after ischemic stroke: A systematic review and meta-analysis. Cardiovasc Diabetol 23 , 323 (2024). https://doi.org/10.1186/s12933-024-02413-w


Received : 30 June 2024

Accepted : 19 August 2024

Published : 31 August 2024

DOI : https://doi.org/10.1186/s12933-024-02413-w


  • Prediabetes
  • Vascular events

Cardiovascular Diabetology

ISSN: 1475-2840



An overview of methodological approaches in systematic reviews

Prabhakar Veginadu

1 Department of Rural Clinical Sciences, La Trobe Rural Health School, La Trobe University, Bendigo Victoria, Australia

Hanny Calache

2 Lincoln International Institute for Rural Health, University of Lincoln, Brayford Pool, Lincoln UK

Akshaya Pandian

3 Department of Orthodontics, Saveetha Dental College, Chennai Tamil Nadu, India

Mohd Masood

Associated data

APPENDIX B: List of excluded studies with detailed reasons for exclusion

APPENDIX C: Quality assessment of included reviews using AMSTAR 2

The aim of this overview is to identify and collate evidence from existing published systematic review (SR) articles evaluating various methodological approaches used at each stage of an SR.

The search was conducted in five electronic databases from inception to November 2020 and updated in February 2022: MEDLINE, Embase, Web of Science Core Collection, Cochrane Database of Systematic Reviews, and APA PsycINFO. Title and abstract screening was performed in two stages by one reviewer, supported by a second reviewer. Full‐text screening, data extraction, and quality appraisal were performed by two reviewers independently. The quality of the included SRs was assessed using the AMSTAR 2 checklist.

The search retrieved 41,556 unique citations, of which nine SRs were deemed eligible for inclusion in the final synthesis. Included SRs evaluated 24 unique methodological approaches used for defining the review scope and eligibility, literature search, screening, data extraction, and quality appraisal in the SR process. Limited evidence supports the following: (a) searching multiple resources (electronic databases, handsearching, and reference lists) to identify relevant literature; (b) excluding non‐English, gray, and unpublished literature; and (c) the use of text‐mining approaches during title and abstract screening.

The overview identified limited SR‐level evidence on the various methodological approaches currently employed during five of the seven fundamental steps in the SR process, as well as some methodological modifications currently used in expedited SRs. Overall, the findings of this overview highlight the dearth of published SRs focused on SR methodologies, which warrants future work in this area.

1. INTRODUCTION

Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the “gold standard” of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search, appraise, and synthesize the available evidence. 3 Several guidelines, developed by various organizations, are available for the conduct of an SR; 4 , 5 , 6 , 7 among these, Cochrane is considered a pioneer in developing rigorous and highly structured methodology for the conduct of SRs. 8 The guidelines developed by these organizations outline seven fundamental steps required in SR process: defining the scope of the review and eligibility criteria, literature searching and retrieval, selecting eligible studies, extracting relevant data, assessing risk of bias (RoB) in included studies, synthesizing results, and assessing certainty of evidence (CoE) and presenting findings. 4 , 5 , 6 , 7

The methodological rigor involved in an SR can require a significant amount of time and resources, which may not always be available. 9 As a result, there has been a proliferation of modifications made to the traditional SR process, such as refining, shortening, bypassing, or omitting one or more steps, 10 , 11 for example, limits on the number and type of databases searched, limits on publication date, language, and types of studies included, and limiting screening and selection of studies to one reviewer, as opposed to two or more reviewers. 10 , 11 These methodological modifications are made to accommodate the needs and resource constraints of the reviewers and stakeholders (e.g., organizations, policymakers, health care professionals, and other knowledge users). While such modifications are considered time and resource efficient, they may introduce bias into the review process, reducing the usefulness of the findings. 5

Substantial research has been conducted examining various approaches used in the standardized SR methodology and their impact on the validity of SR results. There are a number of published reviews examining the approaches or modifications corresponding to a single 12 , 13 or multiple steps 14 involved in an SR. However, there is yet to be a comprehensive summary of the SR‐level evidence for all seven fundamental steps in an SR. Such a holistic evidence synthesis would provide an empirical basis to confirm the validity of currently accepted practices in the conduct of SRs. Furthermore, a balance sometimes needs to be struck between resource availability and the need to synthesize the evidence in the best way possible, given the constraints. This evidence base will also inform the choice of modifications to be made to the SR methods, as well as the potential impact of these modifications on the SR results. An overview is considered the approach of choice for summarizing existing evidence on a broad topic, directing the reader to evidence, or highlighting gaps in evidence, where the evidence is derived exclusively from SRs. 15 Therefore, for this review, an overview approach was used to (a) identify and collate evidence from existing published SR articles evaluating various methodological approaches employed in each of the seven fundamental steps of an SR and (b) highlight both the gaps in the current research and the potential areas for future research on the methods employed in SRs.

2. METHODS

An a priori protocol was developed for this overview but was not registered with the International Prospective Register of Systematic Reviews (PROSPERO), as the review was primarily methodological in nature and did not meet the PROSPERO eligibility criteria for registration. The protocol is available from the corresponding author upon reasonable request. This overview was conducted based on the guidelines for the conduct of overviews as outlined in The Cochrane Handbook. 15 Reporting followed the Preferred Reporting Items for Systematic reviews and Meta‐analyses (PRISMA) statement. 3

2.1. Eligibility criteria

Only published SRs, with or without associated MA, were included in this overview. We adopted the defining characteristics of SRs from The Cochrane Handbook. 5 According to The Cochrane Handbook, a review was considered systematic if it satisfied the following criteria: (a) clearly states the objectives and eligibility criteria for study inclusion; (b) provides reproducible methodology; (c) includes a systematic search to identify all eligible studies; (d) reports assessment of the validity of findings of included studies (e.g., RoB assessment of the included studies); and (e) systematically presents all the characteristics or findings of the included studies. 5 Reviews that did not meet all of the above criteria were not considered an SR for this study and were excluded. MA‐only articles were included if the MA was stated to be based on an SR.

SRs and/or MA of primary studies evaluating methodological approaches used in defining review scope and study eligibility, literature search, study selection, data extraction, RoB assessment, data synthesis, and CoE assessment and reporting were included. The methodological approaches examined in these SRs and/or MA could also relate to the substeps or elements of these steps; for example, applying limits on date or type of publication is an element of the literature search. Included SRs examined or compared various aspects of a method or methods, and the associated factors, including but not limited to: precision or effectiveness; accuracy or reliability; impact on the SR and/or MA results; reproducibility of SR steps or bias introduced; and time and/or resource efficiency. SRs assessing the methodological quality of SRs (e.g., adherence to reporting guidelines), evaluating techniques for building search strategies or the use of specific database filters (e.g., use of Boolean operators or search filters for randomized controlled trials), examining various tools used for RoB or CoE assessment (e.g., ROBINS vs. Cochrane RoB tool), or evaluating statistical techniques used in meta‐analyses were excluded. 14

2.2. Search

The search for published SRs was performed on the following scientific databases, initially from inception to the third week of November 2020 and updated in the last week of February 2022: MEDLINE (via Ovid), Embase (via Ovid), Web of Science Core Collection, Cochrane Database of Systematic Reviews, and American Psychological Association (APA) PsycINFO. The search was restricted to English‐language publications. In line with the objectives of this study, study‐design filters within databases were used, where available, to restrict the search to SRs and MA. The reference lists of included SRs were also searched for potentially relevant publications.

The search terms included keywords, truncations, and subject headings for the key concepts in the review question: SRs and/or MA, methods, and evaluation. Some of the terms were adopted from the search strategy used in a previous review by Robson et al., which reviewed primary studies on methodological approaches used in study selection, data extraction, and quality appraisal steps of SR process. 14 Individual search strategies were developed for respective databases by combining the search terms using appropriate proximity and Boolean operators, along with the related subject headings in order to identify SRs and/or MA. 16 , 17 A senior librarian was consulted in the design of the search terms and strategy. Appendix A presents the detailed search strategies for all five databases.
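The block-building described above (synonyms combined with OR within each concept, and the concept blocks combined with AND) can be mimicked with a small helper. The function and the example terms are illustrative, not the review's actual strategy, and real databases (Ovid, Web of Science, etc.) each have their own operator syntax and field tags:

```python
def build_query(concept_blocks):
    """Combine concept blocks into one Boolean query string.

    Terms within a concept are ORed (synonyms); the concept blocks
    are then ANDed together. Syntax here is generic/illustrative.
    """
    ored = ["(" + " OR ".join(terms) + ")" for terms in concept_blocks]
    return " AND ".join(ored)

# Hypothetical blocks for the three key concepts in the review question
query = build_query([
    ['"systematic review"', '"meta-analysis"'],
    ["method*", "approach*"],
    ["evaluat*", "compar*"],
])
# '("systematic review" OR "meta-analysis") AND (method* OR approach*) AND (evaluat* OR compar*)'
```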

2.3. Study selection and data extraction

Title and abstract screening of references was performed in three steps. First, one reviewer (PV) screened all titles and excluded obviously irrelevant citations, for example, articles on topics not related to SRs and non‐SR publications (such as randomized controlled trials, observational studies, and scoping reviews). Next, from the remaining citations, a random sample of 200 titles and abstracts was screened against the predefined eligibility criteria by two reviewers (PV and MM), independently, in duplicate. Discrepancies were discussed and resolved by consensus. This step ensured that the responses of the two reviewers were calibrated for consistency in the application of the eligibility criteria in the screening process. Finally, all the remaining titles and abstracts were reviewed by a single "calibrated" reviewer (PV) to identify potential full‐text records. Full‐text screening was performed by at least two authors independently (PV screened all the records, and duplicate assessment was conducted by MM, HC, or MG), with discrepancies resolved via discussion or by consulting a third reviewer.
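The calibration exercise described above checks that two reviewers apply the eligibility criteria consistently. The text reports no agreement statistic; Cohen's kappa is one conventional way to quantify such inter-rater agreement, sketched here with made-up screening decisions:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making include/exclude decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginals
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical decisions on 10 of the 200 calibration records
a = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc", "exc", "exc"]
b = ["inc", "exc", "exc", "exc", "exc", "exc", "inc", "exc", "inc", "exc"]
kappa = cohens_kappa(a, b)
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance, which would suggest the eligibility criteria need further clarification before single-reviewer screening.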

Data related to review characteristics, results, key findings, and conclusions were extracted by at least two reviewers independently (PV performed data extraction for all the reviews and duplicate extraction was performed by AP, HC, or MG).

2.4. Quality assessment of included reviews

The quality assessment of the included SRs was performed using the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). The tool consists of a 16‐item checklist addressing critical and noncritical domains. 18 For the purpose of this study, the domain related to MA was reclassified from critical to noncritical, as SRs with and without MA were included. The other six critical domains were used according to the tool guidelines. 18 Two reviewers (PV and AP) independently responded to each of the 16 items in the checklist with either “yes,” “partial yes,” or “no.” Based on the interpretations of the critical and noncritical domains, the overall quality of the review was rated as high, moderate, low, or critically low. 18 Disagreements were resolved through discussion or by consulting a third reviewer.
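The overall AMSTAR 2 rating follows the published interpretation scheme (Shea et al., BMJ 2017): more than one critical flaw yields "critically low," one critical flaw "low," more than one non-critical weakness "moderate," and otherwise "high." A minimal sketch of that mapping (the function name and counting interface are illustrative, not part of the tool):

```python
def amstar2_rating(critical_flaws, noncritical_weaknesses):
    """Map counts of flawed AMSTAR 2 domains to an overall confidence
    rating, following the published interpretation scheme."""
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    if noncritical_weaknesses > 1:
        return "moderate"
    return "high"

# e.g. a review with no critical flaws and one non-critical weakness
rating = amstar2_rating(critical_flaws=0, noncritical_weaknesses=1)
```

Note that in this overview the MA-related critical domain was reclassified as non-critical, which shifts some reviews upward relative to a strict application of the scheme.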

2.5. Data synthesis

To provide an understandable summary of the existing evidence syntheses, the characteristics of the methods evaluated in the included SRs were examined, and key findings were categorized and presented according to the corresponding step in the SR process. The categories of key elements within each step were discussed and agreed upon by the authors. Results of the included reviews were tabulated and summarized descriptively, along with a discussion of any overlap in the primary studies. 15 No quantitative analyses of the data were performed.
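Overlap between reviews can also be quantified; a common metric in overview methodology is the corrected covered area (CCA). Whether the guidance cited here prescribes this exact formula is an assumption, but it illustrates the calculation:

```latex
% Corrected covered area for an overview of c reviews, where
% N = total count of included publications summed across reviews
%     (each duplicate counted) and r = number of unique publications.
\mathrm{CCA} = \frac{N - r}{rc - r}
```

When the reviews share no primary studies, N = r and the CCA is zero; heavily overlapping reviews push the CCA toward 1.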

3. RESULTS

Of the 41,556 unique citations identified through the literature search, 50 full‐text records were reviewed, and nine systematic reviews 14, 19, 20, 21, 22, 23, 24, 25, 26 were deemed eligible for inclusion. The flow of studies through the screening process is presented in Figure 1. A list of excluded studies, with reasons, can be found in Appendix B.

[Figure 1. Study selection flowchart]

3.1. Characteristics of included reviews

Table 1 summarizes the characteristics of the included SRs. The majority of the included reviews (six of nine) were published after 2010. 14, 22, 23, 24, 25, 26 Four of the nine included SRs were Cochrane reviews. 20, 21, 22, 23 The number of databases searched in the reviews ranged from 2 to 14; two reviews searched gray literature sources, 24, 25 and seven reviews included a supplementary search strategy to identify relevant literature. 14, 19, 20, 21, 22, 23, 26 Three of the included SRs (all Cochrane reviews) included an integrated MA. 20, 21, 23

Table 1. Characteristics of included studies

Each entry below lists the author and year; the search strategy (year last searched; number of databases; supplementary searches); the SR design (type of review; number of included studies); the topic and subject area; the SR objectives; and the SR authors' comments on study quality.

Crumley, 2005. Search: 2004; seven databases; four journals handsearched, reference lists checked, and authors contacted. Design: SR; n = 64. Topic: RCTs and CCTs; subject area not specified. Objective: to identify and quantitatively review studies comparing two or more different resources (e.g., databases, Internet, handsearching) used to identify RCTs and CCTs for systematic reviews. Quality comments: most of the studies adequately described reproducible search methods and the expected search yield. Poor quality was mainly due to a lack of rigor in reporting the selection methodology; the majority of studies did not indicate the number of people involved in independently screening the searches or applying the eligibility criteria to identify potentially relevant studies.

Hopewell, 2007 (handsearching). Search: 2002; eight databases; selected journals and published abstracts handsearched, and authors contacted. Design: SR and MA; n = 34 (34 in quantitative analysis). Topic: RCTs; health care. Objective: to systematically review empirical studies comparing the results of handsearching with the results of searching one or more electronic databases to identify reports of randomized trials. Quality comments: the electronic search was designed and carried out appropriately in the majority of studies, while the appropriateness of handsearching was unclear in half the studies because of limited information. The screening methods used in both groups were comparable in most of the studies.

Hopewell, 2007 (gray literature). Search: 2005; two databases; selected journals and published abstracts handsearched, reference lists, citations, and authors contacted. Design: SR and MA; n = 5 (5 in quantitative analysis). Topic: RCTs; health care. Objective: to systematically review research studies investigating the impact of gray literature in meta‐analyses of randomized trials of health care interventions. Quality comments: in the majority of studies, electronic searches were designed and conducted appropriately, and the selection of studies for eligibility was similar for handsearching and database searching. There were insufficient data for most studies to assess the appropriateness of handsearching and investigator agreement on the eligibility of trial reports.

Horsley, 2011. Search: 2008; three databases; reference lists, citations, and authors contacted. Design: SR; n = 12. Topic: any topic or study area. Objective: to investigate the effectiveness of checking reference lists for the identification of additional relevant studies for systematic reviews, where effectiveness is defined as the proportion of relevant studies identified by review authors solely by checking reference lists. Quality comments: the interpretability and generalizability of the included studies were difficult to establish. There was extensive heterogeneity among the studies in the number and type of databases used, and a lack of control in the majority of studies over the quality and comprehensiveness of searching.

Morrison, 2012. Search: 2011; six databases and gray literature. Design: SR; n = 5. Topic: RCTs; conventional medicine. Objective: to examine the impact of English‐language restriction on systematic review‐based meta‐analyses. Quality comments: the included studies were assessed as having good reporting quality and validity of results. Methodological issues were mainly noted in the areas of sample power calculation and distribution of confounders.

Robson, 2019. Search: 2016; three databases; reference lists and authors contacted. Design: SR; n = 37. Topic: not reported. Objective: to identify and summarize studies assessing methodologies for study selection, data abstraction, or quality appraisal in systematic reviews. Quality comments: the quality of the included studies was generally low. Only one study was assessed as having low RoB across all four domains; the majority were assessed as having unclear RoB in one or more domains.

Schmucker, 2017. Search: 2016; four databases; reference lists. Design: SR; n = 10. Topic: study data; medicine. Objective: to assess whether the inclusion of data that were not published at all and/or published only in the gray literature influences pooled effect estimates in meta‐analyses and leads to different interpretations. Quality comments: the majority of the included studies could not be judged on the adequacy of matching or adjusting for confounders of the gray/unpublished data in comparison with published data. The generalizability of results was low or unclear in four research projects.

Morissette, 2011. Search: 2009; five databases; reference lists and authors contacted. Design: SR and MA; n = 6 (5 included in quantitative analysis). Topic: not reported. Objective: to determine whether blinded versus unblinded assessments of risk of bias result in similar or systematically different assessments in studies included in a systematic review. Quality comments: four studies had unclear risk of bias, and two studies had high risk of bias.

O'Mara‐Eves, 2015. Search: 2013; 14 databases and gray literature. Design: SR; n = 44. Topic: not reported. Objective: to gather and present the available research evidence on existing text‐mining methods for the title and abstract screening stage of a systematic review, including the performance metrics used to evaluate these technologies. Quality comments: quality was appraised on two criteria (sampling of test cases and adequacy of the methods description for replication); no study was excluded on the basis of quality (authors were contacted).

SR = systematic review; MA = meta‐analysis; RCT = randomized controlled trial; CCT = controlled clinical trial; N/R = not reported.

The included SRs evaluated 24 unique methodological approaches (26 in total) used across five steps in the SR process; eight SRs evaluated six approaches between them, 19, 20, 21, 22, 23, 24, 25, 26 while one review evaluated 18 approaches. 14 Exclusion of gray or unpublished literature 21, 26 and blinding of reviewers for RoB assessment 14, 23 were each evaluated in two reviews. The included SRs evaluated methods used in five different steps of the SR process: defining the scope of the review (n = 3), literature search (n = 3), study selection (n = 2), data extraction (n = 1), and RoB assessment (n = 2) (Table 2).

Table 2. Summary of findings from reviews evaluating systematic review methods

Each entry below lists the review (author, year), the method assessed, the evaluations/outcomes (P = primary; S = secondary), a summary of the SR authors' conclusions, and the quality of the review, grouped by key element within each SR step.

Excluding study data based on publication status
- Hopewell, 2007 — gray vs. published literature. Outcome: pooled effect estimate. Conclusions: published trials are usually larger and show an overall greater treatment effect than gray trials; excluding trials reported in gray literature from SRs and MAs may exaggerate the results. Quality: moderate.
- Schmucker, 2017 — gray and/or unpublished vs. published literature. Outcomes: P: pooled effect estimate; S: impact on interpretation of MA. Conclusions: excluding unpublished trials had no or only a small effect on the pooled estimates of treatment effects; insufficient evidence to conclude the impact of including unpublished or gray study data on MA conclusions. Quality: moderate.

Excluding study data based on language of publication
- Morrison, 2012 — English‐language vs. non‐English‐language publications. Outcomes: P: bias in summary treatment effects; S: number of included studies and patients, methodological quality, and statistical heterogeneity. Conclusions: no evidence of a systematic bias from the use of English‐language restrictions in systematic review‐based meta‐analyses in conventional medicine; conflicting results on the methodological and reporting quality of English and non‐English‐language RCTs; further research required. Quality: low.

Resources searching
- Crumley, 2005 — searching two or more resources vs. resource‐specific searching. Outcomes: recall and precision. Conclusions: multiple‐source comprehensive searches are necessary to identify all RCTs for a systematic review; for electronic databases, using the Cochrane HSS or a complex search strategy in consultation with a librarian is recommended. Quality: critically low.

Supplementary searching
- Hopewell, 2007 — handsearching only vs. searching one or more electronic databases. Outcome: number of identified randomized trials. Conclusions: handsearching is important for identifying trial reports published in nonindexed journals for inclusion in systematic reviews of health care interventions; where time and resources are limited, the majority of full English‐language trial reports can be identified using a complex search or the Cochrane HSS. Quality: moderate.
- Horsley, 2011 — checking reference lists (no comparison). Outcomes: P: additional yield of checking reference lists; S: additional yield by publication type, study design, or both, and data pertaining to costs. Conclusions: there is some evidence to support checking reference lists to complement the literature search in systematic reviews. Quality: low.

Study selection: reviewer characteristics
- Robson, 2019 — single vs. double reviewer screening; experienced vs. inexperienced reviewers; screening by blinded vs. unblinded reviewers. Outcomes: P: accuracy, reliability, or efficiency of a method; S: factors affecting accuracy or reliability of a method. Conclusions: using two reviewers for screening is recommended (if resources are limited, one reviewer can screen and the other can verify the list of excluded studies); screening must be performed by experienced reviewers; blinding of reviewers during screening is not recommended, as the blinding process was time‐consuming and had little impact on the results of MA. Quality: low.

Study selection: use of technology
- Robson, 2019 — use vs. nonuse of dual computer monitors for screening; use of Google Translate to translate non‐English citations to facilitate screening. Outcomes: as above. Conclusions: no significant differences in the time spent on abstract or full‐text screening with or without dual monitors; Google Translate was used to screen German‐language citations. Quality: low.
- O'Mara‐Eves, 2015 — use of text mining for title and abstract screening. Outcome: any evaluation concerning workload reduction. Conclusions: text‐mining approaches can be used to reduce the number of studies to be screened, increase the rate of screening, improve the workflow with screening prioritization, and replace the second reviewer; the evaluated approaches reported workload savings of between 30% and 70%. Quality: critically low.

Study selection: order of screening
- Robson, 2019 — title‐first screening vs. simultaneous title‐and‐abstract screening. Outcomes: as above. Conclusions: title‐first screening showed no substantial gain in time compared with simultaneous title and abstract screening. Quality: low.

Data extraction: reviewer characteristics
- Robson, 2019 — single vs. double reviewer data extraction; experienced vs. inexperienced reviewers; data extraction by blinded vs. unblinded reviewers. Outcomes: as above. Conclusions: use two reviewers for data extraction, or, if resources preclude, single‐reviewer extraction followed by verification of outcome data by a second reviewer where statistical analysis is planned; experienced reviewers must be used for extracting continuous outcomes data; blinding of reviewers during data extraction is not recommended, as it had no impact on the results of MA. Quality: low.

Data extraction: use of technology
- Robson, 2019 — use vs. nonuse of dual computer monitors; data extraction by two English‐speaking reviewers using Google Translate vs. two reviewers fluent in the respective languages; computer‐assisted vs. double‐reviewer extraction of graphical data. Conclusions: using two computer monitors may improve the efficiency of data extraction; Google Translate provides limited accuracy for data extraction; computer‐assisted programs can be used to extract graphical data. Quality: low.

Data extraction: obtaining additional data
- Robson, 2019 — contacting study authors for additional data. Conclusions: contacting authors to obtain additional relevant data is recommended. Quality: low.

RoB assessment: reviewer characteristics
- Robson, 2019 — quality appraisal by blinded vs. unblinded reviewers. Outcomes: as above. Conclusions: inconsistent results on RoB assessments performed by blinded and unblinded reviewers; blinding reviewers for quality appraisal is not recommended. Quality: low.
- Morissette, 2011 — RoB assessment by blinded vs. unblinded reviewers. Outcomes: P: mean difference and 95% confidence interval between RoB assessment scores; S: qualitative level of agreement, mean RoB scores and measures of variance, and inter‐rater reliability between blinded and unblinded reviewers. Conclusions: findings on the difference between blinded and unblinded RoB assessments are inconsistent across studies; pooled effects show no differences between assessments completed in a blinded or unblinded manner. Quality: moderate.
- Robson, 2019 — experienced vs. inexperienced reviewers for quality appraisal; use vs. nonuse of additional guidance. Outcomes: as above. Conclusions: reviewers performing quality appraisal must be trained and the quality assessment tool pilot tested; providing guidance and decision rules for quality appraisal improved inter‐rater reliability in RoB assessments. Quality: low.

RoB assessment: obtaining additional data
- Robson, 2019 — contacting study authors for additional information, or use of supplementary information available in the published trials, vs. no additional information for quality appraisal. Conclusions: additional data related to study quality obtained by contacting study authors improved the quality assessment. Quality: low.

RoB assessment of qualitative studies
- Robson, 2019 — structured vs. unstructured appraisal of qualitative research studies. Conclusions: use a structured tool if qualitative and quantitative study designs are included in the review; for qualitative reviews, either a structured or unstructured quality appraisal tool can be used. Quality: low.

There was some overlap in the primary studies evaluated in the included SRs on the same topics: Schmucker et al. 26 and Hopewell et al. 21 (n = 4), Hopewell et al. 20 and Crumley et al. 19 (n = 30), and Robson et al. 14 and Morissette et al. 23 (n = 4). There were no conflicting results between any of the identified SRs on the same topic.

3.2. Methodological quality of included reviews

Overall, the quality of the included reviews was assessed as moderate at best (Table 2). The most common critical weakness in the reviews was failure to provide justification for excluding individual studies (four reviews). Detailed quality assessment is provided in Appendix C.

3.3. Evidence on systematic review methods

3.3.1. Methods for defining review scope and eligibility

Two SRs investigated the effect of excluding data obtained from gray or unpublished sources on the pooled effect estimates of MA. 21, 26 Hopewell et al. 21 reviewed five studies that compared the impact of gray literature on the results of a cohort of MAs of RCTs of health care interventions. Gray literature was defined as information published in “print or electronic sources not controlled by commercial or academic publishers.” Findings showed an overall greater treatment effect for published trials than for trials reported in the gray literature. In a more recent review, Schmucker et al. 26 addressed similar objectives by investigating gray and unpublished data in medicine. In addition to gray literature, defined similarly to the previous review by Hopewell et al., the authors also evaluated unpublished data, defined as “supplemental unpublished data related to published trials, data obtained from the Food and Drug Administration or other regulatory websites or postmarketing analyses hidden from the public.” The review found that, in the majority of the MAs, excluding gray literature had little or no effect on the pooled effect estimates. The evidence was too limited to conclude whether data from gray and unpublished literature had an impact on the conclusions of MA. 26

Morrison et al. 24 examined five studies measuring the effect of excluding non‐English‐language RCTs on the summary treatment effects of SR‐based MAs in various fields of conventional medicine. Although none of the included studies reported a major difference in treatment effect estimates between English‐only and non‐English‐inclusive MAs, the review found inconsistent evidence regarding the methodological and reporting quality of English and non‐English trials. 24 As such, there might be a risk of introducing “language bias” when excluding non‐English‐language RCTs. The authors also noted that the number of non‐English trials varies across medical specialties, as does the impact of these trials on MA results. Based on these findings, Morrison et al. 24 conclude that literature searches should include non‐English studies when resources and time are available, to minimize the risk of introducing “language bias.”

3.3.2. Methods for searching studies

Crumley et al. 19 analyzed recall (also referred to as “sensitivity” by some researchers; defined as the “percentage of relevant studies identified by the search”) and precision (defined as the “percentage of studies identified by the search that were relevant”) when searching a single resource to identify randomized controlled trials and controlled clinical trials, as opposed to searching multiple resources. The studies included in their review frequently compared a MEDLINE‐only search with a search involving a combination of other resources. The review found low median recall estimates (median values between 24% and 92%) and very low median precisions (median values between 0% and 49%) for most electronic databases when searched singly. 19 A between‐database comparison, based on the type of search strategy used, showed better recall and precision for complex and Cochrane Highly Sensitive search strategies (CHSSS). In conclusion, the authors emphasize that literature searches for trials in SRs must include multiple sources. 19
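In standard information‐retrieval terms, the two metrics analyzed by Crumley et al. can be written as:

```latex
\text{recall} = \frac{\text{relevant studies retrieved}}{\text{all relevant studies}},
\qquad
\text{precision} = \frac{\text{relevant studies retrieved}}{\text{all studies retrieved}}
```

A single‐database search can therefore look precise while still missing many eligible trials, which is the failure mode motivating the multiple‐source recommendation.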

In an SR comparing handsearching and electronic database searching, Hopewell et al. 20 found that handsearching retrieved more relevant RCTs (retrieval rate of 92%−100%) than searching in a single electronic database (retrieval rates of 67% for PsycINFO/PsycLIT, 55% for MEDLINE, and 49% for Embase). The retrieval rates varied depending on the quality of handsearching, type of electronic search strategy used (e.g., simple, complex or CHSSS), and type of trial reports searched (e.g., full reports, conference abstracts, etc.). The authors concluded that handsearching was particularly important in identifying full trials published in nonindexed journals and in languages other than English, as well as those published as abstracts and letters. 20

The effectiveness of checking reference lists to retrieve additional relevant studies for an SR was investigated by Horsley et al. 22 The review reported that checking reference lists yielded 2.5%–40% more studies depending on the quality and comprehensiveness of the electronic search used. The authors conclude that there is some evidence, although from poor quality studies, to support use of checking reference lists to supplement database searching. 22

3.3.3. Methods for selecting studies

Three approaches relevant to reviewer characteristics, namely the number, experience, and blinding of reviewers involved in the screening process, were highlighted in an SR by Robson et al. 14 Based on the retrieved evidence, the authors recommended that two independent, experienced, and unblinded reviewers be involved in study selection. 14 The review authors also suggested a modified approach for when resources are limited, in which one reviewer screens and the other verifies the list of excluded studies. It should be noted, however, that this suggestion is likely based on the authors' opinion, as none of the studies included in the review provided evidence for it.

Robson et al. 14 also reported on two methods involving the use of technology for screening studies: using Google Translate to translate non‐English articles (for example, German‐language articles) to facilitate screening was considered a viable method, while using two computer monitors for screening did not increase screening efficiency. Title‐first screening was found to be slightly more efficient than simultaneous screening of titles and abstracts, but the time saved was minimal. Therefore, considering that search results are routinely exported as titles and abstracts, Robson et al. 14 recommend screening titles and abstracts simultaneously. However, the authors note that these conclusions were based on a very limited number of low‐quality studies (in most instances, one study per method). 14

3.3.4. Methods for data extraction

Robson et al. 14 examined three approaches for data extraction relevant to reviewer characteristics: the number, experience, and blinding of reviewers (mirroring the study selection step). Although based on limited evidence from a small number of studies, the authors recommended the use of two experienced and unblinded reviewers for data extraction. The experience of the reviewers was suggested to be especially important when extracting continuous (or quantitative) outcomes data. When resources are limited, however, data extraction by one reviewer with verification of the outcomes data by a second reviewer was recommended.

As for the methods involving use of technology, Robson et al. 14 identified limited evidence on the use of two monitors to improve the data extraction efficiency and computer‐assisted programs for graphical data extraction. However, use of Google Translate for data extraction in non‐English articles was not considered to be viable. 14 In the same review, Robson et al. 14 identified evidence supporting contacting authors for obtaining additional relevant data.

3.3.5. Methods for RoB assessment

Two SRs examined the impact of blinding of reviewers for RoB assessments. 14 , 23 Morissette et al. 23 investigated the mean differences between the blinded and unblinded RoB assessment scores and found inconsistent differences among the included studies providing no definitive conclusions. Similar conclusions were drawn in a more recent review by Robson et al., 14 which included four studies on reviewer blinding for RoB assessment that completely overlapped with Morissette et al. 23

The use of experienced reviewers and the provision of additional guidance for RoB assessment were examined by Robson et al. 14 The review concluded that providing reviewers with intensive training, and guidance on assessing studies that report insufficient data, improves RoB assessments. 14 Obtaining additional data relevant to quality assessment by contacting study authors was also found to help RoB assessments, although this was based on limited evidence. For qualitative or mixed‐methods reviews, Robson et al. 14 recommend the use of a structured RoB tool as opposed to an unstructured tool. No SRs were identified on the data synthesis or CoE assessment and reporting steps.

4. DISCUSSION

4.1. Summary of findings

Nine SRs examining 24 unique methods used across five steps in the SR process were identified in this overview. The collective evidence supports some current traditional and modified SR practices, while challenging other approaches. However, the quality of the included reviews was assessed to be moderate at best and in the majority of the included SRs, evidence related to the evaluated methods was obtained from very limited numbers of primary studies. As such, the interpretations from these SRs should be made cautiously.

The evidence gathered from the included SRs corroborates a few current SR approaches. 5 For example, it is important to search multiple resources to identify relevant trials (RCTs and/or CCTs). The resources must include a combination of electronic database searching, handsearching, and checking the reference lists of retrieved articles. 5 However, no SRs were identified that evaluated the impact of the number of electronic databases searched. A recent study by Halladay et al. 27 found that articles on therapeutic interventions retrieved by searching databases other than PubMed (including Embase) contributed only a small amount of information to the MA and had minimal impact on the MA results. The authors concluded that when resources are limited and a large number of studies is expected to be retrieved for the SR or MA, a PubMed‐only search can yield reliable results. 27

Findings from the included SRs also reiterate some methodological modifications currently employed to “expedite” the SR process. 10, 11 For example, excluding non‐English‐language trials and gray/unpublished trials from MA has been shown to have minimal or no impact on the results of MA. 24, 26 However, the efficiency of these SR methods, in terms of the time and resources used, has not been evaluated in the included SRs. 24, 26 Of the included SRs, only two focused on the aspect of efficiency 14, 25 ; O'Mara‐Eves et al. 25 report some evidence supporting the use of text‐mining approaches for title and abstract screening to increase the rate of screening. Moreover, only one included SR 14 considered primary studies that evaluated the reliability (inter‐ or intra‐reviewer consistency) and accuracy (validity when compared against a “gold standard” method) of SR methods. This can be attributed to the limited number of primary studies that evaluated these outcomes. 14 The lack of outcome measures related to reliability, accuracy, and efficiency precludes definitive recommendations on the use of these methods and modifications, and future research should focus on these outcomes.
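Screening prioritization of the kind evaluated by O'Mara‐Eves et al. can be illustrated with a deliberately minimal sketch: rank unscreened abstracts by their lexical similarity to abstracts already judged relevant, so that human screeners see likely includes first. This is an illustrative toy (plain term‐frequency cosine similarity), not any specific tool from the included reviews:

```python
import math
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector over lowercased whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def prioritize(unscreened, relevant_seed):
    """Order unscreened abstracts by similarity to the already-included set."""
    seed = tf_vector(" ".join(relevant_seed))
    return sorted(unscreened, key=lambda text: cosine(tf_vector(text), seed), reverse=True)
```

In practice the evaluated approaches use trained classifiers and richer features; the reported 30%–70% workload savings come from stopping rules applied to such rankings, not from this toy scoring.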

Some evaluated methods may be relevant to multiple steps; for example, exclusions based on publication status (gray/unpublished literature) and language of publication (non‐English language studies) can be outlined in the a priori eligibility criteria or can be incorporated as search limits in the search strategy. SRs included in this overview focused on the effect of study exclusions on pooled treatment effect estimates or MA conclusions. Excluding studies from the search results, after conducting a comprehensive search, based on different eligibility criteria may yield different results when compared to the results obtained when limiting the search itself. 28 Further studies are required to examine this aspect.

Although we acknowledge the lack of standardized quality assessment tools for methodological study designs, we adhered to the Cochrane criteria for identifying SRs in this overview. This was done to ensure consistency in the quality of the included evidence. As a result, we excluded three reviews that did not provide any form of discussion on the quality of the included studies. The methods investigated in these reviews concern supplementary search, 29 data extraction, 12 and screening. 13 However, methods reported in two of these three reviews, by Mathes et al. 12 and Waffenschmidt et al., 13 have also been examined in the SR by Robson et al., 14 which was included in this overview; in most instances (with the exception of one study included in Mathes et al. 12 and Waffenschmidt et al. 13 each), the studies examined in these excluded reviews overlapped with those in the SR by Robson et al. 14

One of the key gaps in knowledge observed in this overview was the dearth of SRs on methods used in the data synthesis component of SRs. Narrative and quantitative syntheses are the two most commonly used approaches for synthesizing data in evidence synthesis. 5 There are some published studies on the proposed indications and implications of these two approaches. 30, 31 These studies found that both data synthesis methods produce comparable results and have their own advantages, suggesting that the choice of method should be based on the purpose of the review. 31 With an increasing number of “expedited” SR approaches (so‐called “rapid reviews”) avoiding MA, 10, 11 further research is warranted in this area to determine the impact of the type of data synthesis on the results of an SR.

4.2. Implications for future research

The findings of this overview highlight several areas of paucity in primary research and evidence synthesis on SR methods. First, no SRs were identified on the methods used in two important components of the SR process: data synthesis, and CoE assessment and reporting. Even for the included SRs, only a limited number of evaluation studies were identified for several methods, indicating that further research is required to corroborate many of the methods recommended in current SR guidelines. 4, 5, 6, 7 Second, some SRs evaluated the impact of methods on the results of quantitative synthesis and on MA conclusions; future research should also focus on the interpretations of SR results. 28, 32 Finally, most of the included SRs addressed specific topics within health care, limiting the generalizability of the findings to other areas. It is important that future studies evaluating evidence syntheses broaden their objectives and include studies on different topics within the field of health care.

4.3. Strengths and limitations

To our knowledge, this is the first overview summarizing current evidence from SRs and MA on different methodological approaches used in several fundamental steps in SR conduct. The overview methodology followed well established guidelines and strict criteria defined for the inclusion of SRs.

There are several limitations related to the nature of the included reviews. Evidence for most of the methods investigated in the included reviews was derived from a limited number of primary studies. Also, the majority of the included SRs may be considered outdated, as they were published (or last updated) more than 5 years ago 33 ; only three of the nine SRs were published in the last 5 years. 14, 25, 26 Therefore, important recent evidence on these topics may not have been included. A substantial number of the included SRs were conducted in the field of health, which may limit the generalizability of the findings. Some method evaluations in the included SRs focused only on quantitative analysis components and MA conclusions; as such, the applicability of these findings to SRs more broadly is still unclear. 28 Considering the methodological nature of our overview, limiting the inclusion of SRs according to the Cochrane criteria might have resulted in missing some relevant evidence from reviews without a quality assessment component. 12, 13, 29 Although the included SRs performed some form of quality appraisal of their included studies, most did not use a standardized RoB tool, which may affect the confidence in their conclusions. Due to the type of outcome measures used for the method evaluations in the primary studies and the included SRs, some of the identified methods have not been validated against a reference standard.

Some limitations of the overview process must also be noted. While our literature search was exhaustive, covering five bibliographic databases and a supplementary search of reference lists, no gray literature or other evidence resources were searched. The search was also conducted primarily in health databases, which might have resulted in missing SRs published in other fields. Moreover, only English-language SRs were included, for feasibility. As the literature search retrieved a large number of citations (i.e., 41,556), title and abstract screening was performed by a single reviewer, calibrated for consistency by a second reviewer, owing to time and resource limitations. This might have resulted in some errors when retrieving and selecting relevant SRs. The SR methods were grouped based on key elements of each recommended SR step, as agreed by the authors; this categorization pertains to the identified set of methods and should be considered subjective.

5. CONCLUSIONS

This overview identified limited SR-level evidence on the various methodological approaches currently employed during five of the seven fundamental steps in the SR process. Limited evidence was also identified on some methodological modifications currently used to expedite the SR process. Overall, the findings highlight the dearth of SRs on SR methodologies, warranting further work to confirm several current recommendations on conventional and expedited SR processes.

CONFLICT OF INTEREST

The authors declare no conflicts of interest.

Supporting information

APPENDIX A: Detailed search strategies

ACKNOWLEDGMENTS

The first author is supported by a La Trobe University Full Fee Research Scholarship and a Graduate Research Scholarship.

Open Access Funding provided by La Trobe University.

Veginadu P, Calache H, Gussy M, Pandian A, Masood M. An overview of methodological approaches in systematic reviews. J Evid Based Med. 2022;15:39–54. doi:10.1111/jebm.12468
