The PRISMA 2020 reporting guideline for writing a systematic review and meta-analysis

The PRISMA 2020 reporting guideline helps authors write systematic reviews that can be understood and used by a wide audience. This page summarises PRISMA 2020 and how to use it.

PRISMA 2020: The Preferred Reporting Items for Systematic reviews and Meta-Analyses

Version: 1.1. This is the latest version ✅

How to use this reporting guideline

You can use reporting guidelines throughout your research process.

  • When writing: consult the full guidance when writing manuscripts, protocols, and applications. The summary below provides a useful overview, and each item links to fuller guidance with explanations and examples.
  • After writing: Complete a checklist and include it with your journal submission.
  • To learn: Use PRISMA 2020 and our training to develop as an academic and build writing skills.

However you use PRISMA 2020, please cite it.

Applicability criteria

You can use PRISMA 2020 if you are writing a systematic review of studies that evaluate the effects of health interventions, irrespective of the design of the included studies.

You can use this reporting guideline regardless of whether your systematic review included a synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or not (for example, because only one eligible study is identified).

Many of the items are also applicable to:

  • writing systematic reviews evaluating other kinds of interventions (such as social or educational interventions)
  • systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis).

You can also use this reporting guideline to review the reporting of a systematic review, but not for appraising the quality of its design or conduct.

Do not use PRISMA 2020 for:

Other types of systematic review, for which dedicated reporting guidelines should be used instead of PRISMA 2020, including:

  • PRISMA-LSR for writing living systematic reviews
  • PRISMA-ScR for writing systematic scoping reviews
  • PRISMA-DTA for writing systematic reviews of diagnostic test accuracy studies
  • PRISMA-IPD for writing systematic reviews using individual participant data

There are also several extensions that can be used in addition to PRISMA 2020. Other extensions can be found here

For appraising research, consider using the CASP Systematic Reviews with Meta-Analysis of RCTs Checklist.

Summary of guidance

Although you should describe all items below, you can decide how to order and prioritise the items most relevant to your study, findings, context, and readership whilst keeping your writing concise. You can read how PRISMA 2020 was developed in the FAQs.

Item name What to write
 Title and Abstract
1. Title Identify the report as a systematic review.
2. Abstract Include all items from the PRISMA 2020 for Abstracts checklist.
 Introduction
3. Rationale Describe the rationale for the review in the context of existing knowledge.
4. Objectives Provide an explicit statement of the objective(s) or question(s) the review addresses.
 Methods
5. Eligibility criteria Specify the inclusion and exclusion criteria for the review and how studies were grouped for the syntheses.
6. Information sources Specify all databases, registers, websites, organisations, reference lists and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted.
7. Search Present the full search strategies for all databases, registers and websites, including any filters and limits used.
8. Selection process Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and, if applicable, details of automation tools used in the process.
9. Data collection process Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process.
 10. Data items
10a. Outcomes List and define all outcomes for which data were sought. Specify whether all results that were compatible with each outcome domain in each study were sought (e.g. for all measures, time points, analyses), and if not, the methods used to decide which results to collect.
10b. Other variables List and define all other variables for which data were sought (e.g. participant and intervention characteristics, funding sources). Describe any assumptions made about any missing or unclear information.
11. Risk of bias in individual studies Specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process.
12. Effect measures Specify for each outcome the effect measure(s) (e.g. risk ratio, mean difference) used in the synthesis or presentation of results.
 13. Synthesis methods
13a. Deciding which studies were eligible for each synthesis Describe the processes used to decide which studies were eligible for each synthesis (e.g. tabulating the study intervention characteristics and comparing against the planned groups for each synthesis (item #5)).
13b. Data preparation methods Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions.
13c. Methods for tabulating or displaying results Describe any methods used to tabulate or visually display results of individual studies and syntheses.
13d. Synthesis methods Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used.
13e. Methods for exploring heterogeneity Describe any methods used to explore possible causes of heterogeneity among study results (e.g. subgroup analysis, meta-regression).
13f. Sensitivity analyses Describe any sensitivity analyses conducted to assess robustness of the synthesized results.
14. Reporting bias assessment Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases).
15. Certainty assessment Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome.
 Results
 16. Study selection
16a. Results of the search and selection process Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram.
16b. Excluded studies Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded.
17. Study characteristics Cite each included study and present its characteristics.
18. Risk of bias in studies Present assessments of risk of bias for each included study.
19. Results of individual studies For all outcomes, present, for each study: (a) summary statistics for each group (where appropriate) and (b) an effect estimate and its precision (e.g. confidence/credible interval), ideally using structured tables or plots.
 20. Results of syntheses
20a. Summary of studies For each synthesis, briefly summarise the characteristics and risk of bias among contributing studies.
20b. Statistical results Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g. confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect.
20c. Heterogeneity Present results of all investigations of possible causes of heterogeneity among study results.
20d. Sensitivity analyses Present results of all sensitivity analyses conducted to assess the robustness of the synthesized results.
21. Risk of reporting biases in syntheses Present assessments of risk of bias due to missing results (arising from reporting biases) for each synthesis assessed.
22. Certainty of evidence Present assessments of certainty (or confidence) in the body of evidence for each outcome assessed.
 Discussion
 23. Discussion
23a. General interpretation of the results Provide a general interpretation of the results in the context of other evidence.
23b. Limitations of included evidence Discuss any limitations of the evidence included in the review.
23c. Limitations of the review processes Discuss any limitations of the review processes used.
23d. Implications Discuss implications of the results for practice, policy, and future research.
 Other Information
 24. Registration and protocol
24a. Registration Provide registration information for the review, including register name and registration number, or state that the review was not registered.
24b. Protocol Indicate where the review protocol can be accessed, or state that a protocol was not prepared.
24c. Amendments Describe and explain any amendments to information provided at registration or in the protocol.
25. Support Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review.
26. Competing interests Declare any competing interests of review authors.
27. Availability of data, code, and other materials Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review.

Including the appropriate EQUATOR checklist as part of your submission goes a long way towards establishing trust between authors, editors, and reviewers. That's why our editorial team ensures that applicable reporting checklists are completed during peer review; a completed checklist at submission greatly helps editors and peer reviewers assess the work.

Adrian Aldcroft

Editor in Chief, BMJ Open

Systematic review

A review that uses explicit, systematic methods to collate and synthesize findings of studies that address a clearly formulated question.

Source

Statistical synthesis

The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan for a description of each method).

Meta-analysis of effect estimates

A statistical technique used to synthesize results when study effect estimates and their variances are available, yielding a quantitative summary of results.

Source
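
For intuition only (this is standard meta-analysis notation rather than part of the PRISMA 2020 glossary), an inverse-variance weighted meta-analysis combines the study effect estimates and their variances roughly as follows, where $\hat{\theta}_i$ and $v_i$ are the estimate and variance from study $i$ of $k$ studies:

```latex
w_i = \frac{1}{v_i}, \qquad
\hat{\theta}_{\text{pooled}} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad
\operatorname{SE}\!\left(\hat{\theta}_{\text{pooled}}\right) = \sqrt{\frac{1}{\sum_{i=1}^{k} w_i}}
```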

Outcome

An event or measurement collected for participants in a study (such as quality of life, mortality).

Result

The combination of a point estimate (such as a mean difference, risk ratio or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome.

Reports

Documents (paper or electronic) supplying information about a particular study. A report could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information.

Record

The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study

An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses.

Reuse

Most of the reporting guidelines and checklists on this website were originally published under permissive licenses that allowed their reuse. Some were published with proprietary licenses, where copyright is held by the publisher and/or original authors. The original content of the reporting checklists and explanation pages on this website was drawn from these publications with the knowledge and permission of the reporting guideline authors, and subsequently revised in response to feedback and evidence from research as part of an ongoing scholarly dialogue about how best to disseminate reporting guidance. The UK EQUATOR Centre makes no copyright claims over reporting guideline content. Our use of copyrighted content on this website falls under fair use guidelines.

Citation

For attribution, please cite this work as:
Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. PLOS Medicine. 2021;18(3):e1003583. doi:10.1371/journal.pmed.1003583

Reporting guidelines are recommendations to help you describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel Prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, by clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. getting cancer or length of life).

Source.
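
As an illustrative aside (not part of the source definition above), this comparison is often quantified as a risk ratio, where a and b are exposed participants with and without the outcome, and c and d are unexposed participants with and without the outcome:

```latex
\text{RR} = \frac{a/(a+b)}{c/(c+d)}
```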

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

An example would be when researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.

Source.
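
As an illustrative aside (not part of the source definition above), the comparison of exposure between cases and controls is commonly summarised as an odds ratio from a 2×2 table, where a and b are exposed cases and exposed controls, and c and d are unexposed cases and unexposed controls:

```latex
\text{OR} = \frac{a/c}{b/d} = \frac{a\,d}{b\,c}
```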

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") serves as an observational tool, where researchers capture data from a cohort of participants at a single point in time. This approach provides a 'snapshot': a brief glimpse into the characteristics or outcomes prevalent within a designated population at that point in time. The primary aim is not to track changes or developments over an extended period but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Source

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Source

Systematic review protocols

TODO

Meta-analyses of Observational Studies

TODO

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.

Source

Randomised Trial Protocols

TODO

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behaviour. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Source

Case Reports

TODO

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of the test(s) to correctly identify people with a predefined target condition, or the condition of interest (sensitivity) as well as to clearly identify those without the condition (specificity).
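
In the usual 2×2 notation (added here only as an illustration, with TP, FN, TN, and FP denoting true positives, false negatives, true negatives, and false positives), these two quantities are:

```latex
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}
```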

Prediction Models

Prediction model research is used to test the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well-known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.

Source
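
Many prediction models are regression-based. As a hedged sketch (not drawn from the source above), a logistic prediction model estimates the probability of the outcome from predictor values x_1, ..., x_p and fitted coefficients β_0, ..., β_p as:

```latex
\hat{p} = \frac{1}{1 + \exp\!\left(-\left(\beta_0 + \beta_1 x_1 + \dots + \beta_p x_p\right)\right)}
```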

Animal Research

TODO

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Source

Economic Evaluations in Healthcare

TODO

Meta-analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.
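
As a minimal sketch of what such a synthesis involves (the data below are invented and the approach is a simplified fixed-effect model; real reviews typically use dedicated software and often random-effects models), the following Python snippet pools hypothetical log risk ratios by inverse-variance weighting and reports Cochran's Q and I² as heterogeneity measures:

```python
# Illustrative sketch only: fixed-effect, inverse-variance meta-analysis
# of hypothetical log risk ratios, with Cochran's Q and I^2.
import math

# Hypothetical per-study effect estimates (log risk ratios) and standard errors.
effects = [-0.35, -0.10, -0.28, 0.05]
std_errors = [0.12, 0.20, 0.15, 0.25]

weights = [1 / se**2 for se in std_errors]  # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q and I^2 quantify between-study heterogeneity.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled log RR: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
print(f"Pooled RR: {math.exp(pooled):.2f}")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```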

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always include a meta-analysis, because of variation in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data show substantial heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than reveal them.

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.

Source
