20a. Who was blinded

What to write

Who was blinded after assignment to interventions (eg, participants, care providers, outcome assessors, data analysts)

Examples

“Whereas patients and physicians allocated to the intervention group were aware of the allocated arm, outcome assessors and data analysts were kept blinded to the allocation.”1

“Blinding and equipoise were strictly maintained by emphasizing to intervention staff and participants that each diet adheres to healthy principles, and each is advocated by certain experts to be superior for long-term weight-loss. Except for the interventionists (dieticians and behavioural psychologists), investigators and staff were kept blind to diet assignment of the participants. The trial adhered to established procedures to maintain separation between staff that take outcome measurements and staff that deliver the intervention. Staff members who obtained outcome measurements were not informed of the diet group assignment. Intervention staff, dieticians and behavioural psychologists who delivered the intervention did not take outcome measurements. All investigators, staff, and participants were kept masked to outcome measurements and trial results.”2

“This was a double-blind study with limited access to the randomisation code . . . The treatment each patient received was not disclosed to the investigator, study site staff, patient, sponsor personnel involved with the conduct of the study (with the exception of the clinical supply staff and designated safety staff), or study vendors.”3

“Physicians, patients, nurses responsible for referring the patients, the statistician, also the investigators who rated the patients and administered the drugs, were all blinded to the allocation.”4

Explanation

The term “blinding” (masking) refers to withholding information about the assigned interventions from people involved in the trial who may potentially be influenced by this knowledge. Blinding is an important safeguard against bias, particularly when assessing subjective outcomes.5

Benjamin Franklin has been credited as being the first to use blinding in a scientific experiment.6 He blindfolded participants so they would not know when he was applying mesmerism (a popular healing technique of the 18th century) and in so doing demonstrated that mesmerism was a sham. Since then, the scientific community has widely recognised the power of blinding to reduce bias, and it has remained a commonly used strategy in scientific experiments.

The section on blinding terminology below defines the groups of individuals (ie, participants, healthcare providers, data collectors, outcome assessors, and data analysts) that can potentially introduce bias into a trial through knowledge of the treatment assignments. Participants may respond differently if they are aware of their treatment assignment (eg, respond more favourably when they receive the new treatment).5 Lack of blinding may also influence adherence to the intervention, use of co-interventions, and risk of dropping out of the trial.

Blinding terminology

For a technical term to be useful, its use and interpretation must be consistent. Authors of trials commonly use the term “double blind,” and less commonly the terms “single blind” or “triple blind.” A problem with this lexicon is that there is great variability in clinician interpretations and epidemiological textbook definitions of these terms.7 Moreover, a study of 200 randomised trials reported as double blind demonstrated 18 different combinations of groups actually blinded when the authors of these trials were surveyed, and approximately one in every five of these trials—reported as double blind—did not blind participants, healthcare providers, or data collectors.8

This research demonstrates that these terms are ambiguous and, as such, authors and editors should abandon their use in isolation without defining them. Authors should instead explicitly report the blinding status of everyone involved whose blinding may influence the validity of the trial.

The healthcare providers include all personnel (eg, physicians, chiropractors, physiotherapists, nurses) who care for the participants during the trial. Data collectors are the individuals who collect data on the trial outcomes. Outcome assessors are the individuals who determine whether a participant did experience the outcomes of interest.

Some researchers have also advocated blinding and reporting the blinding status of the data monitoring committee and the manuscript writers.9 Blinding of these groups is uncommon and the value of blinding them is debated.10

Sometimes the individuals fulfilling one role in a trial (eg, the healthcare providers) are the same individuals fulfilling another role (eg, the data collectors). Even when this is the case, the authors should state the blinding status of each group to allow readers to judge the validity of the trial.

Unblinded healthcare providers may introduce similar biases, and unblinded data collectors may differentially assess outcomes (eg, frequency or timing), repeat measurements of abnormal findings, or provide encouragement during performance testing. Unblinded outcome assessors may differentially assess subjective outcomes, and unblinded data analysts may introduce bias through the choice of analytical strategies, such as the selection of favourable time points or outcomes and by decisions to remove patients from the analyses. These biases have been well documented.5,9,11-14

Blinding, unlike allocation concealment (item 18), may not always be appropriate or possible. In pragmatic trials (trials that aim to mirror real life as closely as possible so as to understand real world effectiveness), blinding of participants and healthcare providers would decrease the pragmatism of the trial, since patients in real life are not blinded.15 An example where blinding is impossible is a trial comparing levels of pain associated with sampling blood from the ear or thumb.16

However, in randomised trials for which blinding is possible, lack of blinding has usually been associated with empirical evidence of exaggerated treatment effect estimates.5,17-22 Blinding is particularly important when outcome measures involve some subjectivity, such as assessment of pain. Yet blinding may not be as important in certain fields or with certain outcomes. For example, blinding of data collectors and outcome assessors is unlikely to matter for objective outcomes, such as death from any cause. Indeed, some methodological investigations have not found lack of blinding to be associated with empirical evidence of bias in treatment effect estimates.23-29 Even then, however, lack of participant or healthcare provider blinding can lead to other problems, such as differential attrition.30

In certain trials, especially surgical trials, blinding of participants and healthcare providers is often difficult or impossible, but blinding of data collectors and outcome assessors for both benefits and harms is often achievable and recommended. For example, lesions can be photographed before and after treatment and assessed by an external observer.31 Regardless of whether blinding is possible, authors can and should always state who was blinded (ie, participants, healthcare providers, data collectors, data analysts, and/or outcome assessors).32,33

However, authors frequently do not report whether blinding was used.34,35 For example, reports of 51% of 506 trials in cystic fibrosis,36 33% of 196 trials in rheumatoid arthritis,37 and 38% of 68 trials in dermatology38 did not state whether blinding was used. Similarly, a more recent review found that the reports of 38% of 622 trials in high impact anaesthesiology journals did not explicitly describe the trial as blinded or non-blinded.39 Moreover, when some form of blinding was described, the most commonly used term was the ambiguous “double blind.”39 Authors should explicitly state who was blinded, but only 14% of 622 trials explicitly reported whether the three key groups of individuals—that is, the participants, healthcare providers, and data collectors—were blinded or not.39 The rate did improve from 10% to 26% over the years of that review, but more improvement is needed. Until authors of trials improve their reporting of blinding, readers will have difficulty in judging its adequacy.

The term “masking” is sometimes used in preference to “blinding” to avoid confusion with the medical condition of being without sight. However, “blinding” in its methodological sense appears to be more universally understood worldwide and to be generally preferred for reporting clinical trials.30,31,40

Training

The UK EQUATOR Centre runs training on how to write using reporting guidelines.

Discuss this item

Visit this item’s discussion page to ask questions and give feedback.

References

1.
Smith SA, Shah ND, Bryant SC, et al. Chronic care model and shared care in diabetes: Randomized trial of an electronic decision support system. Mayo Clinic Proceedings. 2008;83(7):747-757. doi:10.4065/83.7.747
2.
Sacks FM, Bray GA, Carey VJ, et al. Comparison of weight-loss diets with different compositions of fat, protein, and carbohydrates. New England Journal of Medicine. 2009;360(9):859-873. doi:10.1056/nejmoa0804748
3.
Sandborn WJ, Vermeire S, Peyrin-Biroulet L, et al. Etrasimod as induction and maintenance therapy for ulcerative colitis (ELEVATE): Two randomised, double-blind, placebo-controlled, phase 3 studies. The Lancet. 2023;401(10383):1159-1171. doi:10.1016/s0140-6736(23)00061-2
4.
Padoei F, Mamsharifi P, Hazegh P, et al. The therapeutic effect of n‐acetylcysteine as an add‐on to methadone maintenance therapy medication in outpatients with substance use disorders: A randomized, double‐blind, placebo‐controlled clinical trial. Brain and Behavior. 2022;13(1). doi:10.1002/brb3.2823
5.
Wood L, Egger M, Gluud LL, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: Meta-epidemiological study. BMJ. 2008;336(7644):601-605. doi:10.1136/bmj.39465.451748.ad
6.
Kaptchuk TJ. Intentional ignorance: A history of blind assessment and placebo controls in medicine. Bulletin of the History of Medicine. 1998;72(3):389-433. doi:10.1353/bhm.1998.0159
7.
Devereaux PJ. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001;285(15):2000. doi:10.1001/jama.285.15.2000
8.
Haahr MT, Hróbjartsson A. Who is blinded in randomized clinical trials? A study of 200 trials and a survey of authors. Clinical Trials. 2006;3(4):360-365. doi:10.1177/1740774506069153
9.
Gøtzsche PC. Blinding during data analysis and writing of manuscripts. Controlled Clinical Trials. 1996;17(4):285-290. doi:10.1016/0197-2456(95)00263-4
10.
Meinert CL. Masked monitoring in clinical trials — blind stupidity? New England Journal of Medicine. 1998;338(19):1381-1382. doi:10.1056/nejm199805073381911
11.
Gøtzsche PC. Believability of relative risks and odds ratios in abstracts: Cross sectional study. BMJ. 2006;333(7561):231-234. doi:10.1136/bmj.38895.410451.79
12.
Guyatt GH, Pugsley SO, Sullivan MJ, et al. Effect of encouragement on walking test performance. Thorax. 1984;39(11):818-822. doi:10.1136/thx.39.11.818
13.
Karlowski TR. Ascorbic acid for the common cold: A prophylactic and therapeutic trial. JAMA. 1975;231(10):1038. doi:10.1001/jama.1975.03240220018013
14.
Noseworthy JH, Ebers GC, Vandervoort MK, Farquhar RE, Yetisir E, Roberts R. The impact of blinding on the results of a randomized, placebo‐controlled multiple sclerosis clinical trial. Neurology. 1994;44(1):16-16. doi:10.1212/wnl.44.1.16
15.
Janiaud P, Dal-Ré R, Ioannidis JPA. Assessment of pragmatism in recently published randomized clinical trials. JAMA Internal Medicine. 2018;178(9):1278. doi:10.1001/jamainternmed.2018.3321
16.
Carley SD, Libetta C, Flavin B, Butler J, Tong N, Sammy I. An open prospective randomised trial to reduce the pain of blood glucose testing: Ear versus thumb. BMJ. 2000;321(7252):20-20. doi:10.1136/bmj.321.7252.20
17.
Savović J, Turner RM, Mawdsley D, et al. Association between risk-of-bias assessments and results of randomized trials in cochrane reviews: The ROBES meta-epidemiologic study. American Journal of Epidemiology. 2017;187(5):1113-1122. doi:10.1093/aje/kwx344
18.
Martin GL, Trioux T, Gaudry S, Tubach F, Hajage D, Dechartres A. Association between lack of blinding and mortality results in critical care randomized controlled trials: A meta-epidemiological study*. Critical Care Medicine. 2021;49(10):1800-1811. doi:10.1097/ccm.0000000000005065
19.
Hrobjartsson A, Thomsen ASS, Emanuelsson F, et al. Observer bias in randomised clinical trials with binary outcomes: Systematic review of trials with both blinded and non-blinded outcome assessors. BMJ. 2012;344(feb27 2):e1119-e1119. doi:10.1136/bmj.e1119
20.
Hróbjartsson A, Thomsen ASS, Emanuelsson F, et al. Observer bias in randomized clinical trials with measurement scale outcomes: A systematic review of trials with both blinded and nonblinded assessors. Canadian Medical Association Journal. 2013;185(4):E201-E211. doi:10.1503/cmaj.120744
21.
Hróbjartsson A, Thomsen ASS, Emanuelsson F, et al. Observer bias in randomized clinical trials with time-to-event outcomes: Systematic review of trials with both blinded and non-blinded outcome assessors. International Journal of Epidemiology. 2014;43(3):937-948. doi:10.1093/ije/dyt270
22.
Hróbjartsson A, Emanuelsson F, Skou Thomsen AS, Hilden J, Brorson S. Bias due to lack of patient blinding in clinical trials. A systematic review of trials randomizing patients to blind and nonblind sub-studies. International Journal of Epidemiology. 2014;43(4):1272-1283. doi:10.1093/ije/dyu115
23.
Bialy L, Vandermeer B, Lacaze‐Masmonteil T, Dryden DM, Hartling L. A meta‐epidemiological study to examine the association between bias and treatment effects in neonatal trials. Evidence-Based Child Health: A Cochrane Review Journal. 2014;9(4):1052-1059. doi:10.1002/ebch.1985
24.
Armijo-Olivo S, Fuentes J, Costa BR da, Saltaji H, Ha C, Cummings GG. Blinding in physical therapy trials and its association with treatment effects: A meta-epidemiological study. American Journal of Physical Medicine & Rehabilitation. 2017;96(1):34-44. doi:10.1097/phm.0000000000000521
25.
Armijo-Olivo S, Dennett L, Arienti C, et al. Blinding in rehabilitation research: Empirical evidence on the association between blinding and treatment effect estimates. American Journal of Physical Medicine & Rehabilitation. 2020;99(3):198-209. doi:10.1097/phm.0000000000001377
26.
Zeraatkar D, Pitre T, Diaz-Martinez JP, et al. Impact of allocation concealment and blinding in trials addressing treatments for COVID-19: A methods study. American Journal of Epidemiology. 2023;192(10):1678-1687. doi:10.1093/aje/kwad131
27.
Moustgaard H, Clayton GL, Jones HE, et al. Impact of blinding on estimated treatment effects in randomised clinical trials: Meta-epidemiological study. BMJ. Published online January 2020:l6802. doi:10.1136/bmj.l6802
28.
Mouillet G, Efficace F, Thiery‐Vuillemin A, et al. Investigating the impact of open label design on patient‐reported outcome results in prostate cancer randomized controlled trials. Cancer Medicine. 2020;9(20):7363-7374. doi:10.1002/cam4.3335
29.
Anthon CT, Granholm A, Perner A, Laake JH, Møller MH. No firm evidence that lack of blinding affects estimates of mortality in randomized clinical trials of intensive care interventions: A systematic review and meta-analysis. Journal of Clinical Epidemiology. 2018;100:71-81. doi:10.1016/j.jclinepi.2018.04.016
30.
Schulz KF, Chalmers I, Altman DG. The landscape and lexicon of blinding in randomized trials. Annals of Internal Medicine. 2002;136(3):254-259. doi:10.7326/0003-4819-136-3-200202050-00022
31.
Day SJ. Statistics notes: Blinding in clinical trials and other studies. BMJ. 2000;321(7259):504-504. doi:10.1136/bmj.321.7259.504
32.
Boutron I, Estellat C, Guittet L, et al. Methods of blinding in reports of randomized controlled trials assessing pharmacologic treatments: A systematic review. Vallance P, ed. PLoS Medicine. 2006;3(10):e425. doi:10.1371/journal.pmed.0030425
33.
Boutron I, Guittet L, Estellat C, Moher D, Hróbjartsson A, Ravaud P. Reporting methods of blinding in randomized trials assessing nonpharmacological treatments. Ford I, ed. PLoS Medicine. 2007;4(2):e61. doi:10.1371/journal.pmed.0040061
34.
Montori VM, Bhandari M, Devereaux PJ, Manns BJ, Ghali WA, Guyatt GH. In the dark the reporting of blinding status in randomized controlled trials. Journal of Clinical Epidemiology. 2002;55(8):787-790. doi:10.1016/s0895-4356(02)00446-8
35.
Dechartres A, Trinquart L, Atal I, et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in cochrane reviews: Research on research study. BMJ. Published online June 2017:j2490. doi:10.1136/bmj.j2490
36.
Cheng K, Smyth RL, Motley J, O’Hea U, Ashby D. Randomized controlled trials in cystic fibrosis (1966-1997) categorized by time, design, and intervention. Pediatric Pulmonology. 2000;29(1):1-7. doi:10.1002/(sici)1099-0496(200001)29:1<1::aid-ppul1>3.0.co;2-1
37.
Gøtzsche PC. Methodology and overt and hidden bias in reports of 196 double-blind trials of nonsteroidal antiinflammatory drugs in rheumatoid arthritis. Controlled Clinical Trials. 1989;10(1):31-56. doi:10.1016/0197-2456(89)90017-2
38.
Adetugbo K, Williams H. How well are randomized controlled trials reported in the dermatology literature? Archives of Dermatology. 2000;136(3). doi:10.1001/archderm.136.3.381
39.
Penić A, Begić D, Balajić K, Kowalski M, Marušić A, Puljak L. Definitions of blinding in randomised controlled trials of interventions published in high-impact anaesthesiology journals: A methodological study and survey of authors. BMJ Open. 2020;10(4):e035168. doi:10.1136/bmjopen-2019-035168
40.
Lang. 2000;2.

Reuse

Most of the reporting guidelines and checklists on this website were originally published under permissive licenses that allowed their reuse. Some were published with proprietary licenses, where copyright is held by the publisher and/or original authors. The original content of the reporting checklists and explanation pages on this website was drawn from these publications with knowledge and permission from the reporting guideline authors, and subsequently revised in response to feedback and evidence from research as part of an ongoing scholarly dialogue about how best to disseminate reporting guidance. The UK EQUATOR Centre makes no copyright claims over reporting guideline content. Our use of copyrighted content on this website falls under fair use guidelines.

Citation

For attribution, please cite this work as:
Hopewell S, Chan AW, Collins GS, et al. CONSORT 2025 statement: updated guideline for reporting randomised trials. BMJ. 2025;389:e081123. doi:10.1136/bmj-2024-081123

Reporting Guidelines are recommendations to help describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, by clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. getting cancer or length of life).

Source.
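The comparison of outcomes between exposed and unexposed groups described above is often summarised as a risk ratio (relative risk). A minimal sketch follows; the function name and the counts are hypothetical, for illustration only.

```python
def risk_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Risk ratio (relative risk) comparing an exposed cohort with an unexposed cohort."""
    risk_exposed = exposed_events / exposed_total        # risk in the exposed group
    risk_unexposed = unexposed_events / unexposed_total  # risk in the unexposed group
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 20 of 100 exposed and 10 of 200 unexposed people develop the outcome
rr = risk_ratio(20, 100, 10, 200)  # (0.20 / 0.05) = 4.0
```

A risk ratio of 4.0 would mean the outcome was four times as common in the exposed group, suggesting (but not proving) an association between exposure and outcome.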

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

An example would be when researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.

Source.
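Because case-control studies sample by disease status rather than by exposure, the comparison of exposure between cases and controls is typically expressed as an odds ratio. A minimal sketch, with hypothetical counts:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio comparing the odds of past exposure in cases versus controls."""
    odds_cases = exposed_cases / unexposed_cases          # exposure odds among cases
    odds_controls = exposed_controls / unexposed_controls  # exposure odds among controls
    return odds_cases / odds_controls

# Hypothetical study: 40 of 100 cases and 20 of 100 controls were exposed
or_est = odds_ratio(40, 60, 20, 80)  # (40/60) / (20/80) ≈ 2.67
```

An odds ratio above 1 would indicate that cases were more likely than controls to have been exposed, consistent with the exposure being a possible risk factor.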

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") is an observational design in which researchers capture data from a group of participants at a single point in time. This approach provides a 'snapshot': a brief glimpse into the characteristics or outcomes prevalent within a designated population at that precise moment. The primary aim is not to track changes or developments over an extended period but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Source

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Source

Systematic review protocols

TODO

Meta-analyses of Observational Studies

TODO

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.

Source

Randomised Trial Protocols

TODO

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behaviour. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Source

Case Reports

TODO

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of the test(s) to correctly identify people with a predefined target condition, or the condition of interest (sensitivity) as well as to clearly identify those without the condition (specificity).
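The two quantities defined above are computed from a 2x2 table cross-classifying the index test result against the reference standard. A minimal sketch, using entirely hypothetical counts:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic accuracy table.

    tp/fn: people WITH the target condition who test positive/negative
    fp/tn: people WITHOUT the condition who test positive/negative
    """
    sensitivity = tp / (tp + fn)  # proportion of diseased people correctly identified
    specificity = tn / (tn + fp)  # proportion of non-diseased people correctly identified
    return sensitivity, specificity

# Hypothetical study: 90 true positives, 10 false negatives,
# 20 false positives, 180 true negatives
sens, spec = sensitivity_specificity(tp=90, fp=20, fn=10, tn=180)  # 0.90, 0.90
```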

Prediction Models

Prediction model research is used to test the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.

Source
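Many of the prediction models named above estimate a probability via a logistic regression equation. The sketch below shows the generic form only; the intercept, coefficient, and predictor value are made up for illustration and do not correspond to any published model.

```python
import math

def predicted_risk(intercept, coefficients, values):
    """Predicted probability from a logistic regression prediction model.

    risk = 1 / (1 + exp(-(intercept + sum(coefficient * predictor value))))
    """
    linear_predictor = intercept + sum(c * v for c, v in zip(coefficients, values))
    return 1 / (1 + math.exp(-linear_predictor))

# Entirely hypothetical model: intercept -2.0, one predictor (eg, age) with coefficient 0.03
risk = predicted_risk(intercept=-2.0, coefficients=[0.03], values=[50])
```

In practice the estimated probability is compared against a clinical threshold to support decisions such as referral for further testing.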

Animal Research

TODO

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Source

Economic Evaluations in Healthcare

TODO

Meta-analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.
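The pooling and heterogeneity assessment described above can be sketched numerically. The following is a minimal illustration of a fixed-effect (inverse-variance) meta-analysis with Cochran's Q and the I² statistic; the function name and the study estimates are hypothetical, and real analyses would more often use a random-effects model and dedicated software.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance (fixed-effect) pooling of study effect estimates.

    effects: per-study effect estimates (eg, log odds ratios)
    std_errors: their standard errors
    Returns (pooled effect, pooled standard error, Cochran's Q, I-squared %).
    """
    weights = [1 / se**2 for se in std_errors]  # precision weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    # Cochran's Q and I-squared quantify between-study heterogeneity
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled_se, q, i_squared

# Three hypothetical studies (log odds ratios and standard errors)
pooled, se, q, i2 = fixed_effect_meta([0.20, 0.50, 0.35], [0.10, 0.20, 0.15])
```

A high I² (conventionally above roughly 50%) signals substantial heterogeneity, which is when techniques such as subgroup analysis or meta-regression become relevant.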

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always include a meta-analysis, owing to variations in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data show significant heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than clarify them.

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.

Source
