13. Intervention and comparator

What to write

Intervention and comparator with sufficient details to allow replication. If relevant, where additional materials describing the intervention and comparator (eg, intervention manual) can be accessed

Examples

“Each sulfadoxine–pyrimethamine course consisted of three tablets containing 500 mg of sulfadoxine and 25 mg of pyrimethamine (unbranded generic sulfadoxine–pyrimethamine, Medopharm, Chennai, India; quality controlled by Durbin, Hayes, UK) given as a single oral dose for 1 day (appendix p 2). Each dihydroartemisinin–piperaquine course was dosed according to the bodyweight of each participant and consisted of three to five tablets containing 40 mg of dihydroartemisinin and 320 mg of piperaquine (Alfasigma, Bologna, Italy), given orally once a day for 3 consecutive days. Each dose of azithromycin consisted of two tablets containing 500 mg (Universal Corporation, Nairobi, Kenya) given orally once daily for 2 consecutive days (cumulative dose of 2 g) at the same time as the first and second daily dose of dihydroartemisinin–piperaquine at enrolment. The placebo tablets were also provided by Universal Corporation and had the same appearance as active azithromycin (appendix p 2). The first daily dose was administered in the study clinic under the direct supervision of the study staff, combined with a slice of dry bread or a biscuit. The daily doses on the second and third days were self-administered at home at approximately the same time of the day and in a similar manner as the first dose taken under observation in the clinic.”1

“The experimental group received 6 sessions of standard OMT (osteopathic manipulative treatment), and the control group 6 sessions of sham OMT, each session at 2-week intervals. For both experimental and control groups, each session lasted 45 minutes and consisted of 3 periods: (1) interview focusing on pain location, (2) full osteopathic examination, and (3) intervention consisting of standard or sham OMT. Briefly, in both groups, practitioners assessed 7 anatomical regions for dysfunction (lumbar spine, root of mesentery, diaphragm, and atlantooccipital, sacroiliac, temporomandibular, and talocrural joints) and applied sham OMT to all areas or standard OMT to those that were considered dysfunctional. All health care providers were board-certified nonphysician, nonphysiotherapist osteopathic practitioners (Répertoire National de la Certification Professionnelle, niveau 1). They all received a 2-day training according to international standards to deliver both standard and sham OMT. Full descriptions of osteopathic practitioner training and interventions are provided in eAppendices 3 and 4 in Supplement 2. In both groups, pharmacological interventions, nonpharmacological interventions, and spinal surgery were allowed. Cointerventions were self-reported at 3, 6, and 12 months by use of a standardized checklist (eAppendix 5 in Supplement 2).”2

Explanation

Complete reporting of the intervention and comparator is essential to enable readers to understand the study results and translate them to clinical practice. Several studies have shown poor reporting of interventions and comparators in randomised trials.3,8,9,12 Authors should describe each intervention thoroughly, including control interventions or use of a placebo procedure.13,14 The description should provide sufficient detail to allow replication, so that a clinician wanting to use the intervention knows exactly how to administer the intervention/comparator that was evaluated in the trial.4 Key information includes: the different components of the intervention/comparator; how and when it should be administered; the intervention/comparator materials (ie, any physical or informational materials used in the intervention/comparator, including those provided to participants or used in its delivery or in the training of providers) and where they can be accessed (eg, online appendix, URL); the procedure for tailoring the intervention/comparator to individual participants; and how fidelity15 (ie, the extent to which the intervention/comparator was implemented by care providers as planned in the protocol) or adherence15 (ie, the extent to which trial participants implemented the intervention/comparator as planned in the protocol) was assessed or enhanced (see below for more examples).

Drug17

  • Generic name

  • Manufacturer

  • Dose

  • Route of administration (eg, oral, intravenous)

  • Timing

  • Titration regimen if applicable

  • Duration of administration

  • Procedure for tailoring the intervention to individual participants

  • Conditions under which interventions are withheld

  • Whether and how adherence of patients to the intervention was assessed or enhanced

  • Any physical or informational materials used in the intervention and where the materials can be accessed.

Rehabilitation, behavioural treatment, education, and psychotherapy, etc16,17

  • Qualitative information

    • Theory/rationale for essential intervention elements

    • Content of each session

    • Mode of delivery (individual/group, face to face/remote)

    • Whether the treatment is supervised

    • The content of the information exchanged with participants

    • The materials used to give information

    • Procedure for tailoring the intervention to individual participants

    • Whether and how the interventions were standardised

    • Background and expertise of individuals delivering the interventions

    • Whether the same care providers delivered interventions across trial groups

    • Whether and how adherence of individuals delivering the interventions to the protocol was assessed or enhanced

    • Whether and how adherence of patients to the intervention protocol was assessed and/or enhanced

    • Any physical or informational materials used in the intervention and where the materials can be accessed.

  • Quantitative information

    • Intensity of the intervention where appropriate

    • Number of sessions

    • Session schedule

    • Session duration

    • Duration of each main component of each session

    • Overall duration of the intervention.

Surgery, technical procedure, or implantable device16

  • Preoperative care relevant details

  • Intraoperative care relevant details

  • Configuration of any device

  • Postoperative care relevant details

  • Procedure for tailoring the intervention to individual participants

  • Whether and how the interventions were standardised

  • Background and expertise of individuals delivering the interventions

  • Whether the same care providers delivered interventions across trial groups

  • Whether and how adherence of individuals delivering the interventions to the protocol was assessed or enhanced

  • Any physical or informational materials used in the intervention and where the materials can be accessed.

Assessing fidelity and adherence can be complex and varies according to the intervention/comparator (eg, one-off, short term repeated, long term repeated). Various deviations from the protocol can occur. For example, participants might initiate the intervention but then discontinue it completely and permanently after a specific period of time, discontinue it temporarily, reduce the dose, or modify the schedule. If relevant, authors should provide the prespecified definition for classifying participants as treated as planned or not.

In addition, authors should indicate whether criteria were used to guide intervention/comparator modifications and discontinuations and, where applicable, describe these criteria. This information can be particularly important for evaluating the risk of bias due to deviations from the intended interventions,18,19 an important domain of the risk-of-bias tool developed by Cochrane.18 Assessing this domain requires a clear understanding of, and the ability to distinguish between, deviations that occur as planned in the protocol and deviations that arise because of the experimental context.

The research question (ie, explanatory v pragmatic) will affect the standardisation of the intervention/comparator, as well as how adherence or fidelity is assessed or enhanced. In explanatory trials, the aim is to estimate the treatment effect under ideal circumstances. The intervention/comparator is usually highly standardised, with close monitoring of fidelity and adherence and strategies to increase them. In contrast, pragmatic trials aim to determine the treatment effect under usual clinical conditions. The intervention and comparator are usually highly flexible, and measurement of fidelity and adherence is unobtrusive, with no strategies to maintain or improve them. Nevertheless, assessing fidelity and adherence to the intervention/comparator, or at least recording its most important components, is necessary to understand what was actually administered to participants. This is particularly important for complex interventions, where diversity in implementation is expected. For example, in a pragmatic trial assessing a surgical procedure where the procedure is left to the surgeon's choice, investigators should plan to systematically record key elements of preoperative care, anaesthesia, the surgical approach, and postoperative care. This information is essential to provide a relevant description of the intervention actually provided once the trial is completed.

If the control group or intervention group received a combination of interventions, authors should provide a thorough description of each intervention, an explanation of the order in which the interventions were planned to be introduced or withdrawn, and the triggers for their introduction, if applicable. Some complex interventions will require the development of specific documentation (eg, training materials, intervention manuals). Authors should make these available and indicate where they can be accessed.

If the control group is to receive usual care, it is important to describe what that constitutes so that readers can assess whether the comparator differs substantially from usual care in their own setting.20 Various approaches can be used: standardising usual care to be in line with specific guidelines, or asking practitioners to treat control patients according to their own preference, which can result in heterogeneity of the care provided, particularly between centres and over time.21 Usual care can vary substantially across sites and patients, as well as over the duration of the trial. Further, it is important to clarify whether the experimental group also received usual care in addition to the experimental intervention, and what actually differed between the groups. Usual care is frequently incompletely reported. In a review of 214 paediatric trials, descriptions of standard of care were more often incomplete than descriptions of the intervention arms within the same study, as measured by the TIDieR checklist (mean 5.81 (standard deviation (SD) 2.13) v 8.45 (SD 1.39)).11

If the control group is to receive a placebo, specific considerations need to be taken into account. Some evidence shows that placebos are insufficiently described.22 Placebos can take several forms, from pills to saline injections to more complex interventions such as sham procedures (eg, sham surgery) or attention control interventions. Authors should report the same level of detail as required for the intervention: that is, the content of the placebo or its different components, how and when it should be administered, the materials used, the procedure for tailoring the placebo to individual participants, and how fidelity and adherence were assessed or enhanced.13 Complete reporting of the placebo is needed to understand what intervention effect is measured in the trial.23 A network meta-analysis of osteoarthritis trials showed that different placebo interventions (oral, intra-articular, topical, oral and topical) had different effects and can affect the relative effect estimates of active treatments.24

Further, the trial groups could receive different concomitant care in addition to the assigned trial interventions. Concomitant care can affect trial outcomes and bias effect estimates. To facilitate interpretation of study results and risk-of-bias assessments, authors should report any relevant concomitant care that was allowed or prohibited.

Specific guidance has been developed to improve the reporting of interventions, particularly TIDieR,17 TIDieR-Placebo for placebo and sham controls,13 and the CONSORT extensions for non-pharmacological treatments.16 Authors could consult these for more detailed information.

Training

The UK EQUATOR Centre runs training on how to write using reporting guidelines.

Discuss this item

Visit this item's discussion page to ask questions and give feedback.

References

1.
Madanitsa M, Barsosio HC, Minja DTR, et al. Effect of monthly intermittent preventive treatment with dihydroartemisinin–piperaquine with and without azithromycin versus monthly sulfadoxine–pyrimethamine on adverse pregnancy outcomes in Africa: A double-blind randomised, partly placebo-controlled trial. The Lancet. 2023;401(10381):1020-1036. doi:10.1016/s0140-6736(22)02535-1
2.
Nguyen C, Boutron I, Zegarra-Parodi R, et al. Effect of osteopathic manipulative treatment vs sham treatment on activity limitations in patients with nonspecific subacute and chronic low back pain: A randomized clinical trial. JAMA Internal Medicine. 2021;181(5):620. doi:10.1001/jamainternmed.2021.0005
3.
Jacquier I, Boutron I, Moher D, Roy C, Ravaud P. The reporting of randomized clinical trials using a surgical intervention is in need of immediate improvement: A systematic review. Annals of Surgery. 2006;244(5):677-683. doi:10.1097/01.sla.0000242707.44007.80
4.
Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ. 2008;336(7659):1472-1474. doi:10.1136/bmj.39590.732037.47
5.
Duff JM, Leather H, Walden EO, LaPlant KD, George TJ. Adequacy of published oncology randomized controlled trials to provide therapeutic details needed for clinical application. JNCI: Journal of the National Cancer Institute. 2010;102(10):702-705. doi:10.1093/jnci/djq117
6.
Schroter S, Glasziou P, Heneghan C. Quality of descriptions of treatments: A review of published randomised controlled trials. BMJ Open. 2012;2(6):e001978. doi:10.1136/bmjopen-2012-001978
7.
Hoffmann TC, Erueti C, Glasziou PP. Poor description of non-pharmacological interventions: Analysis of consecutive sample of randomised trials. BMJ. 2013;347(sep10 1):f3755-f3755. doi:10.1136/bmj.f3755
8.
Abell B, Glasziou P, Hoffmann T. Reporting and replicating trials of exercise-based cardiac rehabilitation: Do we know what the researchers actually did? Circulation: Cardiovascular Quality and Outcomes. 2015;8(2):187-194. doi:10.1161/circoutcomes.114.001381
9.
Ndounga Diakou LA, Ntoumi F, Ravaud P, Boutron I. Avoidable waste related to inadequate methods and incomplete reporting of interventions: A systematic review of randomized trials performed in sub-Saharan Africa. Trials. 2017;18(1). doi:10.1186/s13063-017-2034-0
10.
Golomb BA, Erickson LC, Koperski S, Sack D, Enkin M, Howick J. What’s in placebos: Who knows? Analysis of randomized, controlled trials. Annals of Internal Medicine. 2010;153(8):532-535. doi:10.7326/0003-4819-153-8-201010190-00010
11.
Yu AM, Balasubramanaiam B, Offringa M, Kelly LE. Reporting of interventions and “standard of care” control arms in pediatric clinical trials: A quantitative analysis. Pediatric Research. 2018;84(3):393-398. doi:10.1038/s41390-018-0019-7
12.
Sanders S, Gibson E, Glasziou P, Hoffmann T. Nondrug interventions for reducing SARS-CoV-2 transmission are frequently incompletely reported. Journal of Clinical Epidemiology. 2023;157:102-109. doi:10.1016/j.jclinepi.2023.02.006
13.
Howick J, Webster RK, Rees JL, et al. TIDieR-placebo: A guide and checklist for reporting placebo and sham controls. PLOS Medicine. 2020;17(9):e1003294. doi:10.1371/journal.pmed.1003294
14.
Phillips WR, Sturgiss E, Hunik L, et al. Improving the reporting of primary care research: An international survey of researchers. The Journal of the American Board of Family Medicine. 2021;34(1):12-21. doi:10.3122/jabfm.2021.01.200266
15.
Dodd S, White IR, Williamson P. Nonadherence to treatment protocol in published randomised controlled trials: A review. Trials. 2012;13(1). doi:10.1186/1745-6215-13-84
16.
Boutron I, Altman DG, Moher D, Schulz KF, Ravaud P. CONSORT statement for randomized trials of nonpharmacologic treatments: A 2017 update and a CONSORT extension for nonpharmacologic trial abstracts. Annals of Internal Medicine. 2017;167(1):40-47. doi:10.7326/m17-0046
17.
Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: Template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348(mar07 3):g1687-g1687. doi:10.1136/bmj.g1687
18.
Sterne JAC, Savović J, Page MJ, et al. RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ. Published online August 2019:l4898. doi:10.1136/bmj.l4898
19.
Higgins JPT, Sterne JAC, Savović J, et al. A revised tool for assessing risk of bias in randomized trials. In: Chandler J, McKenzie J, Boutron I, Welch V, eds. Cochrane Methods. Cochrane Database of Systematic Reviews. 2016;(10 suppl 1):29-32.
20.
Zuidgeest MGP, Welsing PMJ, Thiel GJMW van, et al. Series: Pragmatic trials and real world evidence: Paper 5. Usual care and real life comparators. Journal of Clinical Epidemiology. 2017;90:92-98. doi:10.1016/j.jclinepi.2017.07.001
21.
Turner KM, Huntley A, Yardley T, Dawson S, Dawson S. Defining usual care comparators when designing pragmatic trials of complex health interventions: A methodology review. Trials. 2024;25(1). doi:10.1186/s13063-024-07956-7
22.
Webster RK, Howick J, Hoffmann T, et al. Inadequate description of placebo and sham controls in a systematic review of recent trials. European Journal of Clinical Investigation. 2019;49(11). doi:10.1111/eci.13169
23.
Paterson C, Dieppe P. Characteristic and incidental (placebo) effects in complex interventions such as acupuncture. BMJ. 2005;330(7501):1202-1205. doi:10.1136/bmj.330.7501.1202
24.
Bannuru RR, McAlindon TE, Sullivan MC, Wong JB, Kent DM, Schmid CH. Effectiveness and implications of alternative placebo treatments: A systematic review and network meta-analysis of osteoarthritis trials. Annals of Internal Medicine. 2015;163(5):365-372. doi:10.7326/m15-0623

Reuse

Most of the reporting guidelines and checklists on this website were originally published under permissive licenses that allowed their reuse. Some were published under proprietary licenses, where copyright is held by the publisher and/or the original authors. The original content of the reporting checklists and explanation pages on this website was drawn from these publications with the knowledge and permission of the reporting guideline authors, and subsequently revised in response to feedback and evidence from research as part of an ongoing scholarly dialogue about how best to disseminate reporting guidance. The UK EQUATOR Centre makes no copyright claims over reporting guideline content. Our use of copyrighted content on this website falls under fair use guidelines.

Citation

For attribution, please cite this work as:
Hopewell S, Chan AW, Collins GS, et al. CONSORT 2025 statement: updated guideline for reporting randomised trials. BMJ. 2025;389:e081123. doi:10.1136/bmj-2024-081123

Reporting Guidelines are recommendations to help describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. getting cancer or length of life).

Source.

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

An example would be when researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.

Source.

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") serves as an observational tool, where researchers capture data from a cohort of participants at a single point in time. This approach provides a "snapshot": a brief glimpse into the characteristics or outcomes prevalent within a designated population at that precise point in time. The primary aim here is not to track changes or developments over an extended period but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Source

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Source

Systematic review protocols

TODO

Meta analyses of Observational Studies

TODO

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.

Source

Randomised Trial Protocols

TODO

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behaviour. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Source

Case Reports

TODO

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of the test(s) to correctly identify people with a predefined target condition, or the condition of interest (sensitivity) as well as to clearly identify those without the condition (specificity).

Prediction Models

Prediction model research is used to test the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.

Source

Animal Research

TODO

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Source

Economic Evaluations in Healthcare

TODO

Meta Analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always include a meta-analysis, owing to variations in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data show significant heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than explain them.

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.

Source
