24a. Intervention and comparator as administered

What to write

Intervention and comparator as they were actually administered (eg, where appropriate, who delivered the intervention/comparator, whether participants adhered, whether they were delivered as intended (fidelity))

Examples

“Patients were randomly assigned to the P2Y12 inhibitor monotherapy group (aspirin plus a P2Y12 inhibitor for 3 months and thereafter a P2Y12 inhibitor alone) or to the DAPT group (aspirin plus a P2Y12 inhibitor for at least 12 months) in a 1:1 ratio . . . Overall adherence to the study protocol was 79.3% in the P2Y12 inhibitor monotherapy group and 95.2% in the DAPT group . . . The rates of P2Y12 inhibitor use were similar in both groups: 96.4% at 6 months and 95.0% at 12 months in the P2Y12 inhibitor monotherapy group and 98.1% at 6 months and 96.6% at 12 months in the DAPT group. The median duration of aspirin was 96 days (interquartile range, 88-118 days) in the P2Y12 inhibitor monotherapy group and 365 days (interquartile range, 363-365) in the DAPT group. The proportion of patients receiving aspirin beyond 3 months in the P2Y12 inhibitor monotherapy group was 14.4% at 6 months and 8.9% at 12 months.”1

“Most participants received treatment as allocated [Table 1]. Across intervention groups, high protocol adherence was achieved in terms of the delivery, type, and content for the injection, progressive exercise, and best practice advice interventions. 53 physiotherapists delivered corticosteroid injections to 329 (97%) participants and three doctors to ten (3%) participants. Progressive exercise was delivered by 104 physiotherapists to 339 participants and best practice advice was delivered by 83 physiotherapists to 324 participants. Two physiotherapists swapped groups during the trial because of staffing issues and delivered both interventions. We found no difference in attendance rates between those receiving progressive exercise or best practice advice and those who received the intervention in conjunction with corticosteroid injection [Table 1].”2

Table 1: Example of good reporting: intervention received by treatment group. Data are number (%) of participants unless stated otherwise. Adapted from Hopewell et al, 2021.2 IQR=interquartile range.

| | Best practice advice (n=174) | Injection and best practice advice (n=178) | Progressive exercise (n=174) | Injection and progressive exercise (n=182) |
|---|---|---|---|---|
| Injection received | — | 168 (94) | — | 171 (94) |
| Injection not received, with reasons | — | 10 (6) | — | 11 (6) |
| Received extra injection | — | 0 | — | 2 (1) |
| Completed exercise treatment | 162 (93) | 162 (91) | 138 (79) | 139 (76) |
| Partial exercise completion | — | — | 29 (17) | 33 (18) |
| Median (IQR) number of sessions | 1 (1-1) | 1 (1-1) | 4 (3-6) | 4 (3-5) |
| Completed session 1 | 162 (93) | 162 (91) | 167 (96) | 172 (95) |
| Completed session 2 | — | — | 161 (93) | 160 (88) |
| Participants who received additional sessions | 3 (2) | 5 (3) | 3 (2) | 2 (1) |
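
Summary statistics like those reported above (median durations with IQRs, proportions adherent) can be computed simply. A minimal sketch with invented, non-trial data:

```python
# Sketch: summarising per-participant treatment duration as median (IQR),
# in the style of the examples above. The data below are hypothetical.
from statistics import median, quantiles

# Hypothetical days of treatment for participants in one arm
durations = [88, 90, 96, 102, 118, 95, 91, 100, 89, 97]

q1, q2, q3 = quantiles(durations, n=4)  # quartile cut points
print(f"Median duration: {median(durations):.0f} days (IQR {q1:.0f}-{q3:.0f})")

# Proportion still receiving treatment beyond a 90-day cut-off
beyond = sum(d > 90 for d in durations) / len(durations)
print(f"Receiving treatment beyond 90 days: {beyond:.0%}")
```

Note that `statistics.quantiles` interpolates between data points by default, so reported quartiles may differ slightly from other software.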

Explanation

This new item has been added to the CONSORT 2025 checklist to address the poor reporting of the intervention and comparator in randomised trials.3-8 For example, in a review of 102 randomised trials evaluating bariatric surgery, only 14% reported the intervention as implemented.9 A review of 192 randomised trials assessing pharmacological treatments in six major chronic diseases, published in journals with high impact factors, showed that adherence to medication was reported in only one third of the publications.10 A review of 100 randomised trial reports published in general medical journals with high impact factors highlighted that only 11% of the trials assessing long term interventions and 38% of those assessing short term interventions adequately reported treatment initiation and completeness of treatment.11 A review of 111 randomised trial reports showed that only 46% reported adherence results.12 A review of 94 placebo/sham controlled randomised trials published in high impact journals showed that only 54% reported actual adherence or fidelity.8

There is frequently a gap between the intervention/comparator as planned and described in the trial protocol and as actually administered. This gap may reflect poor fidelity, that is, the extent to which practitioners implement the intervention/comparator as planned in the protocol, and/or poor adherence to treatment, defined as the extent to which participants comply with the care providers' recommendations (eg, taking a drug or placebo, making a behavioural change, doing exercises).11,13 The gap may also reflect the expected diversity in how the intervention/comparator are implemented in clinical practice, particularly for complex interventions.

The gap between the intervention/comparator as planned and as delivered also depends on how the trial was planned. In explanatory trials, the aim is to estimate treatment effect under ideal circumstances. The intervention/comparator are usually highly standardised with close monitoring of fidelity and adherence to interventions and strategies to increase them. Intensive efforts to maximise fidelity and adherence in early phase trials or explanatory trials may lead to unrealistic, inflated estimates of treatment benefit that cannot be reproduced under real life circumstances.14,15 Reporting the results of this monitoring is essential to allow readers to interpret the study results.

In contrast, pragmatic trials aim to determine the treatment effect under usual clinical conditions. The intervention and comparator are usually highly flexible, and measurement of fidelity and adherence is unobtrusive, with no strategies to maintain or improve them.

Reporting how the intervention/comparator were actually administered is nevertheless crucial to allow readers to accurately interpret the trial results. For example, in a large international randomised trial comparing endarterectomy to medical management for patients with symptomatic carotid artery stenosis, there were important differences in the delay in receiving the surgical procedure which impacted the outcomes.16

Authors should provide details on who actually delivered the intervention/comparator (number and expertise of providers), how they were delivered, what was actually administered, participants' adherence to treatment, and care providers' fidelity to the intervention/comparator protocol, where appropriate. Reporting fidelity and adherence can be complex and varies according to the type of intervention or comparator (eg, one-off, short term repeated, long term repeated). Various deviations from the protocol can occur: participants might initiate the intervention/comparator but then discontinue it permanently after a specific period of time, discontinue temporarily, reduce the dose, or modify the schedule.
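
As an illustration only, deviation patterns of this kind could be classified from per-participant dosing records along the following lines. All names and thresholds are hypothetical:

```python
# Sketch: labelling a participant's adherence pattern from weekly dose
# counts. Categories mirror the deviations described in the text above;
# the classification rules are illustrative, not a validated scheme.
from typing import List

def classify_adherence(doses_taken: List[int], doses_planned: List[int]) -> str:
    """Label one participant's adherence pattern from per-week dose counts."""
    if sum(doses_taken) == 0:
        return "never initiated"
    if doses_taken[-1] == 0 and 0 in doses_taken:
        # started, but no doses in the final period: permanent discontinuation
        return "discontinued permanently"
    if 0 in doses_taken:
        # a dose-free period followed by resumption
        return "discontinued temporarily"
    if any(t < p for t, p in zip(doses_taken, doses_planned)):
        return "reduced dose or modified schedule"
    return "fully adherent"

print(classify_adherence([7, 7, 7, 7], [7, 7, 7, 7]))  # fully adherent
print(classify_adherence([7, 0, 7, 7], [7, 7, 7, 7]))  # discontinued temporarily
print(classify_adherence([7, 7, 0, 0], [7, 7, 7, 7]))  # discontinued permanently
```

A real trial would predefine such categories in the statistical analysis plan rather than derive them post hoc.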

More detailed information is available in TIDieR17 and the CONSORT extension for non-pharmacological treatments18 (item 13).

Training

The UK EQUATOR Centre runs training on how to write using reporting guidelines.

Discuss this item

Visit this item's discussion page to ask questions and give feedback.

References

1. Hahn JY, Song YB, Oh JH, et al. Effect of P2Y12 inhibitor monotherapy vs dual antiplatelet therapy on cardiovascular events in patients undergoing percutaneous coronary intervention: The SMART-CHOICE randomized clinical trial. JAMA. 2019;321(24):2428. doi:10.1001/jama.2019.8146
2. Hopewell S, Keene DJ, Marian IR, et al. Progressive exercise compared with best practice advice, with or without corticosteroid injection, for the treatment of patients with rotator cuff disorders (GRASP): A multicentre, pragmatic, 2 × 2 factorial, randomised controlled trial. The Lancet. 2021;398(10298):416-428. doi:10.1016/s0140-6736(21)00846-1
3. Jacquier I, Boutron I, Moher D, Roy C, Ravaud P. The reporting of randomized clinical trials using a surgical intervention is in need of immediate improvement: A systematic review. Annals of Surgery. 2006;244(5):677-683. doi:10.1097/01.sla.0000242707.44007.80
4. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ. 2008;336(7659):1472-1474. doi:10.1136/bmj.39590.732037.47
5. Duff JM, Leather H, Walden EO, LaPlant KD, George TJ. Adequacy of published oncology randomized controlled trials to provide therapeutic details needed for clinical application. JNCI: Journal of the National Cancer Institute. 2010;102(10):702-705. doi:10.1093/jnci/djq117
6. Schroter S, Glasziou P, Heneghan C. Quality of descriptions of treatments: A review of published randomised controlled trials. BMJ Open. 2012;2(6):e001978. doi:10.1136/bmjopen-2012-001978
7. Hoffmann TC, Erueti C, Glasziou PP. Poor description of non-pharmacological interventions: Analysis of consecutive sample of randomised trials. BMJ. 2013;347:f3755. doi:10.1136/bmj.f3755
8. Webster RK, Howick J, Hoffmann T, et al. Inadequate description of placebo and sham controls in a systematic review of recent trials. European Journal of Clinical Investigation. 2019;49(11). doi:10.1111/eci.13169
9. Liu M, Chen J, Wu Q, Zhu W, Zhou X. Adherence to the CONSORT statement and extension for nonpharmacological treatments in randomized controlled trials of bariatric surgery: A systematic survey. Obesity Reviews. 2021;22(8). doi:10.1111/obr.13252
10. Gossec L, Dougados M, Tubach F, Ravaud P. Reporting of adherence to medication in recent randomized controlled trials of 6 chronic diseases: A systematic literature review. The American Journal of the Medical Sciences. 2007;334(4):248-254. doi:10.1097/maj.0b013e318068dde8
11. Dodd S, White IR, Williamson P. Nonadherence to treatment protocol in published randomised controlled trials: A review. Trials. 2012;13(1). doi:10.1186/1745-6215-13-84
12. Zhang Z, Peluso MJ, Gross CP, Viscoli CM, Kernan WN. Adherence reporting in randomized controlled trials. Clinical Trials. 2013;11(2):195-204. doi:10.1177/1740774513512565
13. Persch AC, Page SJ. Protocol development, treatment fidelity, adherence to treatment, and quality control. The American Journal of Occupational Therapy. 2013;67(2):146-153. doi:10.5014/ajot.2013.006213
14. Beets MW, Klinggraeff L von, Burkart S, et al. Impact of risk of generalizability biases in adult obesity interventions: A meta-epidemiological review and meta-analysis. Obesity Reviews. 2021;23(2). doi:10.1111/obr.13369
15. Beets MW, Weaver RG, Ioannidis JPA, et al. Identification and evaluation of risk of generalizability biases in pilot versus efficacy/effectiveness trials: A systematic review and meta-analysis. International Journal of Behavioral Nutrition and Physical Activity. 2020;17(1). doi:10.1186/s12966-020-0918-y
16. Rothwell PM. External validity of randomised controlled trials: "To whom do the results of this trial apply?" The Lancet. 2005;365(9453):82-93. doi:10.1016/s0140-6736(04)17670-8
17. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: Template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. doi:10.1136/bmj.g1687
18. Boutron I, Altman DG, Moher D, Schulz KF, Ravaud P. CONSORT statement for randomized trials of nonpharmacologic treatments: A 2017 update and a CONSORT extension for nonpharmacologic trial abstracts. Annals of Internal Medicine. 2017;167(1):40-47. doi:10.7326/m17-0046

Reuse

Most of the reporting guidelines and checklists on this website were originally published under permissive licenses that allowed their reuse. Some were published with proprietary licenses, where copyright is held by the publisher and/or original authors. The original content of the reporting checklists and explanation pages on this website was drawn from these publications with knowledge and permission from the reporting guideline authors, and subsequently revised in response to feedback and evidence from research as part of an ongoing scholarly dialogue about how best to disseminate reporting guidance. The UK EQUATOR Centre makes no copyright claims over reporting guideline content. Our use of copyrighted content on this website falls under fair use guidelines.

Citation

For attribution, please cite this work as:
Hopewell S, Chan AW, Collins GS, et al. CONSORT 2025 statement: updated guideline for reporting randomised trials. BMJ. 2025;389:e081123. doi:10.1136/bmj-2024-081123

Reporting Guidelines are recommendations to help describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel Prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, by clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. getting cancer or length of life).
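The exposed-versus-unexposed comparison at the heart of a cohort study is often summarised as a risk ratio. A minimal sketch with invented counts:

```python
# Sketch: risk ratio from a hypothetical cohort study. All counts are
# invented for illustration; they are not from any real study.
exposed_cases, exposed_total = 30, 200      # outcome events among the exposed
unexposed_cases, unexposed_total = 10, 200  # outcome events among the unexposed

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
risk_ratio = risk_exposed / risk_unexposed

# A risk ratio above 1 suggests the exposure is associated with the outcome
print(f"Risk in exposed: {risk_exposed:.0%}, in unexposed: {risk_unexposed:.0%}")
print(f"Risk ratio: {risk_ratio:.1f}")
```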

Source.

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

An example would be when researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.
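Because a case-control study samples on disease status rather than exposure, its usual summary measure is the odds ratio. A minimal sketch, with hypothetical counts loosely echoing the mining example above:

```python
# Sketch: odds ratio from a hypothetical 2x2 case-control table.
# Counts are invented; they are not from any real mining study.
exposed_cases, unexposed_cases = 60, 40        # cases: with respiratory disease
exposed_controls, unexposed_controls = 30, 70  # controls: without the disease

# Odds of exposure among cases divided by odds of exposure among controls
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(f"Odds ratio: {odds_ratio:.1f}")
```

An odds ratio above 1 would suggest the exposure is more common among cases than controls.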

Source.

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") is an observational design in which researchers capture data from a group of participants at a single point in time. This approach provides a 'snapshot': a brief glimpse into the characteristics or outcomes prevalent within a designated population at that precise moment. The primary aim is not to track changes or developments over an extended period but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Source

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Source

Systematic review protocols

TODO

Meta analyses of Observational Studies

TODO

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.

Source

Randomised Trial Protocols

TODO

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behaviour. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Source

Case Reports

TODO

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of the test(s) to correctly identify people with a predefined target condition, or the condition of interest (sensitivity) as well as to clearly identify those without the condition (specificity).
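Sensitivity and specificity fall directly out of a 2x2 table of index-test results against the reference standard. A minimal sketch with illustrative counts:

```python
# Sketch: sensitivity and specificity from a hypothetical 2x2 table of
# test results versus a reference standard. Counts are illustrative only.
true_pos, false_neg = 90, 10   # people WITH the target condition
false_pos, true_neg = 20, 80   # people WITHOUT the target condition

# Sensitivity: probability the test is positive when the condition is present
sensitivity = true_pos / (true_pos + false_neg)
# Specificity: probability the test is negative when the condition is absent
specificity = true_neg / (true_neg + false_pos)

print(f"Sensitivity: {sensitivity:.0%}, specificity: {specificity:.0%}")
```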

Prediction Models

Prediction model research is used to test the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.

Source

Animal Research

TODO

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Source

Economic Evaluations in Healthcare

TODO

Meta Analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.
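One common way such pooling is done is inverse-variance fixed-effect weighting, with Cochran's Q and the I² statistic quantifying heterogeneity. A minimal sketch with invented effect sizes:

```python
# Sketch: inverse-variance fixed-effect meta-analysis. The effect sizes
# and standard errors below are invented for illustration.
import math

effects = [0.30, 0.10, 0.25, 0.18]  # eg, log odds ratios from four studies
ses = [0.12, 0.15, 0.10, 0.08]      # their standard errors

weights = [1 / se**2 for se in ses]  # precision weights: 1 / variance
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q and I^2 quantify between-study heterogeneity
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
i2 = max(0.0, (q - (len(effects) - 1)) / q) if q > 0 else 0.0

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f}), I^2 = {i2:.0%}")
```

Real analyses would typically also fit a random-effects model and report confidence intervals, which this sketch omits.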

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always have meta-analyses, due to variations in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data shows significant heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than explain them.

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.

Source
