27. Harms

What to write

All harms or unintended events in each group

Example

“Few women vomited after drug administration. 12 (0.2%) of 6685 sulfadoxine–pyrimethamine, 19 (0.3%) of 7014 dihydroartemisinin–piperaquine, and 23 (0.3%) of 6849 dihydroartemisinin–piperaquine plus azithromycin treatment courses were vomited within 30 min [Table 1]. One (0.1%) of 1552 women in the sulfadoxine–pyrimethamine group, two (0.1%) of 1558 in the dihydroartemisinin–piperaquine group, and four (0.3%) of 1556 in the dihydroartemisinin–piperaquine plus azithromycin group vomited after their first course of treatment (the only course when azithromycin was coadministered with dihydroartemisinin–piperaquine; Table 1). All three regimens were well tolerated ([Table 1] . . . ), but vomiting, nausea, and dizziness were more common in the first 3 days after dihydroartemisinin–piperaquine (13 [3.2%], 14 [3.4%], and 15 [3.7%] of 410 women visited at home, respectively) than sulfadoxine–pyrimethamine (one [0.3%], one [0.3%], and zero [0%] of 384 women visited at home, respectively; appendix pp 20–21). The addition of azithromycin to dihydroartemisinin–piperaquine was associated with significantly more vomiting than with dihydroartemisinin–piperaquine alone (p=0.0033; [Table 1]).”1

Table 1: Example of good reporting: safety and tolerability endpoints (incidence measures: events, incidence per person-year at risk, or prevalence). Data are number (%) unless stated otherwise. Adapted from Madanitsa et al.1

| Endpoint | Sulfadoxine-pyrimethamine | Dihydroartemisinin-piperaquine | Dihydroartemisinin-piperaquine plus azithromycin |
| --- | --- | --- | --- |
| Adverse events | | | |
| Dizziness | 4 (0.8) | 51 (9.5) | 38 (7.2) |
| Vomiting | 4 (0.8) | 41 (7.7) | 71 (13.5) |
| Nausea | 2 (0.4) | 44 (8.2) | 35 (6.7) |
| Abdominal pain | 6 (1.1) | 11 (2.1) | 14 (2.7) |
| Diarrhoea | 2 (0.4) | 5 (0.9) | 6 (1.1) |
| Headache | 12 (2.3) | 17 (3.2) | 18 (3.4) |
| Rash | 0 (0.0) | 2 (0.4) | 0 (0.0) |
| Serious adverse events and grade 3-4 adverse events (in pregnant women) | | | |
| Any | 95 (17.7) | 79 (14.8) | 92 (16.9) |
| Maternal mortality* | 1/1553 (0.1) | 2/1561 (0.1) | 3/1557 (0.2) |
| By system organ class | | | |
| Blood and lymphatic system disorders | 2 (0.4) | 0 (0.0) | 2 (0.4) |

*Number (%)/total number.

Explanation

Readers need information about the harms as well as the benefits of interventions to make rational and balanced decisions. Randomised trials offer an excellent opportunity for providing harms data, although they cannot detect differences in uncommon or rare harms between treatment groups. The existence and nature of adverse effects can have a major impact on whether a particular intervention will be deemed acceptable and useful. Not all reported adverse events observed during a trial are necessarily a consequence of the intervention; some may be a consequence of the condition being treated. Nevertheless, they all need to be reported.

Many reports of randomised trials provide inadequate information on harms. A comparison between harm data submitted to the trials database of the National Cancer Institute, which sponsored the trials, and the information reported in journal articles found that low grade adverse events were under-reported in journal articles. High grade events (Common Toxicity Criteria grades 3 to 5) were reported inconsistently in the articles and the information regarding attribution to investigational drugs was incomplete.2 Moreover, a review of trials published in six general medical journals in 2006 to 2007 found that while 89% of 133 reports mentioned adverse events, no information on severe adverse events and withdrawal of patients owing to an adverse event was given in 27% and 48% of articles, respectively.3 In a later review of 196 randomised trials of invasive pain treatments published in six major journals, 76% provided the denominators for analyses on harms and 85% reported the absolute risk per arm and per adverse event type, grade, and seriousness, and presented appropriate metrics.4

For non-systematically assessed harms, reporting can be more complex as the information is not standardised. A common approach is to code the events declared by participants. Authors should report the coding system used, whether coding was prespecified in the protocol or the statistical analysis plan or was done post hoc, and whether coding was performed by researchers blinded to the allocated treatment. In addition, there is a risk of under-reporting and selective non-reporting of harms, particularly for non-systematically assessed harms. A reanalysis of individual participant data from six randomised trials of gabapentin found evidence of important harms that were not disclosed in the published reports but were identified after data sharing and reanalysis.5 Sharing of de-identified individual participant data may be needed to synthesise this information adequately, for example for inclusion in a systematic review.6-8
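
To illustrate the coding step described above, here is a minimal sketch of mapping verbatim, participant-reported harm terms to coded preferred terms. The dictionary and terms are hypothetical stand-ins; real trials would typically use a standard terminology such as MedDRA and a prespecified, ideally blinded, coding procedure.

```python
# Illustrative sketch only: coding verbatim participant-reported harm terms to
# standardised preferred terms. The mapping below is a hypothetical, hand-written
# stand-in for a real coding dictionary such as MedDRA.

VERBATIM_TO_PREFERRED = {
    "felt sick": "Nausea",
    "threw up": "Vomiting",
    "tummy ache": "Abdominal pain",
    "dizzy spells": "Dizziness",
}

def code_event(verbatim_term: str) -> str:
    """Return the coded preferred term, or flag the event for manual review."""
    return VERBATIM_TO_PREFERRED.get(verbatim_term.strip().lower(), "UNCODED - review")

reported = ["Felt sick", "threw up", "headache all day"]
for term in reported:
    print(f"{term!r} -> {code_event(term)}")
```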

Authors should report, for each group, the number of participants at risk, the number of deaths, the number of participants withdrawn because of harms, the number of participants with at least one harm event, and, if appropriate, the number of events. Where appropriate, the estimated effect size with its precision (such as 95% CIs) should be reported, including both absolute and relative effects for binary outcomes. It is important to separate the reporting of systematically and non-systematically assessed harms. Systematically assessed harms should be reported even if zero events were identified. It should also be clear whether the authors are reporting the number of participants with at least one harm event or the number of events per unit of time at risk, and whether recurrent events were included. Finally, results should be reported for all harms. We strongly discourage the use of thresholds or criteria to select which harms are reported. All harms could be detailed in supplementary materials.
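
As a concrete illustration of absolute and relative effects with their precision, the sketch below computes a risk difference and a risk ratio with Wald-type 95% confidence intervals from per-group counts. It is a generic worked example with made-up numbers, not an analysis prescribed by CONSORT or taken from the trial cited above.

```python
# Minimal worked sketch: absolute and relative effects with 95% CIs for a
# binary harm outcome, from counts per group. The counts below are invented.
from math import sqrt, log, exp

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference (group A minus group B) with a Wald 95% CI."""
    pa, pb = events_a / n_a, events_b / n_b
    rd = pa - pb
    se = sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    return rd, rd - z * se, rd + z * se

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio (group A vs group B) with a 95% CI computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return rr, exp(log(rr) - z * se_log), exp(log(rr) + z * se_log)

# Hypothetical counts: 71 of 527 participants with vomiting vs 41 of 528.
print(risk_difference(71, 527, 41, 528))
print(risk_ratio(71, 527, 41, 528))
```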

We recommend presenting the results for each trial arm in a table.9 More detailed information can be found in the CONSORT statement extension for harms, which was updated in 2022.10
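
As a rough sketch of what such a per-arm table carries, the example below assembles, for each hypothetical arm, the number of participants at risk, the number with at least one event, the number of events, and the event rate per person-year. Field names and figures are invented for illustration only.

```python
# Sketch of a per-arm harms summary of the kind recommended above.
# All arm names and counts are hypothetical.
from dataclasses import dataclass

@dataclass
class ArmHarms:
    arm: str
    n_at_risk: int
    n_with_event: int   # participants with at least one harm event
    n_events: int       # total events, recurrent events included
    person_years: float

    def rate_per_person_year(self) -> float:
        return self.n_events / self.person_years

arms = [
    ArmHarms("Control", 520, 18, 21, 480.5),
    ArmHarms("Intervention", 515, 31, 40, 472.0),
]

print(f"{'Arm':<14}{'N at risk':>10}{'>=1 event':>11}{'Events':>8}{'Events/PY':>11}")
for a in arms:
    print(f"{a.arm:<14}{a.n_at_risk:>10}{a.n_with_event:>11}{a.n_events:>8}"
          f"{a.rate_per_person_year():>11.3f}")
```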

Training

The UK EQUATOR Centre runs training on how to write using reporting guidelines.

Discuss this item

Visit this item's discussion page to ask questions and give feedback.

References

1.
Madanitsa M, Barsosio HC, Minja DTR, et al. Effect of monthly intermittent preventive treatment with dihydroartemisinin–piperaquine with and without azithromycin versus monthly sulfadoxine–pyrimethamine on adverse pregnancy outcomes in Africa: a double-blind, randomised, partly placebo-controlled trial. The Lancet. 2023;401(10381):1020-1036. doi:10.1016/s0140-6736(22)02535-1
2.
Scharf O, Colevas AD. Adverse event reporting in publications compared with sponsor database for cancer clinical trials. Journal of Clinical Oncology. 2006;24(24):3933-3938. doi:10.1200/jco.2005.05.3959
3.
Pitrou I, Boutron I, Ahmad N, Ravaud P. Reporting of safety results in published reports of randomized controlled trials. Archives of Internal Medicine. 2009;169(19):1756. doi:10.1001/archinternmed.2009.306
4.
Williams MR, McKeown A, Pressman Z, et al. Adverse event reporting in clinical trials of intravenous and invasive pain treatments: An ACTTION systematic review. The Journal of Pain. 2016;17(11):1137-1149. doi:10.1016/j.jpain.2016.07.006
5.
Mayo-Wilson E, Qureshi R, Hong H, Chen X, Li T. Harms were detected but not reported in six clinical trials of gabapentin. Journal of Clinical Epidemiology. 2023;164:76-87. doi:10.1016/j.jclinepi.2023.10.014
6.
Qureshi R, Mayo-Wilson E, Li T. Harms in systematic reviews paper 1: An introduction to research on harms. Journal of Clinical Epidemiology. 2022;143:186-196. doi:10.1016/j.jclinepi.2021.10.023
7.
Qureshi R, Mayo-Wilson E, Rittiphairoj T, McAdams-DeMarco M, Guallar E, Li T. Harms in systematic reviews paper 2: Methods used to assess harms are neglected in systematic reviews of gabapentin. Journal of Clinical Epidemiology. 2022;143:212-223. doi:10.1016/j.jclinepi.2021.10.024
8.
Qureshi R, Mayo-Wilson E, Rittiphairoj T, McAdams-DeMarco M, Guallar E, Li T. Harms in systematic reviews paper 3: Given the same data sources, systematic reviews of gabapentin have different results for harms. Journal of Clinical Epidemiology. 2022;143:224-241. doi:10.1016/j.jclinepi.2021.10.025
9.
Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. Dickersin K, ed. PLoS Medicine. 2013;10(12):e1001566. doi:10.1371/journal.pmed.1001566
10.
Junqueira DR, Zorzela L, Golder S, et al. CONSORT harms 2022 statement, explanation, and elaboration: Updated guideline for the reporting of harms in randomised trials. BMJ. Published online April 2023:e073725. doi:10.1136/bmj-2022-073725

Reuse

Most of the reporting guidelines and checklists on this website were originally published under permissive licenses that allowed their reuse. Some were published with proprietary licenses, where copyright is held by the publisher and/or original authors. The original content of the reporting checklists and explanation pages on this website was drawn from these publications with knowledge and permission from the reporting guideline authors, and subsequently revised in response to feedback and evidence from research as part of an ongoing scholarly dialogue about how best to disseminate reporting guidance. The UK EQUATOR Centre makes no copyright claims over reporting guideline content. Our use of copyrighted content on this website falls under fair use guidelines.

Citation

For attribution, please cite this work as:
Hopewell S, Chan AW, Collins GS, et al. CONSORT 2025 statement: updated guideline for reporting randomised trials. BMJ. 2025;389:e081123. doi:10.1136/bmj-2024-081123

Reporting Guidelines are recommendations to help describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, by clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. getting cancer or length of life).

Source.

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

An example would be when researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.

Source.

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") serves as an observational tool, where researchers capture data from a cohort of participants at a singular point. This approach provides a 'snapshot'— a brief glimpse into the characteristics or outcomes prevalent within a designated population at that precise point in time. The primary aim here is not to track changes or developments over an extended period but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Source

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Source

Systematic review protocols

TODO

Meta analyses of Observational Studies

TODO

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.

Source

Randomised Trial Protocols

TODO

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behaviour. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Source

Case Reports

TODO

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of the test(s) to correctly identify people with a predefined target condition, or the condition of interest (sensitivity) as well as to clearly identify those without the condition (specificity).
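
As a small numerical illustration of these two measures, the sketch below computes sensitivity and specificity from a 2x2 table of test results against a reference standard; the counts are invented.

```python
# Toy illustration of sensitivity and specificity; all counts are invented.
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of people with the target condition whom the test identifies."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of people without the condition whom the test correctly rules out."""
    return true_neg / (true_neg + false_pos)

print(sensitivity(true_pos=90, false_neg=10))   # 0.90
print(specificity(true_neg=160, false_pos=40))  # 0.80
```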

Prediction Models

Prediction model research is used to assess the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.

Source

Animal Research

TODO

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Source

Economic Evaluations in Healthcare

TODO

Meta Analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.
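
To make the pooling step concrete, the sketch below shows one common meta-analytic calculation: a fixed-effect, inverse-variance pooled estimate with a 95% confidence interval. The study estimates and standard errors are invented; a real meta-analysis would also assess heterogeneity (for example with Q and I²) and often use a random-effects model instead.

```python
# Minimal sketch of a fixed-effect, inverse-variance pooled estimate.
# Study estimates (e.g. log risk ratios) and standard errors are invented.
from math import sqrt

def fixed_effect_pool(estimates, standard_errors, z=1.96):
    """Inverse-variance weighted mean with a Wald 95% CI."""
    weights = [1 / se**2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    se_pooled = sqrt(1 / sum(weights))
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Hypothetical log risk ratios from three studies.
print(fixed_effect_pool([-0.22, -0.10, -0.35], [0.12, 0.09, 0.20]))
```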

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always have meta-analyses, due to variations in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data shows significant heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than explain them.

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.

Source
