What to write
Limits to the generalisability of the work
Factors that might have limited internal validity such as confounding, bias, or imprecision in the design, methods, measurement, or analysis
Efforts made to minimise and adjust for limitations
Explanation
The limitations section offers an opportunity to present potential weaknesses of the study, explain the choice of methods, measures and intervention, and examine why results may not be generalisable beyond the context in which the work occurred. In the first example, a study of family activated METs, Brady et al identified a number of issues that might influence internal validity and the extent to which their findings are generalisable to other hospitals. The success of METs, and the participation of family members in calling these teams, may depend on contextual attributes such as leadership involvement. Although few hospitals have implemented family activated METs, the growing interest in patient and family engagement may also contribute to a broader use of this intervention. No data were available to assess secular trends in these practices, so the possibility that the observed changes resulted from external factors cannot be excluded.
There were few family activated MET calls. This positive result may stem from family education, but the authors report that they had limited data on such education. The lack of a validated tool to capture chart review information is noted as a potential weakness since some non-clinical MET calls might not have been recorded in the chart. The authors also note that the observed levels of family activated MET calls are consistent with other literature.
The impact of improvement interventions often varies with context, but the large number of potential factors to consider requires that researchers focus on a limited set of contextual measures they believe may influence success and future adaptation and spread. In the second example, Dixon-Woods et al assessed variation in results of the implementation of the central line bundle to reduce catheter-related bloodstream infections in English ICUs.1 While English units made improvements, the results were not as impressive as in the earlier US experience. The researchers point to the prior experience of staff in the English ICUs with several infection control campaigns as contributing to this difference. Many English clinicians viewed the new programme as redundant, believing this was a problem already solved. The research team also notes that some of the English ICUs did not have an organisational culture that supported consistent implementation of the required changes.
Dixon-Woods et al relied on quantitative data on clinical outcomes as well as observation and qualitative interviews with staff. However, as they report, their study had several limitations. Their visits to the units were not longitudinal, so changes could have been made in some units after the researchers’ observations. They did not carry out systematic audits of culture and practices that might have revealed additional information, nor did they assess the impact of local factors including the size of the unit, the number of doctors and nurses, and other factors that might have affected the capability of the unit to implement new practices. Moreover, while the study included controls, there was considerable public and professional interest in these issues, which may have influenced performance and reduced the relative impact of the intervention. The authors’ report1 of the context and limitations is crucial to assist the reader in assessing their results, and in identifying factors that might influence results of similar interventions elsewhere.
Examples
Example 1
Our study had several limitations. Our study of family MET activations compared performance with our historical controls, and we were unable to adjust for secular trends or unmeasured confounders. Our improvement team included leaders of our MET committee and patient safety, and we are not aware of any ongoing improvement work or systems change that might have affected family MET calls. We performed our interventions in a large tertiary care children’s hospital with a history of improvement in patient safety and patient-centred and family-centred care.
Additionally, it is uncertain and likely very context-dependent as to what is the ‘correct’ level of family-activated METs. This may limit generalizability to other centres, although the consistently low rate of family MET calls in the literature in a variety of contexts should reduce concerns related to responding team workload. We do not have process measures of how often MET education occurred for families and of how often families understood this information or felt empowered to call. This results in a limited understanding of the next best steps to improve family calling. Our data were collected in the course of clinical care with chart abstraction from structured clinical notes. Given this, it is possible that notes were not written for family MET calls that were judged ‘nonclinical.’ From our knowledge of the MET system, we are confident such calls are quite few, but we lack the data to quantify this. Our chart review for the reasons families called did not use a validated classification tool as we do not believe one exists. This is somewhat mitigated by our double independent reviews that demonstrated the reliability of our classification scheme.2
Example 2
Our study has a number of important limitations. Our ethnographic visits to units were not longitudinal, but rather snapshots in time; changes in response to the program could have occurred after our visits. We did not conduct a systematic audit of culture and practices, and thus some inaccuracies in our assessments may be present. We did not evaluate possible modifiers of effect of factors such as size of unit, number of consultants and nurses, and other environmental features. We had access to ICUs’ reported infection rates only if they provided them directly to us; for information governance reasons, these rates could not be verified. It is possible that we have offered too pessimistic an interpretation of whether Matching Michigan ‘worked’: the quantitative evaluation may have underestimated the effects of the program (or over-estimated the secular trend), since the ‘waiting’ clusters were not true controls that were unexposed to the interventions. …1
Training
The UK EQUATOR Centre runs training on how to write using reporting guidelines.
Discuss this item
Visit this item’s discussion page to ask questions and give feedback.