7. Context

What to write

Contextual elements considered important at the outset of introducing the intervention(s)

Explanation

Context is known to affect the process and outcome of interventions to improve the quality of healthcare.1 This section of a report should describe the contextual factors that authors considered important at the outset of the improvement initiative. The goal of including information on context is twofold. First, describing the context in which the initiative took place helps readers judge whether the intervention is likely to ‘work’ in their local environment and, more broadly, the generalisability of the findings. Second, it enables researchers to examine the role of context as a moderator of successful intervention(s). Specific and relevant elements of context thought to optimise the likelihood of success should be addressed in the design of the intervention, and plans should be made a priori to measure these factors and examine how they interact with the success of the intervention.

Describing the context within the methods section orients the reader to where the initiative occurred. In single-centre studies, this description usually includes information about the location, patient population, size, staffing, practice type, teaching status, system affiliation and relevant processes in place at the start of the intervention, as is demonstrated in the first example by Dandoy et al2 reporting a QI effort to reduce monitor alarms. Similar information is also provided in aggregate for multicentre studies. In the second example by Duncan et al,3 a table is used to describe the practice characteristics of the 21 participating paediatric primary care practices, and includes information on practice type, practice setting, practice size, patient characteristics and use of an electronic health record. This information can be used by the reader to assess whether his or her own practice setting is similar enough to the practices included in this report to enable extrapolation of the results. The authors state that they selected practices to achieve diversity in these key contextual factors. This was likely done so that the team could assess the effectiveness of the interventions in a range of settings and increase the generalisability of the findings.

Any contextual factors believed a priori to impact the success of the intervention should be specifically discussed in this section. Although the authors’ rationale is not explicitly stated, the example suggests that they had specific hypotheses about key aspects of a practice’s context that would impact implementation of the interventions. They addressed these contextual factors in the design of their study in order to increase the likelihood that the intervention would be successful. For example, they stated specifically that they selected practices with previous healthcare improvement experience and strong physician leadership. In addition, the authors noted that practices were recruited through an existing research consortium, indicating their belief that project sponsorship by an established external network could impact the success of the initiative. They also noted that practices were made aware that American Board of Pediatrics Maintenance of Certification Part 4 credit had been applied for but not assured, implying that the authors believed incentives could impact project success. While addressing context in the design of the intervention may increase the likelihood of success, these choices limit the generalisability of the findings to other similar practices with prior healthcare improvement experience, strong physician leadership and available incentives.

This example could have been strengthened by using a published framework such as the Model for Understanding Success in Quality (MUSIQ),4 the Consolidated Framework for Implementation Research (CFIR),1 or the Promoting Action on Research Implementation in Health Services (PARiHS) model5 to identify the subset of relevant contextual factors that would be examined.4,6 The use of such frameworks is not a requirement but a helpful option for approaching the issue of context. The relevance of any particular framework can be determined by authors based on the focus of their work: MUSIQ was developed specifically for microsystem or organisational QI efforts, whereas CFIR and PARiHS were developed more broadly to examine implementation of evidence or other innovations.

If elements of context are hypothesised to be important, but are not going to be addressed specifically in the design of the intervention, plans to measure these contextual factors prospectively should be made during the study design phase. In these cases, measurement of contextual factors should be clearly described in the methods section, data about how contextual factors interacted with the interventions should be included in the results section, and the implications of these findings should be explored in the discussion. For example, if the authors of the examples below had chosen this approach, they would have measured participating teams’ prior healthcare improvement experience and looked for differences in successful implementation based on whether practices had prior experience or not. In cases where context was not addressed prospectively, authors are still encouraged to explore the impact of context on the results of intervention(s) in the discussion section.

Examples

Example 1

CCHMC (Cincinnati Children’s Hospital Medical Center) is a large, urban pediatric medical center and the Bone Marrow Transplant (BMT) team performs 100 to 110 transplants per year. The BMT unit contains 24 beds and 60–70% of the patients on the floor are on cardiac monitors…The clinical providers…include 14 BMT attending physicians, 15 fellows, 7 NPs (nurse practitioners), and 6 hospitalists…The BMT unit employs ∼130 bedside RNs (registered nurses) and 30 PCAs (patient care assistants). Family members take an active role…2

Example 2

Pediatric primary care practices were recruited through the AAP QuIIN (American Academy of Pediatrics Quality Improvement Innovation Network) and the Academic Pediatric Association’s Continuity Research Network. Applicants were told that Maintenance of Certification (MOC) Part 4 had been applied for, but was not assured. Applicant practices provided information on their location, size, practice type, practice setting, patient population and experience with quality improvement (QI) and identified a 3-member physician-led core improvement team. … Practices were selected to represent diversity in practice types, practice settings, and patient populations. In each selected practice the lead core team physician and in some cases the whole practice had previous QI experience…table 1 summarizes practice characteristics for the 21 project teams.3

Training

The UK EQUATOR Centre runs training on how to write using reporting guidelines.

Discuss this item

Visit this item’s discussion page to ask questions and give feedback.

References

1. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4(1). doi:10.1186/1748-5908-4-50

2. Dandoy CE, Davies SM, Flesch L, et al. A team-based approach to reducing cardiac monitor alarms. Pediatrics. 2014;134(6):e1686-e1694. doi:10.1542/peds.2014-1162

3. Duncan PM, Pirretti A, Earls MF, et al. Improving delivery of bright futures preventive services at the 9- and 24-month well child visit. Pediatrics. 2015;135(1):e178-e186. doi:10.1542/peds.2013-3119

4. Kaplan HC, Provost LP, Froehle CM, Margolis PA. The model for understanding success in quality (MUSIQ): Building a theory of context in healthcare quality improvement. BMJ Quality & Safety. 2011;21(1):13-20. doi:10.1136/bmjqs-2011-000010

5. Rycroft-Malone J, Seers K, Chandler J, et al. The role of evidence, context, and facilitation in an implementation trial: Implications for the development of the PARIHS framework. Implementation Science. 2013;8(1). doi:10.1186/1748-5908-8-28

6. Bamberger RE. Perspectives on context. The Health Foundation; 2014.

Reuse

Most of the reporting guidelines and checklists on this website were originally published under permissive licenses that allowed their reuse. Some were published with proprietary licenses, where copyright is held by the publisher and/or original authors. The original content of the reporting checklists and explanation pages on this website was drawn from these publications with knowledge and permission from the reporting guideline authors, and subsequently revised in response to feedback and evidence from research as part of an ongoing scholarly dialogue about how best to disseminate reporting guidance. The UK EQUATOR Centre makes no copyright claims over reporting guideline content. Our use of copyrighted content on this website falls under fair use guidelines.

Citation

For attribution, please cite this work as:
Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25(12):986-992. doi:10.1136/bmjqs-2015-004411

Reporting Guidelines are recommendations to help describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel Prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, by clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. getting cancer or length of life).

Source.

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

An example would be when researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.

Source.

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") serves as an observational tool, where researchers capture data from a cohort of participants at a single point in time. This approach provides a 'snapshot': a brief glimpse into the characteristics or outcomes prevalent within a designated population at that precise point in time. The primary aim here is not to track changes or developments over an extended period but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Source

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Source

Systematic review protocols

TODO

Meta-analyses of Observational Studies

TODO

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.

Source

Randomised Trial Protocols

TODO

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behaviour. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Source

Case Reports

TODO

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of the test(s) to correctly identify people with a predefined target condition, or the condition of interest (sensitivity) as well as to clearly identify those without the condition (specificity).

Prediction Models

Prediction model research is used to test the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.

Source

Animal Research

TODO

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Source

Economic Evaluations in Healthcare

TODO

Meta-analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always have meta-analyses, due to variations in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data shows significant heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than explain them.

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.

Source

Assumptions

Reasons for choosing the activities and tools used to bring about changes in healthcare services at the system level. Source

Context

Physical and sociocultural makeup of the local environment (for example, external environmental factors, organizational dynamics, collaboration, resources, leadership, and the like), and the interpretation of these factors (“sense-making”) by the healthcare delivery professionals, patients, and caregivers that can affect the effectiveness and generalizability of intervention(s). Source

Ethical aspects

The value of system-level initiatives relative to their potential for harm, burden, and cost to the stakeholders. Potential harms particularly associated with efforts to improve the quality, safety, and value of healthcare services include opportunity costs, invasion of privacy, and staff distress resulting from disclosure of poor performance.

Generalizability

The likelihood that the intervention(s) in a particular report would produce similar results in other settings, situations, or environments (also referred to as external validity). Source

Healthcare improvement

Any systematic effort intended to raise the quality, safety, and value of healthcare services, usually done at the system level. We encourage the use of this phrase rather than “quality improvement,” which often refers to more narrowly defined approaches. Source

Inferences

The meaning of findings or data, as interpreted by the stakeholders in healthcare services - improvers, healthcare delivery professionals, and/or patients and families. Source

Initiative

A broad term that can refer to organization-wide programs, narrowly focused projects, or the details of specific interventions (for example, planning, execution, and assessment). Source

Internal validity

Demonstrable, credible evidence for efficacy (meaningful impact or change) resulting from introduction of a specific intervention into a particular healthcare system. Source

Interventions

The specific activities and tools introduced into a healthcare system with the aim of changing its performance for the better. Complete description of an intervention includes its inputs, internal activities, and outputs (in the form of a logic model, for example), and the mechanism(s) by which these components are expected to produce changes in a system's performance. Source

Opportunity costs

Loss of the ability to perform other tasks or meet other responsibilities resulting from the diversion of resources needed to introduce, test, or sustain a particular improvement initiative. Source

Problem

Meaningful disruption, failure, inadequacy, distress, confusion or other dysfunction in a healthcare service delivery system that adversely affects patients, staff, or the system as a whole, or that prevents care from reaching its full potential. Source

Process

The routines and other activities through which healthcare services are delivered. Source

Rationale

Explanation of why particular intervention(s) were chosen and why it was expected to work, be sustainable, and be replicable elsewhere. Source

Systems

The interrelated structures, people, processes, and activities that together create healthcare services for and with individual patients and populations. For example, systems exist from the personal self-care system of a patient, to the individual provider-patient dyad system, to the microsystem, to the macrosystem, and all the way to the market/social/insurance system. These levels are nested within each other. Source

Theory

Any “reason-giving” account that asserts causal relationships between variables (causal theory) or that makes sense of an otherwise obscure process or situation (explanatory theory). Theories come in many forms, and serve different purposes in the phases of improvement work. It is important to be explicit and well-founded about any informal and formal theory (or theories) that are used. Source