| Item | Description | Location (or reason for not reporting) |
| --- | --- | --- |
| Title or abstract | | |
| 1. Identification as a study of diagnostic accuracy | Identification as a study of diagnostic accuracy using at least one measure of accuracy (such as sensitivity, specificity, predictive values, or AUC). | |
| Abstract | | |
| 2. Abstract | Structured summary of study design, methods, results and conclusions (for specific guidance, see STARD for Abstracts). | |
| Introduction | | |
| 3. Background | Scientific and clinical background, including the intended use and clinical role of the index test. | |
| 4. Objectives | Study objectives and hypotheses. | |
| Methods | | |
| 5. Study design | Whether data collection was planned before the index test and reference standard were performed (prospective study) or after (retrospective study). | |
| Participants | | |
| 6. Eligibility criteria | Eligibility criteria. | |
| 7. Identifying eligible participants | On what basis potentially eligible participants were identified (such as symptoms, results from previous tests, inclusion in registry). | |
| 8. Setting, location, and dates | Where and when potentially eligible participants were identified (setting, location and dates). | |
| 9. Consecutive, random or convenience series | Whether participants formed a consecutive, random or convenience series. | |
| Test Methods | | |
| 10. Index test & Reference standard | 10a. Index test, in sufficient detail to allow replication. 10b. Reference standard, in sufficient detail to allow replication. | |
| 11. Reference standard rationale | Rationale for choosing the reference standard (if alternatives exist). | |
| 12. Index test and reference standard cut-offs or categories | 12a. Definition of and rationale for test positivity cut-offs or result categories of the index test, distinguishing prespecified from exploratory. 12b. Definition of and rationale for test positivity cut-offs or result categories of the reference standard, distinguishing prespecified from exploratory. | |
| 13. Information available to performers or readers of the index test and to assessors of the reference standard | 13a. Whether clinical information and reference standard results were available to the performers or readers of the index test. 13b. Whether clinical information and index test results were available to the assessors of the reference standard. | |
| Analysis | | |
| 14. Analysis methods | Methods for estimating or comparing measures of diagnostic accuracy. | |
| 15. Indeterminate results | How indeterminate index test or reference standard results were handled. | |
| 16. Missing data | How missing data on the index test and reference standard were handled. | |
| 17. Variability | Any analyses of variability in diagnostic accuracy, distinguishing prespecified from exploratory. | |
| 18. Intended sample size | Intended sample size and how it was determined. | |
| Results | | |
| Participants | | |
| 19. Participant flow diagram | Flow of participants, using a diagram. | |
| 20. Baseline characteristics | Baseline demographic and clinical characteristics of participants. | |
| 21. Participants with and without the target condition | 21a. Distribution of severity of disease in those with the target condition. 21b. Distribution of alternative diagnoses in those without the target condition. | |
| 22. Time interval | Time interval and any clinical interventions between index test and reference standard. | |
| Test Results | | |
| 23. Index test and reference standard results | Cross tabulation of the index test results (or their distribution) by the results of the reference standard. | |
| 24. Estimates of accuracy | Estimates of diagnostic accuracy and their precision (such as 95% CIs). An illustrative calculation follows the checklist. | |
| 25. Adverse events | Any adverse events from performing the index test or the reference standard. | |
| Discussion | | |
| 26. Limitations | Study limitations, including sources of potential bias, statistical uncertainty and generalisability. | |
| 27. Implications for practice | Implications for practice, including the intended use and clinical role of the index test. | |
| Other information | | |
| 28. Registration | Registration number and name of registry. | |
| 29. Protocol | Where the full study protocol can be accessed. | |
| 30. Funding | Sources of funding and other support; role of funders. | |
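For items 1, 23 and 24, the accuracy measures named above are derived from the cross tabulation of index test results against the reference standard. STARD does not prescribe which measures or confidence-interval methods to report; the sketch below is only a generic illustration, using the conventional cell labels TP, FP, FN and TN, which are not defined in the checklist itself.

```latex
% Illustrative only (requires amsmath): common accuracy measures derived
% from the 2x2 cross tabulation of item 23, with conventional cell labels
% TP (true positive), FP (false positive), FN (false negative), TN (true negative).
\begin{align*}
  \text{Sensitivity} &= \frac{TP}{TP + FN}, &
  \text{Specificity} &= \frac{TN}{TN + FP}, \\
  \text{PPV}         &= \frac{TP}{TP + FP}, &
  \text{NPV}         &= \frac{TN}{TN + FN}.
\end{align*}
% A simple Wald 95% confidence interval for an estimated proportion
% \hat{p} based on n observations (item 24); Wilson or exact intervals
% are often preferred, particularly for small samples:
\[
  \hat{p} \pm 1.96 \sqrt{\frac{\hat{p}\,(1 - \hat{p})}{n}}
\]
```

Report whichever measures and interval methods your analysis plan specifies (item 14), not necessarily those shown here.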
If you have not used a reporting guideline before, read about how and why to use them and check whether STARD is the most applicable reporting guideline for your work.
Reporting guidelines are most useful when used early in research. When writing a manuscript or application, consider using the Full Guidance where you’ll see explanations and examples for each item.
After writing, demonstrate adherence by completing this checklist.
1 How to specify where content is
Tell the reader where they can find the information, e.g.:
- Results; paragraph 2
- Methods, Participants; paragraphs 1 & 2
- Table 3
- Supplement B, para. 4
If you have chosen not to describe an item, explain why. You can do this in the checklist, or as a note below it.
You can describe items in the article body or in tables, figures, or supplementary materials, and should prioritize the items you feel are most important to your intended audience. The order of items in your manuscript does not need to match the order of items in this checklist. You can decide how best to structure your work.
2 How to cite
Describe how you used STARD at the end of your Methods section, referencing the resources you used, e.g.:
‘We used the STARD reporting guideline(1) to draft this manuscript, and the STARD reporting checklist(2) when editing, included in supplement A’
If you use a reporting checklist, remember to include it as a supplement when publishing so that readers can easily find information and see how you have interpreted the guidance.