How journal editors use reporting guidelines

Editors share their experiences of setting up policies and procedures to improve the transparency and accuracy of research reporting in their journals.

Professor Hywel Williams, Clinical Trials Editor of the Journal of Investigative Dermatology

The Journal of Investigative Dermatology (JID) is the leading scientific dermatology journal, with an impact factor of 5.25 in 2008. The JID publishes original research on all aspects of cutaneous biology and skin disease. Although the majority of the journal's content deals with basic science, its scope includes clinical research, clinical trials and epidemiology. I have been fortunate to work for the JID for the last six years as section editor, with particular responsibility for clinical trials. In 2006, the JID announced clearly that it would welcome high quality clinical trial submissions, provided they were registered prospectively and adhered to the CONSORT checklist.

Stating such intent is all very well, but how does a journal like the JID go about implementing CONSORT and compulsory trial registration, and how does it check compliance with CONSORT?

This is how we did it. I first met with the Managing Editor, Elizabeth Blalock. We devised a system whereby submissions referring to clinical trials are reviewed by the JID office for trial registration details and adherence to CONSORT (i.e., submission of the CONSORT checklist, flowchart, and appropriate manuscript headings). When the editorial office is in doubt about whether a submission is a therapeutic trial (which is not always easy to judge for studies that mainly look at disease mechanisms), the staff send me the article to check. If it is deemed to be a therapeutic trial, the editorial office contacts the authors for proof of trial registration and adherence to CONSORT before admitting the manuscript for peer review. Only those submissions in compliance with the full CONSORT (and journal) requirements are sent for content and methodological peer review. Otherwise, I send a note to the Editor-in-Chief recommending immediate rejection.

The project has been quite successful, with few additional resource implications. The editorial staff (two members) have had to undertake some additional work in screening submissions and querying those they are unsure of, but they have enjoyed the work and have acquired new skills in assessing good clinical trial reporting. Most of the work, i.e. explaining where in the document the key items have been reported, is pushed back to the authors. The authors are highly motivated to do this, as they know that the manuscript will not be processed further until they have met these requirements (which are clearly stated in the journal's online author instructions). Whilst it is true that the JID publishes few trials, those that are published are of high quality.

On a personal level, the project has been good fun, and has added little work to my role as section editor with responsibility for clinical trials. In fact, it has made my job of checking manuscripts much easier, as I can quickly see where in the manuscript the key CONSORT items are meant to be described. It is clear from some of the submissions that some authors have not heard about prospective trial registration. When queried, some authors try to register their trial retrospectively, after the analysis has been carried out, which defeats the purpose of registration. Such papers are returned by the Editor to the authors without further review. I hope this model of working with journal editorial staff teams can help other journals improve the reporting of trials in an efficient way. The key ingredients are (i) a committed journal editor, (ii) editorial staff willing to take on new roles, and (iii) a section editor with an interest in clinical trials who takes responsibility for being the arbiter for checks and borderline decisions.

Jason Roberts, PhD, Managing Editor of Headache

Edited by John Rothrock, MD, and publishing 10 times a year, Headache is the official publication of the American Headache Society. It receives several hundred manuscripts annually, with roughly 35% of submissions arriving from each of North America and Europe. Roughly 60% of submissions report clinical trials or multi-patient case series, or constitute systematic overviews. The majority of content is clinical in nature, but a small proportion of papers are oriented towards the basic sciences. The journal has an Impact Factor that places it just inside the top third of titles ranked by ISI within Clinical Neurology. In short, it is a typical mid-sized journal.

During 2008, the Headache editorial office undertook a critique of its publishing output over the previous decade. We found several cases of otherwise good papers that had failed to report research methodologies satisfactorily or had omitted crucial information, hindering study replication. While this review of previously published material was ongoing, we also began to experience a surge in the volume of submissions. Nearly one-third of 2008 submissions were from authors new to Headache. Unfortunately, it seemed a sizeable proportion of these new papers contained weak methodological reporting (often to the detriment of otherwise interesting research).

In an effort to elevate the quality of material Headache published, we decided to overhaul our submission and peer review processes. An aggressive stance was taken: we would ask authors to work harder to improve the quality of their reporting. We also renewed a commitment to our authors to burnish potentially interesting papers. To achieve these twin objectives, we decided to mandate the inclusion of a reporting checklist for all submissions. We hoped the guidelines would compel authors to include pertinent methodological details, which in turn would deliver a more uniform standard of reporting across manuscripts. Completed checklists would then document where critical reporting elements were recorded in a manuscript, assisting manuscript evaluation. The provision of reporting guidelines would serve the dual purpose of making clear to authors the minimum threshold for publication and aiding our reviewers and editorial board in enforcing these standards as part of the peer review process. We took to heart Doug Altman's entreaty that, though responsibility for good reporting rests with authors, journals have a role to play.

Launching an Effort to Collect Reporting Checklists

To institute this new policy, the editorial office was charged with three tasks: establish which checklists could be employed (step 1); devise a method of collection (step 2); and educate our author base about the benefits of utilizing checklists (step 3).

Step 1

Some reporting guidelines were well known to us (CONSORT, STARD), but we needed to explore the full range of options. This led us to EQUATOR. EQUATOR's site was especially useful during this formative stage because it contained a tremendous depth of material, from the checklists themselves through to editorials from journals that had already implemented reporting procedures. In reviewing established guidelines, we determined that 8 were of importance to us (see table 1).

Step 2

After selecting the reporting guidelines to be used, the next challenge was to establish a method of collection that made guideline adherence and checklist completion mandatory. Headache uses the Manuscript Central online submission system, but the challenges of collecting forms through a submission site are similar for other products, such as Editorial Manager, Benchpress and EES. A new step was inserted into the online submission process. Authors would identify their study type (for example, Randomized Controlled Pharmacotherapy trials, Diagnostic Accuracy Studies, Meta-Analyses of Observational Studies). The submission system would then respond by providing the appropriate checklist for the author to download and complete (all forms are MS Word documents). The author then has to upload the completed form as part of their submission.

The system was re-engineered so that submission was not possible until a checklist file was uploaded. As the Manuscript Central system did not have an inbuilt workflow template to handle our demands at that time, we had to work with the system programmers to have the system configured appropriately.
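
To make this concrete, here is a minimal Python sketch of the kind of gating logic described above: a mapping from declared study type to the required checklist, and a check that blocks submission until that checklist has been uploaded. The study types, file names, and function names are invented for illustration; Manuscript Central's actual configuration is proprietary and was set up by its programmers.

```python
# Hypothetical sketch of the checklist-enforcement workflow described above.
# All names and file names are invented; this is not Manuscript Central code.

CHECKLISTS = {
    "Randomized Controlled Pharmacotherapy Trial": "CONSORT_checklist.doc",
    "Diagnostic Accuracy Study": "STARD_checklist.doc",
    "Meta-Analysis of Observational Studies": "MOOSE_checklist.doc",
}

def required_checklist(study_type: str) -> str:
    """Return the checklist form the author must download and complete."""
    if study_type not in CHECKLISTS:
        raise ValueError(f"No checklist configured for study type: {study_type}")
    return CHECKLISTS[study_type]

def can_submit(study_type: str, uploaded_files: list[str]) -> bool:
    """Submission is blocked until the completed checklist is among the uploads."""
    return required_checklist(study_type) in uploaded_files

# Example: submission proceeds only once the checklist is uploaded.
print(can_submit("Diagnostic Accuracy Study", ["manuscript.doc"]))  # False
print(can_submit("Diagnostic Accuracy Study",
                 ["manuscript.doc", "STARD_checklist.doc"]))        # True
```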

Step 3

Associated with the launch of the checklist mandate was a program to educate our authors, reviewers and readers. Upon the launch of the new reporting policies, two editorial board members published an editorial outlining our official position:

Good reports should contain a clear explanation of the study methods, describe statistical techniques in enough detail to allow verification of the results from original data, report all results, and interpret and present findings in a balanced and forthright way.

Additionally, the journal editorial office has developed a workshop, run at American Headache Society meetings, that incorporates instruction on the benefits of utilizing reporting guidelines within a general conversation on how to write and submit a manuscript successfully.

Our education efforts aim to ensure authors understand that completing a reporting checklist is not the important task in itself; what matters is using the guidelines constructively to shape the writing of an article. There is a sense that some authors cannot make that distinction and see the checklists as an administrative barrier to submission.

Benefits of Instituting Reporting Guidelines

As only one year has passed since we mandated the reporting checklist requirement, we do not yet have sufficient data to report on the success of the policy beyond anecdotal evidence. Our intentions in launching the policy were to improve the quality of research reporting amongst the submissions we received, to aid reviewers in their evaluation of a manuscript, and to assist the decision-making process. First and foremost, we wanted to ensure that every critical element involved in data collection, where appropriate, was documented in the manuscript and recorded on a checklist. We were realistic and understood that reporting guidelines per se may not improve the overall quality of a paper, but we contended that requiring authors to record critical information about data collection would enable us to better judge the scientific merit of an article. It is still too early to assess whether authors recognize the benefits of reporting guidelines. We have been conscious to pitch the reporting policy as an aid to improving an author's submission, not as an administrative task.

The consensus amongst editorial board members has been that the checklists are facilitating the peer review process – indeed, there is early observational evidence that the checklists themselves are shaping some of the reviews returned, as some reviewers structure their comments around issues raised in the reporting guidelines. Again anecdotally, individual editorial board members reported several cases where they felt that, following a round of revision, the reporting guidelines had improved papers by highlighting omissions of important information.

Reporting Guidelines are recommendations to help describe your work clearly

Your research will be used by people from different disciplines and backgrounds for decades to come. Reporting guidelines list the information you should describe so that everyone can understand, replicate, and synthesise your work.

Reporting guidelines do not prescribe how research should be designed or conducted. Rather, they help authors transparently describe what they did, why they did it, and what they found.

Reporting guidelines make writing research easier, and transparent research leads to better patient outcomes.

Easier writing

Following guidance makes writing easier and quicker.

Smoother publishing

Many journals require completed reporting checklists at submission.

Maximum impact

From Nobel prizes to null results, articles have more impact when everyone can use them.

Who reads research?

Your work will be read by different people, for different reasons, around the world, and for decades to come. Reporting guidelines help you consider all of your potential audiences. For example, your research may be read by researchers from different fields, by clinicians, patients, evidence synthesisers, peer reviewers, or editors. Your readers will need information to understand, replicate, apply, appraise, synthesise, and use your work.

Cohort studies

A cohort study is an observational study in which a group of people with a particular exposure (e.g. a putative risk factor or protective factor) and a group of people without this exposure are followed over time. The outcomes of the people in the exposed group are compared to the outcomes of the people in the unexposed group to see if the exposure is associated with particular outcomes (e.g. developing cancer, or length of life).
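
To make the comparison concrete, here is a minimal Python sketch computing a risk ratio from a cohort's exposed and unexposed groups. The counts are invented for illustration and do not come from any real study.

```python
# Illustrative risk-ratio calculation for a cohort study (made-up counts).
exposed_cases, exposed_total = 30, 1000      # outcomes among the exposed group
unexposed_cases, unexposed_total = 10, 1000  # outcomes among the unexposed group

risk_exposed = exposed_cases / exposed_total        # 0.03
risk_unexposed = unexposed_cases / unexposed_total  # 0.01
risk_ratio = risk_exposed / risk_unexposed

print(f"Risk ratio: {risk_ratio:.1f}")  # 3.0: exposure associated with 3x the risk
```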

Case-control studies

A case-control study is a research method used in healthcare to investigate potential risk factors for a specific disease. It involves comparing individuals who have been diagnosed with the disease (cases) to those who have not (controls). By analysing the differences between the two groups, researchers can identify factors that may contribute to the development of the disease.

For example, researchers conducted a case-control study examining whether exposure to diesel exhaust particles increases the risk of respiratory disease in underground miners. Cases included miners diagnosed with respiratory disease, while controls were miners without respiratory disease. Participants' past occupational exposures to diesel exhaust particles were evaluated to compare exposure rates between cases and controls.
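
The usual summary statistic for this design is the odds ratio, which compares the odds of exposure among cases with the odds among controls. Here is a minimal Python sketch with invented counts, loosely following the miners example above.

```python
# Illustrative odds-ratio calculation for a case-control study (made-up counts).
exposed_cases, unexposed_cases = 60, 40        # miners WITH respiratory disease
exposed_controls, unexposed_controls = 30, 70  # miners WITHOUT respiratory disease

# Odds of exposure among cases, divided by odds of exposure among controls.
odds_ratio = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

print(f"Odds ratio: {odds_ratio:.2f}")  # 3.50: exposure more common among cases
```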

Cross-sectional studies

A cross-sectional study (also sometimes called a "cross-sectional survey") is an observational design in which researchers capture data from a group of participants at a single point in time. This approach provides a 'snapshot': a brief glimpse into the characteristics or outcomes prevalent within a designated population at that precise moment. The primary aim is not to track changes or developments over an extended period, but to assess and quantify the current situation regarding specific variables or conditions. Such a methodology is instrumental in identifying patterns or correlations among various factors within the population, providing a basis for further, more detailed investigation.

Systematic reviews

A systematic review is a comprehensive approach designed to identify, evaluate, and synthesise all available evidence relevant to a specific research question. In essence, it collects all possible studies related to a given topic and design, and reviews and analyses their results.

The process involves a highly sensitive search strategy to ensure that as much pertinent information as possible is gathered. Once collected, this evidence is often critically appraised to assess its quality and relevance, ensuring that conclusions drawn are based on robust data. Systematic reviews often involve defining inclusion and exclusion criteria, which help to focus the analysis on the most relevant studies, ultimately synthesising the findings into a coherent narrative or statistical synthesis. Some systematic reviews will include a meta-analysis.

Systematic review protocols

A systematic review protocol is written before the review begins and sets out the review question, the eligibility criteria for studies, the search strategy, and the planned methods for data extraction, quality assessment, and synthesis. Specifying these decisions in advance, and registering the protocol (for example, on PROSPERO), reduces the risk of bias from post hoc changes and helps avoid duplication of effort.

Meta-analyses of Observational Studies

A meta-analysis of observational studies statistically combines the results of non-randomised studies, such as cohort, case-control, and cross-sectional studies. Because observational designs are more vulnerable to confounding and selection bias than randomised trials, such analyses require particular care in assessing study quality and sources of heterogeneity.

Randomised Trials

A randomised controlled trial (RCT) is a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention.
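
As a toy illustration of the core mechanism, here is a minimal Python sketch of simple randomisation into two arms. Real trials use concealed allocation, often with stratification or blocking, so this is illustrative only.

```python
# Minimal sketch of simple randomisation for a two-arm trial (illustrative only).
import random

def randomise(participant_ids: list[str], seed: int = 42) -> dict[str, str]:
    """Randomly assign each participant to the intervention or control group."""
    rng = random.Random(seed)  # fixed seed makes the allocation list reproducible
    return {pid: rng.choice(["intervention", "control"]) for pid in participant_ids}

print(randomise(["P001", "P002", "P003", "P004"]))
```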

Randomised Trial Protocols

A trial protocol is the document, written before a randomised trial begins, that sets out the trial's objectives, design, methods, statistical considerations, and organisation. It provides the reference point against which the conduct, analysis, and reporting of the trial can later be judged.

Qualitative research

Research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behaviour. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis.

Case Reports

A case report is a detailed description of the diagnosis, treatment, and follow-up of an individual patient. Case reports often describe rare conditions, unexpected associations, or adverse events, and can generate hypotheses for more formal study designs.

Diagnostic Test Accuracy Studies

Diagnostic accuracy studies focus on estimating the ability of a test (or tests) to correctly identify people with a predefined target condition, or condition of interest (sensitivity), as well as to correctly identify those without the condition (specificity).
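
To make the two measures concrete, here is a minimal Python sketch computing sensitivity and specificity from an invented 2x2 table.

```python
# Sensitivity and specificity from a 2x2 table (made-up counts).
true_pos, false_neg = 90, 10   # test results among people WITH the condition
false_pos, true_neg = 20, 80   # test results among people WITHOUT the condition

sensitivity = true_pos / (true_pos + false_neg)  # 0.90: finds 90% of cases
specificity = true_neg / (true_neg + false_pos)  # 0.80: clears 80% of non-cases

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```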

Prediction Models

Prediction model research is used to test the accuracy of a model or test in estimating an outcome value or risk. Most models estimate the probability of the presence of a particular health condition (diagnostic) or whether a particular outcome will occur in the future (prognostic). Prediction models are used to support clinical decision making, such as whether to refer patients for further testing, monitor disease deterioration or treatment effects, or initiate treatment or lifestyle changes. Examples of well known prediction models include EuroSCORE II for cardiac surgery, the Gail model for breast cancer, the Framingham risk score for cardiovascular disease, IMPACT for traumatic brain injury, and FRAX for osteoporotic and hip fractures.
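
As a toy illustration of how a prognostic model turns predictor values into a risk estimate, here is a logistic-regression-style sketch. The coefficients below are invented; published models such as the Framingham risk score use validated coefficients derived from large cohorts.

```python
# Toy prognostic model: a logistic regression mapping predictors to a probability.
# Coefficients are invented for illustration, not taken from any real model.
import math

def predicted_risk(age: float, systolic_bp: float, smoker: bool) -> float:
    """Return the modelled probability that the outcome will occur."""
    log_odds = -7.0 + 0.06 * age + 0.02 * systolic_bp + 0.7 * smoker
    return 1.0 / (1.0 + math.exp(-log_odds))  # logistic link: log-odds -> probability

print(f"Estimated risk: {predicted_risk(age=60, systolic_bp=140, smoker=True):.1%}")
```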

Animal Research

Animal research uses laboratory animals to investigate disease mechanisms or to assess the safety and efficacy of potential treatments, often before studies in humans. Transparent reporting of the species and strain, housing and husbandry, experimental procedures, and analysis methods is essential for replication and for judging relevance to human health.

Quality Improvement in Healthcare

Quality improvement research is about finding out how to improve and make changes in the most effective way. It is about systematically and rigorously exploring "what works" to improve quality in healthcare, and the best ways to measure and disseminate this to ensure positive change. Most quality improvement effectiveness research is conducted in hospital settings, is focused on multiple quality improvement interventions, and uses process measures as outcomes. There is a great deal of variation in the research designs used to examine quality improvement effectiveness.

Economic Evaluations in Healthcare

An economic evaluation compares the costs and consequences of two or more healthcare interventions to inform decisions about how to allocate limited resources. Common forms include cost-effectiveness, cost-utility, and cost-benefit analyses.

Meta-analyses

A meta-analysis is a statistical technique that amalgamates data from multiple studies to yield a single estimate of the effect size. This approach enhances precision and offers a more comprehensive understanding by integrating quantitative findings. Central to a meta-analysis is the evaluation of heterogeneity, which examines variations in study outcomes to ensure that differences in populations, interventions, or methodologies do not skew results. Techniques such as meta-regression or subgroup analysis are frequently employed to explore how various factors might influence the outcomes. This method is particularly effective when aiming to quantify the effect size, odds ratio, or risk ratio, providing a clearer numerical estimate that can significantly inform clinical or policy decisions.
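
A minimal sketch of the most common pooling method, fixed-effect inverse-variance weighting, may help: each study is weighted by the inverse of its variance, and the pooled estimate is the weighted mean of the study effects. The effect sizes and variances below are invented.

```python
# Fixed-effect inverse-variance pooling (invented effect sizes and variances).
studies = [  # (effect size, variance) for three hypothetical studies
    (0.30, 0.04),
    (0.45, 0.09),
    (0.25, 0.02),
]

weights = [1.0 / var for _, var in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```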

How Meta-analyses and Systematic Reviews Work Together

Systematic reviews and meta-analyses function together, each complementing the other to provide a more robust understanding of research evidence. A systematic review meticulously gathers and evaluates all pertinent studies, establishing a solid foundation of qualitative and quantitative data. Within this framework, if the collected data exhibit sufficient homogeneity, a meta-analysis can be performed. This statistical synthesis allows for the integration of quantitative results from individual studies, producing a unified estimate of effect size. Techniques such as meta-regression or subgroup analysis may further refine these findings, elucidating how different variables impact the overall outcome. By combining these methodologies, researchers can achieve both a comprehensive narrative synthesis and a precise quantitative measure, enhancing the reliability and applicability of their conclusions. This integrated approach ensures that the findings are not only well-rounded but also statistically robust, providing greater confidence in the evidence base.

Why Don't All Systematic Reviews Use a Meta-Analysis?

Systematic reviews do not always include meta-analyses, due to variations in the data. For a meta-analysis to be viable, the data from different studies must be sufficiently similar, or homogeneous, in terms of design, population, and interventions. When the data show significant heterogeneity, meaning there are considerable differences among the studies, combining them could lead to skewed or misleading conclusions. Furthermore, the quality of the included studies is critical; if the studies are of low methodological quality, merging their results could obscure true effects rather than reveal them.
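
Heterogeneity is commonly quantified with Cochran's Q and the I² statistic, where I² estimates the percentage of variation across studies that goes beyond chance. Here is a minimal Python sketch reusing the invented studies from the pooling example above.

```python
# Cochran's Q and I² for the hypothetical studies from the pooling sketch above.
studies = [(0.30, 0.04), (0.45, 0.09), (0.25, 0.02)]  # (effect, variance)

weights = [1.0 / var for _, var in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)

# Q: weighted squared deviations of each study's effect from the pooled effect.
q = sum(w * (e - pooled) ** 2 for (e, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100  # % of variation beyond chance

print(f"Q = {q:.2f}, I² = {i_squared:.0f}%")
```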

Protocol

A plan or set of steps that defines how something will be done. Before carrying out a research study, for example, the research protocol sets out what question is to be answered and how information will be collected and analysed.
