Analysis of opinions from studies citing use of Core Outcome Measures in Effectiveness Trials outcomes taxonomy supports its use in health research: descriptive study
Theo Christodoulou, Paula R. Williamson, Susanna Dodd
Journal of Clinical Epidemiology, volume 187, Article 111917 (published August 5, 2025). DOI: 10.1016/j.jclinepi.2025.111917

Objectives: To support the use of the Core Outcome Measures in Effectiveness Trials taxonomy for health outcomes, its suitability and applicability must be assessed more broadly than in its early piloting phases. Demonstrating its suitability in practice would further support its use in developing core outcome sets (COS), conducting systematic reviews, and searching online resources, thereby aiding knowledge dissemination.

Study Design and Setting: A citation analysis identified published studies in which the taxonomy had been applied. These publications were analyzed to determine the publication type, clinical area, reason for taxonomy use or adaptation, and any comments made by researchers who had applied the taxonomy.

Results: Of 315 papers identified, 200 were sampled, and 193 publications relating to 184 projects were analyzed. Nearly one-third of publications (58, 30%) related to the development of COS, and half (98, 51%) related to reviews (systematic, scoping, and literature). In two-thirds of projects (123, 67%), the taxonomy was applied to classify health outcomes, and the vast majority of these (117, 95%) did so without making any changes.

Conclusion: This research confirms that the taxonomy is sufficiently comprehensive and granular to classify all patient outcomes in health research. Its application can highlight a lack of attention to the outcomes most important to patients. We encourage adoption of this classification system to facilitate evidence searching.

Plain Language Summary: Patient health outcomes measure things that happen to patients in relation to their health. These outcomes include clinical measures (such as blood pressure), life impact measures (such as effects on physical functioning), use of resources (such as the number of hospital appointments), survival (such as how long someone survives after surgery), and harms (such as adverse events following treatment). In 2018, we developed a classification system (called a taxonomy) to help researchers organize the types of outcomes that they are collecting (eg, when carrying out a clinical trial) or reporting (eg, when combining results from many studies in a systematic review). The taxonomy was designed to help researchers present their results more clearly and to let them search for outcomes online in a more organized manner. We used outcomes taken from many trials and reviews when developing the taxonomy, to make sure that all types of outcomes were covered, but we wanted to be sure that researchers find the taxonomy helpful in practice. This study was carried out to understand the opinions of researchers who mentioned the taxonomy in their publications, and to find out whether they thought any types of outcomes were missing.
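The summary above names the taxonomy's broad outcome categories with one example of each. As a purely illustrative sketch of how such a classification might be represented in software, the lookup table below mirrors only the categories and examples named in the summary; the dictionary layout and the classify() helper are our assumptions, not part of the published taxonomy.

```python
# Illustrative only: category names and examples are taken from the plain
# language summary above; this structure is NOT the published taxonomy.
OUTCOME_CATEGORIES: dict[str, str] = {
    "blood pressure": "clinical measures",
    "effects on physical functioning": "life impact measures",
    "number of hospital appointments": "use of resources",
    "survival after surgery": "survival",
    "adverse events following treatment": "harms",
}

def classify(outcome: str) -> str:
    """Return the broad category for an example outcome (case-insensitive)."""
    normalized = outcome.strip().lower()
    for example, category in OUTCOME_CATEGORIES.items():
        if normalized in example or example in normalized:
            return category
    return "unclassified"

if __name__ == "__main__":
    print(classify("Blood pressure"))   # -> clinical measures
    print(classify("quality of life"))  # -> unclassified (not in this toy table)
```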
{"title":"Adapting the QuinteT recruitment intervention (QRI) to optimize the recruitment of ethnic minority groups in clinical trials: insights from workshops with diverse public contributors","authors":"Sangeetha Paramasivan , Jhulia Dos Santos , Samira Musse , Zahra Kosar , Shoba Dawson","doi":"10.1016/j.jclinepi.2025.111922","DOIUrl":"10.1016/j.jclinepi.2025.111922","url":null,"abstract":"<div><h3>Background</h3><div>The global majority, often called ethnic minority (EM) groups in the United Kingdom (UK), are underserved in clinical trials despite a greater disease burden. This means that the trial results are often not applicable to the global majority, perpetuating inequities. Despite extensive evidence on barriers to inclusive research, there is little evidence on strategies to achieve successful EM participation. The QuinteT Recruitment Intervention (QRI) has been successfully employed in over 80 trials to optimize recruitment and informed consent in the general population. We aimed to adapt the QRI to optimize EM recruitment in trials through public contributor workshops in the UK.</div></div><div><h3>Methods</h3><div>We conducted five workshops with 43 public contributors from diverse ethnic backgrounds. We explored concerns of interest to contributors and sought their views on adapting three QRI components (audio-recordings of trial discussions and patient interviews and feedback provided to health-care professionals, HCPs) and QRI information sheets and consent forms.</div></div><div><h3>Results</h3><div>Contributors were most interested in discussing barriers to EM research participation (mistrust, inadequate compensation, lack of workforce diversity in research, and inadequate community outreach). Key suggestions for QRI adaptation included: a) offering a copy of the audio-recorded trial consultation, providing patient interview questions in advance and avoiding small print in patient-facing documentation (to foster trust); b) involving EM groups with lived experience of health conditions in training HCPs (to avoid perpetuating harmful stereotypes; ensure training is “with” EM and not “about” EM); c) providing QRI team's expectations of participants in advance (clarity on emotional/mental labor involved); d) discussing participants' expectations of the research team (QRI interviews are not for medical information provision); and e) providing ample reassurance around confidentiality (to avoid identity disclosure to their communities, HCPs, or the government).</div></div><div><h3>Conclusion</h3><div>It is important to initiate community engagement by focusing on key concerns in the community, though this has been previously well studied (eg, barriers to EM research participation). Providing the space for this prior to discussing our research topic of interest fostered trust. This led to contributors' insightful suggestions to ensure QRI adaptation and acceptability to EM groups, with the aim of ensuring their representation in clinical trials.</div></div><div><h3>Plain Language Summary</h3><div>People from ethnic minority (EM) groups are more affected by health conditions than the general population. Yet, they are missing from trials, including those on health conditions affecting them the most (eg, diabetes). 
Researchers have a good understanding of issues that may prevent EM trial participation (barriers), but there is l","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"186 ","pages":"Article 111922"},"PeriodicalIF":5.2,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144796062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining multiple imputation with internal model validation in clinical prediction modeling: a systematic methodological review","authors":"Sinclair Awounvo, Meinhard Kieser, Manuel Feißt","doi":"10.1016/j.jclinepi.2025.111916","DOIUrl":"10.1016/j.jclinepi.2025.111916","url":null,"abstract":"<div><h3>Objectives</h3><div>We aim to investigate how multiple imputation (MI) is combined with internal model validation (IMV) in clinical prediction modeling (CPM) studies, with particular emphasis on the challenges of balancing predictive performance, methodological complexity, and resource demands.</div></div><div><h3>Study Design and Setting</h3><div>We searched PubMed, Web of Science, and MathSciNet for “multiple imputation” and “validation” and reviewed all CPM articles published until December 2023. Studies were categorized based on whether MI was performed before IMV (MI-prior-IMV) or during IMV (MI-during-IMV). Moreover, strategy choice was described in terms of key study parameters.</div></div><div><h3>Results</h3><div>Of 683 publications screened, 108 were included in the final analysis. MI-prior-IMV was applied in 85% of them. MI-during-IMV studies had larger sample sizes (2005 vs 1212) and higher missing rates (30% vs 28.5%) than MI-prior-IMV studies. The MI methods used were multiple imputation by chained equations in 77 (92%) and Markov-Chain-Monte-Carlo (MCMC) in 7 (8%) of 84 studies. MI-during-IMV studies exclusively used MICE for MI. MI-during-IMV studies performed fewer imputations compared to MI-prior-IMV studies (10 vs 15). Moreover, MI-during-IMV studies mostly opted for sample-split (SS; 50%) followed by cross-validation (CV; 31%) and bootstrap (BS; 19%) as IMV methods. In contrast, MI-during-IMV studies mostly applied BS (63%) followed by CV (24%) and SS (13%).</div></div><div><h3>Conclusion</h3><div>MI-prior-IMV is predominantly applied over MI-during-IMV, probably due to its relative simplicity regarding comprehension and implementation. MI-during-IMV studies involve larger sample sizes and higher missing rates, potentially explaining their conservativeness regarding runtime and complexity. Future studies should systematically evaluate the complexity-benefits trade-offs of different strategies, offering clearer guidance on optimal strategies for various settings.</div></div><div><h3>Plain Language Summary</h3><div>Missing data are a common challenge in clinical research and can affect the development and validation of prediction models used for patient care. MI is a statistical technique widely used to address missing data by creating multiple complete datasets. IMV methods, such as SS, CV, and bootstrapping, are also essential to ensure the reliability of prediction models when external data are not available. However, there is limited guidance on how to combine these two techniques effectively. This review systematically examined published studies to understand how researchers combine MI and IMV in clinical prediction modeling. We analyzed 108 studies and found that most researchers performed MI before IMV (85% of studies). This approach, referred to as MI-prior-IMV, is simpler and easier to implement. 
In contrast, a smaller number of studies performed MI-during-IMV, which may provide more accurate results but is mor","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"186 ","pages":"Article 111916"},"PeriodicalIF":5.2,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144776832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
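To make the distinction concrete, here is a minimal runnable sketch of the two strategies, using scikit-learn's IterativeImputer (with posterior sampling) as a MICE-style stand-in. The toy dataset, logistic model, AUC metric, fold counts, and simple averaging of performance are our illustrative assumptions, not details from the review; the point is only where imputation sits relative to the validation split.

```python
# Sketch, not the review authors' code: contrasts MI-prior-IMV (impute the
# whole dataset, then cross-validate) with MI-during-IMV (re-impute inside
# every training fold). Performance is pooled by simple averaging here;
# Rubin's rules are the fuller treatment.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] + rng.normal(size=500) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan      # ~20% missing completely at random

M = 10                                      # number of imputations
cv = KFold(n_splits=5, shuffle=True, random_state=0)

def auc_mi_prior_imv() -> float:
    """MI before IMV: simpler, but the imputation model sees all rows,
    including the rows later used as validation folds."""
    aucs = []
    for m in range(M):
        imp = IterativeImputer(sample_posterior=True, random_state=m)
        Xm = imp.fit_transform(X)           # fit on the full dataset
        for tr, te in cv.split(Xm):
            fit = LogisticRegression().fit(Xm[tr], y[tr])
            aucs.append(roc_auc_score(y[te], fit.predict_proba(Xm[te])[:, 1]))
    return float(np.mean(aucs))

def auc_mi_during_imv() -> float:
    """MI inside IMV: costlier (M imputations per fold), but the imputer is
    fit on training rows only, avoiding leakage into the held-out fold."""
    aucs = []
    for tr, te in cv.split(X):
        for m in range(M):
            imp = IterativeImputer(sample_posterior=True, random_state=m)
            Xtr = imp.fit_transform(X[tr])  # training rows only
            Xte = imp.transform(X[te])
            fit = LogisticRegression().fit(Xtr, y[tr])
            aucs.append(roc_auc_score(y[te], fit.predict_proba(Xte)[:, 1]))
    return float(np.mean(aucs))

print(f"MI-prior-IMV AUC:  {auc_mi_prior_imv():.3f}")
print(f"MI-during-IMV AUC: {auc_mi_during_imv():.3f}")
```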
Inconsistency in publishers' responses to integrity concerns about published research: evidence and suggested improvements
Andrew Grey, Alison Avenell, Alan Gaby, Mark J. Bolland
Journal of Clinical Epidemiology, volume 186, Article 111918 (published August 5, 2025). DOI: 10.1016/j.jclinepi.2025.111918

Objectives: To collate, review, and comment upon publishers' responses to integrity concerns.

Study Design and Setting: We conducted a narrative review of publications reporting the responses of publishers to concerns about the integrity of research published in their journals. We also drew on extensive personal experience and a new analysis of publisher responses to integrity concerns about 172 clinical trial publications by a single research group, 5 years after the concerns were raised simultaneously with the affected publishers.

Results: Existing evidence shows that slow, incomplete, and opaque responses from publishers to integrity concerns are common in both clinical and preclinical disciplines. When we raised very similar concerns about a large set of journal articles simultaneously with publishers, times to resolution varied markedly, and outcomes ranged from no editorial action to retraction of all papers.

Conclusion: Publishers' responses to notification of concerns about the integrity of publications in their journals are markedly inconsistent, both in their timing and in the nature of the editorial decisions. The reasons for these inconsistencies are unknown but could be addressed by a collaborative and transparent process involving publisher integrity staff and academics with expertise in publication integrity. Understanding the reasons for the disparate outcomes is likely to facilitate improvements that will enhance the trustworthiness of the biomedical literature.

Plain Language Summary: Existing evidence shows that publishers are slow to assess concerns about the reliability of research publications and that their assessments produce markedly inconsistent outcomes. Our finding of widely disparate outcomes when publishers assessed overlapping concerns about 172 clinical trials by a single research group reinforces this point. Improving the timeliness, transparency, and systematicity of publisher assessments is likely to enhance the reliability of published research.
{"title":"Artificial intelligence to semiautomate trustworthiness assessment of randomized controlled trials: response to Au et al","authors":"Hinpetch Daungsupawong, Viroj Wiwanitkit","doi":"10.1016/j.jclinepi.2025.111734","DOIUrl":"10.1016/j.jclinepi.2025.111734","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"184 ","pages":"Article 111734"},"PeriodicalIF":5.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflections on the metascience conference 2025","authors":"Lesley Uttley","doi":"10.1016/j.jclinepi.2025.111912","DOIUrl":"10.1016/j.jclinepi.2025.111912","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"184 ","pages":"Article 111912"},"PeriodicalIF":5.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144867280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capturing the influx of living systematic reviews: a systematic methodological survey
Clareece R. Nevill, Amin Sharifan, Aoife O'Mahony, Hadiqa Tahir, Will Robinson, Urvi Modha, Lara A. Kahale, Assem M. Khamis, Elie A. Akl, Ellesha A. Smith, Alex J. Sutton, Suzanne C. Freeman, Nicola J. Cooper
Journal of Clinical Epidemiology, volume 186, Article 111904 (published July 23, 2025). DOI: 10.1016/j.jclinepi.2025.111904

Objectives: Living systematic reviews (LSRs) are an emerging type of review that is continuously updated as new evidence becomes available. A previous methodological survey, conducted in 2021, identified and studied all health-based LSRs. Since then, the landscape has changed, including the ongoing accumulation of COVID-19 research and the availability of automation tools. Furthermore, various methods and guidance exist for conducting LSRs, and review authors are often encouraged to explore opportunities to maximize dissemination. We conducted an update of the LSR survey to describe LSRs in a "post-COVID" era. Our objectives were to summarize the uptake of LSRs, describe their characteristics, including methodological and communicative characteristics, and identify patterns in LSR attributes.

Study Design and Setting: We systematically searched for new LSRs and any updates (including updates of LSRs identified previously) published between May 2021 and March 2023 in any health field. Eligible articles were identified, and data were extracted and combined with data from the original survey. Outcomes broadly included LSR characteristics and uptake, and methodological and communicative characteristics. Analyses were descriptive and included visualizations to explore distributions, combinations, and any time trends in characteristics.

Results: A total of 549 records across 168 individual LSRs were identified, of which 92 LSRs were newly detected. Although COVID-19 LSRs dominated in later years, uptake of non-COVID-19 LSRs increased; the former searched the evidence and updated/published results more frequently. Where reported, the approach to conducting updates varied considerably, including a wide range of prespecified frequencies and/or triggers. Of the 337 updates, 25.5% reported on ongoing studies, and among LSRs with published results, 58.5% used the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) system. The proportion of LSRs with a centralized platform for sharing results was higher among (i) those that included updates, (ii) Cochrane reviews, (iii) non-COVID-19 LSRs, and (iv) funded LSRs. Few LSRs included interactive features.

Conclusion: The number of LSRs is growing at an accelerating rate, but this survey shows that methodological limitations and challenges remain that need to be carefully addressed. Key areas for improvement include more explicit prespecified updating strategies and better use of web-based platforms for disseminating results.

Plain Language Summary: Every year, a huge amount of health-related research is published, and it is difficult for busy doctors and health-care workers to keep up to date with all of the new evidence. To help with this, the research can be summarized by carrying out a review. This is known as a "systematic review" when it is carried out using systematic, predefined methods.
Lack of methodological rigor and limited coverage of generative artificial intelligence in existing artificial intelligence reporting guidelines: a scoping review
Xufei Luo, Bingyi Wang, Qianling Shi, Zijun Wang, Honghao Lai, Hui Liu, Yishan Qin, Fengxian Chen, Xuping Song, Long Ge, Lu Zhang, Zhaoxiang Bian, Yaolong Chen
Journal of Clinical Epidemiology, volume 186, Article 111903 (published July 18, 2025). DOI: 10.1016/j.jclinepi.2025.111903

Objectives: This study aimed to systematically map the development methods, scope, and limitations of existing artificial intelligence (AI) reporting guidelines in medicine and to explore their applicability to generative AI (GAI) tools, such as large language models (LLMs).

Study Design and Setting: This scoping review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). Five information sources were searched from inception to December 31, 2024: MEDLINE (via PubMed), the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, China National Knowledge Infrastructure, FAIRsharing, and Google Scholar. Two reviewers independently screened records and extracted data using a predefined Excel template. Data included guideline characteristics (eg, development methods, target audience, AI domain), adherence to EQUATOR Network recommendations, and consensus methodologies. Discrepancies were resolved by a third reviewer.

Results: Sixty-eight AI reporting guidelines were included; 48.5% focused on general AI, whereas only 7.4% addressed GAI/LLMs. Methodological rigor was limited: 39.7% described their development processes, 42.6% involved multidisciplinary experts, and 33.8% followed EQUATOR recommendations. Significant overlap existed, particularly in medical imaging (20.6% of guidelines). GAI-specific guidelines (14.7%) lacked comprehensive coverage and methodological transparency.

Conclusion: Existing AI reporting guidelines in medicine show suboptimal methodological rigor, redundancy, and insufficient coverage of GAI applications. Future and updated guidelines should prioritize standardized development processes, multidisciplinary collaboration, and an expanded focus on emerging AI technologies such as LLMs.
Improving the reporting and use of trial results in clinical trials registries: global practices, barriers, and recommendations
Tabea Kaul, Johanna A.A. Damen, Anna Lene Seidler, Melina Willson, Ghassan Karam, Demy Idema, Mike Kusters, Mary Ann Dowsett, Tala Ibrahim Hasan Abutahoun, Lotty Hooft, Kylie E. Hunter
Journal of Clinical Epidemiology, volume 186, Article 111901 (published July 14, 2025). DOI: 10.1016/j.jclinepi.2025.111901

Background and Objectives: Recent initiatives have promoted results reporting on clinical trials registries to improve transparency and reduce publication bias. However, local reports suggest that results reporting on registries is often inadequate, limiting their usefulness for evidence synthesis. We aimed to 1) provide an overview of results reporting practices across clinical trials registries globally, 2) identify barriers and facilitators to reporting and using results from trials registries, and 3) develop recommendations to improve the reporting and usability of results in trials registries.

Study Design and Setting: Three-part mixed-methods study. Part 1: descriptive analysis of results reporting practices for randomized controlled trials (RCTs) starting between 2010 and 2022 across six trials registries (one from each World Health Organization region), with an in-depth analysis focusing on reporting formats and accessibility. Part 2: two separate online surveys targeting trial registrants and evidence users. Part 3: discussion among the author group to generate recommendations.

Results: Part 1: our sample included 201,265 RCTs, of which 17% (33,163 trials) reported some form of results on a registry. In a subset, 63% of posted results were accessible in the registry record, with 64% to 98% of results data available in a reusable format. Part 2: 86% (194/225) of registrants were aware of registry results reporting options, but time, effort, and fear of interference with journal publication were barriers. Among evidence users, 51% (36/70) had used registry results, with barriers including mistrust of non-peer-reviewed data and difficulty locating results. Part 3: recommendations include standardizing registry interfaces, addressing misconceptions, and fostering trust in registry-reported results.

Conclusion: Results reporting practices on registries are increasing. Improving them further requires better infrastructure, policies, training, and funding. With adequate support, registries can become essential for transparent and efficient evidence dissemination, enhancing research quality and reducing duplication.

Plain Language Summary: Clinical trials registries are online databases where researchers register medical studies and share their status, details, and results. These registries exist globally and allow researchers to track ongoing studies and emerging evidence on a topic. They also enable the public to identify trials they may be interested in joining. Although regulations require researchers to share study results on registries within a year of study completion, only a fraction of results are currently available. Our project 1) evaluated the reporting of study results across different trials registries, 2) surveyed people involved in reporting and using study findings from registries, and 3) developed recommendations to improve the reporting and usability of results in trials registries.
Assessment of traditional and novel effect measures for time-to-event endpoints: a meta-epidemiological study of published oncological trials
Qiao Huang, Rong Peng, Rui-Qing Cai, Yong-Bo Wang, Si-Yu Yan, Xiang-Ying Ren, Xian-Tao Zeng, Ying-Hui Jin
Journal of Clinical Epidemiology, volume 186, Article 111900 (published July 10, 2025). DOI: 10.1016/j.jclinepi.2025.111900

Objectives: Time-to-event endpoints are essential for evaluating treatment efficacy in oncology trials. The hazard ratio, although commonly used, captures only the relative effect and may not suffice in diverse clinical contexts. Traditional measures based on incidence rates and on the restricted mean survival time (RMST), along with novel measures based on the average hazard (AH), offer both relative (ratio-based) and absolute (difference-based) perspectives. However, these measures have not been systematically evaluated in real-world oncology trials, limiting their practical application.

Methods: This meta-epidemiological study analyzed individual patient data reconstructed from the Kaplan-Meier curves of 46 randomized controlled oncology trials published in five high-impact journals, covering 52 curves and 35,994 patients. The reconstruction used a validated algorithm to build patient-level data from the published curves. Seven effect measures were evaluated: the hazard ratio, the incidence rate ratio and difference, the RMST ratio and difference, and the AH ratio and difference. Pairwise concordance among these measures was assessed using visualizations, median differences, Spearman rank correlations, and intraclass correlation coefficients.

Results: Agreement in clinical direction and statistical significance among the seven measures was high. Among the four ratio-based (relative) measures, the hazard ratio, incidence rate ratio, and AH ratio demonstrated high agreement, with median differences ≤0.004, correlations >0.96, and intraclass correlation coefficients >0.87. The RMST ratio showed substantial inconsistency, with the other relative measures being approximately 1.25 times higher; nonproportionality of hazards further increased this ratio to 1.38. Among the three difference-based (absolute) measures, the incidence rate difference and AH difference were closely aligned, whereas the RMST difference exhibited substantial variability, with median ratios of treatment-effect magnitude exceeding 300.

Conclusion: AH-based measures provide promising alternatives to the hazard ratio by incorporating both relative and absolute perspectives. RMST-based measures offer clinically relevant insights but differ substantially in magnitude; they should therefore be interpreted in their own right and not compared directly with other measures. Incidence rate-based measures may serve as practical approximations when other metrics are unavailable. Routine reporting of multiple effect measures in oncology trials can enhance clinical interpretation and support more nuanced, evidence-based decision-making.
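For orientation, a sketch of common working definitions of these measures is given below for a prespecified horizon tau. The notation is ours, and the average-hazard form shown (event probability divided by RMST) is one common definition that we assume here; the paper itself should be consulted for the exact estimands.

```latex
% Sketch of standard working definitions (requires amsmath); notation is
% ours, not the paper's. Group g in {0,1}: S_g(t) survival function,
% F_g(t) = 1 - S_g(t), D_g observed events, T_g total person-time.
\begin{align*}
\text{Incidence rate:}\quad
  & \mathrm{IR}_g = D_g / T_g,
  & \mathrm{IRR} &= \mathrm{IR}_1 / \mathrm{IR}_0,
  & \mathrm{IRD} &= \mathrm{IR}_1 - \mathrm{IR}_0, \\
\text{RMST:}\quad
  & \mathrm{RMST}_g(\tau) = \int_0^{\tau} S_g(t)\,dt,
  & \text{ratio} &= \frac{\mathrm{RMST}_1(\tau)}{\mathrm{RMST}_0(\tau)},
  & \text{diff} &= \mathrm{RMST}_1(\tau) - \mathrm{RMST}_0(\tau), \\
\text{Average hazard:}\quad
  & \mathrm{AH}_g(\tau) = \frac{F_g(\tau)}{\mathrm{RMST}_g(\tau)},
  & \mathrm{AHR} &= \frac{\mathrm{AH}_1(\tau)}{\mathrm{AH}_0(\tau)},
  & \mathrm{AHD} &= \mathrm{AH}_1(\tau) - \mathrm{AH}_0(\tau).
\end{align*}
% The hazard ratio (HR) is estimated from the Cox model,
% h_1(t) = HR \cdot h_0(t), under the proportional-hazards assumption.
```

Note how the AH combines both perspectives named in the conclusion: its numerator is an absolute event probability at tau, while its denominator rescales by the average event-free time, so ratios and differences of AH remain interpretable even when hazards are nonproportional.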