{"title":"Comment on “Improving the reporting and use of trial results in clinical trials registries: global practices, barriers, and recommendations”","authors":"Hamza Khan","doi":"10.1016/j.jclinepi.2025.111975","DOIUrl":"10.1016/j.jclinepi.2025.111975","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"188 ","pages":"Article 111975"},"PeriodicalIF":5.2,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145071121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual versus in-person meetings for practice guideline panels: A qualitative study.","authors":"Anas El Zouhbi, Lea Assaf, Gladys Honein-AbouHaidar, Joanne Khabsa, Elie A Akl","doi":"10.1016/j.jclinepi.2025.111974","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2025.111974","url":null,"abstract":"<p><strong>Introduction: </strong>Traditionally, practice guideline panel meetings were conducted in-person. During the COVID-19 pandemic, meetings transitioned to the virtual format. While guideline developers appreciated the increased flexibility and reduced expenses, they were concerned about reduced engagement and networking possibilities.</p><p><strong>Objectives: </strong>To understand interest-holders' experiences with virtual and in-person panel meeting formats, and to explore their views on the relative advantages, disadvantages, and impact on recommendation quality.</p><p><strong>Methods: </strong>We interviewed individuals from different 'interest-holder' groups who had participated in both the in-person and virtual formats of panel meetings. These included panelists, chairs, staff of a guideline-developing organization, guideline methodologists, and systematic reviewers. We recruited participants until data saturation was reached. We used Quirkos for data analysis in accordance with Braun and Clarke's principles for effectively identifying and reporting emerging themes.</p><p><strong>Results: </strong>We reached data saturation after interviewing 16 individuals with diverse career backgrounds and roles in guideline development. Six major themes were generated from the interviews. Four themes related to the comparison between the virtual and the in-person formats: resources and logistics, engagement, impact on recommendations, and optimizing virtual meetings. The remaining two themes related to the hybrid format and mixing formats.
While the virtual format was favored for its lower resource use and environmental friendliness, the logistics of online connectivity were a concern. The in-person format allowed better engagement in terms of discussion and informal interactions. Despite the risk of lower participation from key members in virtual meetings, there were no concerns about the impact of either format on the quality of the guideline. Online tools (e.g., online chatting, virtual hand raising, polling, screen sharing, virtual breakout rooms, and recording capabilities) can enhance the efficiency of not only virtual meetings, but also in-person meetings. Participants held varying but generally negative views on hybrid meetings; they favored mixing formats, typically starting with an in-person meeting.</p><p><strong>Conclusion: </strong>Participants in our study typically preferred the in-person format over the virtual format and did not favor the hybrid format. Mixing formats and the use of online tools even for in-person meetings can create efficiencies. We build on the findings to propose an approach for deciding on the format of the guideline panel meeting.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111974"},"PeriodicalIF":5.2,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145066425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementing a randomization consent to enable Trials within Cohorts in the Swiss HIV Cohort Study – A mixed-methods study","authors":"Elias R. Zehnder , Christof Manuel Schönenberger , Julia Hüllstrung , Mona Elalfy , Beverley Nickolls , Frédérique Chammartin , David Hans-Ulrich Haerry , Ellen Cart-Richter , David Jackson-Perry , Samuel Aggeler , Julian Steinmann , Sandra E. Chaudron , Katharina Kusejko , Marcel Stoeckle , Alexandra Calmy , Matthias Cavassini , Enos Bernasconi , Dominique Braun , Johannes Nemeth , Irene Abela , Matthias Briel","doi":"10.1016/j.jclinepi.2025.111973","DOIUrl":"10.1016/j.jclinepi.2025.111973","url":null,"abstract":"<div><h3>Objectives</h3><div>Trials within Cohorts (TwiCs) is a promising design to make randomized trials more efficient. Cohort participants are asked for consent to be randomized into future low-risk interventions tested within the cohort. To enable TwiCs in the Swiss HIV Cohort Study, we added this “randomization consent” to the protocol and subsequently approached cohort participants for written consent. This study describes the TwiCs implementation process.</div></div><div><h3>Study Design and Setting</h3><div>We used a mixed methods design to evaluate the implementation process. We used cohort data to characterize participants accepting and declining randomization consent. We conducted a cross-sectional survey with cohort physicians to gather opinions and experiences regarding the TwiCs design. We conducted semistructured interviews with involved stakeholders (physicians, research personnel, participants, and ethics committee members) to gain insights into attitudes, barriers, and facilitators to implementing the randomization consent.
In addition, we performed observations in cohort visits where the randomization consent was offered.</div></div><div><h3>Results</h3><div>Between July 2024 and July 2025, among 5297 cohort participants approached, 3067 (57.9%) accepted and 734 (13.8%) declined the randomization consent. In 1496 (28.2%) cases the decision was postponed to the next visit. Male sex, younger age, higher education, being consulted by a steady physician for at least three visits, and shorter cohort participation time were associated with higher acceptance rates. Interviewed participants cited fear of additional effort and a lack of interest in research as reasons for declining consent. The overall perception of TwiCs among cohort physicians and research personnel was positive. They recognized the potential to simplify the conduct of trials, especially to test low-risk interventions. Ethical concerns about the TwiCs consent procedure were rare. However, an explicit randomization consent was considered necessary by members of ethics committees, while several physicians and participants felt positive about randomizing without explicit consent. The roll-out of the randomization consent was facilitated by well-trained, motivated personnel and seamless integration into clinical routine. For physicians, the main obstacles in the consenting process were language barriers, participants' difficulty understanding the concept, and time constraints due to tight consultation schedules.</div></div><div><h3>Conclusion</h3><div>The implementation of the TwiCs design, including the roll-out of a randomization consent in an existing, large-scale cohort, is feasible.
The acceptance rate among participants was high.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"188 ","pages":"Article 111973"},"PeriodicalIF":5.2,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145066376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"International application of an optimized harmonization approach for longitudinal cognitive data in people with HIV","authors":"Lang Lang , Leah H. Rubin , Beau M. Ances , Aggrey Anok , Sarah Cooley , Raha M. Dastgheyb , Rebecca E. Easter , Donald R. Franklin Jr. , Robert K. Heaton , Scott L. Letendre , Gertrude Nakijozi , Thomas Marcotte , Robert Paul , Eran F. Shorer , Stephan Tomusange , David E. Vance , Yanxun Xu","doi":"10.1016/j.jclinepi.2025.111972","DOIUrl":"10.1016/j.jclinepi.2025.111972","url":null,"abstract":"<div><h3>Objectives</h3><div>We previously developed a refined longitudinal data harmonization method to address the challenge of nonoverlapping cognitive tests across cohorts, successfully harmonizing data from 5 large-scale US HIV studies. Building on this harmonized data set, we now aim to apply this method to an additional US HIV study and cognitive data from HIV studies in China, India, and Uganda. This effort will result in a more comprehensive data set with a larger, internationally diverse sample that includes both people with HIV and people without HIV.</div></div><div><h3>Study Design and Setting</h3><div>The new cohorts to be harmonized included cognitive tests that did not fully overlap across studies, a challenge for traditional harmonization methods. We applied our refined approach, designed for scenarios without direct test linkage. In the Uganda cohort, where a key method assumption was violated, we implemented targeted adjustments.</div></div><div><h3>Results</h3><div>The harmonized cognitive domain scores were consistent across cohorts and strongly correlated with raw or log-transformed cognitive test data (eg, timed outcomes). 
These scores preserved key patterns of variation observed in the raw data across demographics—such as age, education, and race—and maintained age-related longitudinal trajectories of cognitive performance derived from all participants’ visits.</div></div><div><h3>Conclusion</h3><div>The resulting harmonized data set includes 18,270 participants across multiple countries, significantly enhancing its diversity and utility. It lays the groundwork for developing normative data and conducting more robust analyses to address critical neuro-HIV research questions. This study also demonstrates the adaptability of the refined harmonization method in integrating new data and accommodating methodological challenges.</div></div><div><h3>Plain Language Summary</h3><div>People with HIV (PWH) often face a variety of cognitive challenges, but these issues can look different for each person. As different studies use different tests to measure cognitive abilities, it is difficult to combine the results from multiple studies and draw clear conclusions. In our previous work, we developed a refined method to harmonize data from 5 large US-based HIV neuro studies. This method can handle scenarios in which nonoverlapping cognitive tests are used in certain domains across different studies. We now aim to include additional cohorts from the United States, China, India, and Uganda. Because these new cohorts also use nonoverlapping cognitive tests in certain domains, we applied this approach to harmonize the new data into our previously harmonized data. Our refined method created “harmonized scores” for cognitive abilities that closely matched the original test results.
These scores captured differences related to age, education, and other factors while preserving how each person's cognitive abilities changed over time.","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"188 ","pages":"Article 111972"},"PeriodicalIF":5.2,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145058759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling rates of trial attrition: an analysis of individual participant data from 90 randomized controlled trials of pharmacological interventions for multiple conditions","authors":"Ryan McChrystal , Peter Hanlon , Jennifer S. Lees , David M. Phillippo , Nicky J. Welton , Katie Gillies , David McAllister","doi":"10.1016/j.jclinepi.2025.111971","DOIUrl":"10.1016/j.jclinepi.2025.111971","url":null,"abstract":"<div><h3>Background</h3><div>Trial attrition threatens the validity of randomized controlled trials (hereafter trials) and has implications for trial design, conduct, and analysis. Few studies have examined how attrition rates change over follow-up or the types of attrition reported. Therefore, we estimated attrition rates using individual participant data for a range of conditions.</div></div><div><h3>Methods</h3><div>We obtained the number of days participants spent in trials, completion status, and reported reasons for noncompletion. For consistency with the clinicaltrials.gov reporting guidelines, we categorized attrition into adverse event, lack of efficacy, lost to follow-up, principal investigator/sponsor decision, protocol violation, voluntary withdrawal, and other. For each trial, we estimated the cumulative incidence of attrition and fitted six parametric time-to-event models (exponential, generalized gamma, Gompertz, log-logistic, log-normal, and Weibull). Goodness of fit was evaluated graphically and using the Akaike Information Criterion (AIC). Attrition rates were obtained for each trial as instantaneous risk (ie, hazard rates) from the best-fitting model.</div></div><div><h3>Results</h3><div>We included 90 trials (86,107 participants): type 2 diabetes (45.6%), chronic obstructive pulmonary disease (22.2%), and eight other conditions (32.2%). Attrition occurred for 14,572 (16.9%) participants, ranging from 3.4% to 43.7% among trials. Adverse event (43.5%) and voluntary withdrawal (24.1%) were the commonest categories of attrition. 
Gompertz and log-normal time-to-event models were the most frequent best-fitting models. Hazard rates typically peaked near the beginning of trials and decreased thereafter.</div></div><div><h3>Conclusion</h3><div>Attrition rates were generally highest near the beginning of trials, decreased thereafter, and were well-described by Gompertz and log-normal time-to-event models. These findings can inform the design, conduct, and analysis of clinical trials.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"187 ","pages":"Article 111971"},"PeriodicalIF":5.2,"publicationDate":"2025-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145024737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
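The attrition-modeling record above fits parametric time-to-event models to dropout times and picks the best one by AIC. As a hedged illustration (not the authors' code: they fitted six models to real trial data, and all function names here are assumptions of this sketch), the comparison can be reproduced in plain Python for just two of those models — exponential (closed-form MLE) and Weibull (profile likelihood with a grid search over the shape) — on synthetic dropout times:

```python
import math

def exp_fit(times, events):
    """Exponential attrition model: constant hazard, closed-form MLE
    under right-censoring. Returns (log-likelihood, AIC)."""
    d = sum(events)                      # observed dropouts
    total_time = sum(times)              # total person-time at risk
    rate = d / total_time                # MLE of the hazard rate
    log_lik = d * math.log(rate) - rate * total_time
    return log_lik, 2 * 1 - 2 * log_lik  # AIC with 1 parameter

def weibull_fit(times, events):
    """Weibull attrition model via profile likelihood: for each candidate
    shape k the MLE of the scale is closed-form, so we grid-search k.
    Returns (log-likelihood, AIC, shape). A shape below 1 means a hazard
    that peaks at the start of follow-up and declines thereafter."""
    d = sum(events)
    best = None
    for i in range(10, 510):             # shapes k from 0.10 to 5.09
        k = i / 100
        scale = (sum(t ** k for t in times) / d) ** (1 / k)
        log_lik = sum(
            (math.log(k / scale) + (k - 1) * math.log(t / scale) if e else 0.0)
            - (t / scale) ** k
            for t, e in zip(times, events)
        )
        if best is None or log_lik > best[0]:
            best = (log_lik, k)
    log_lik, k = best
    return log_lik, 2 * 2 - 2 * log_lik, k  # AIC with 2 parameters

# Synthetic dropout times with a declining hazard (Weibull, shape 0.5),
# generated from a deterministic low-discrepancy sequence instead of an RNG.
times = [(-math.log(1 - (i * 0.6180339887 % 1))) ** 2 for i in range(1, 200)]
events = [1] * len(times)                # 1 = dropout observed, 0 = censored

ll_e, aic_e = exp_fit(times, events)
ll_w, aic_w, shape = weibull_fit(times, events)
# On these data the Weibull AIC beats the exponential AIC, and the fitted
# shape is well below 1, matching the reported early-peaking hazard.
```

Because the Weibull nests the exponential (shape = 1), its log-likelihood can only improve; AIC then asks whether that improvement is worth the extra parameter, which is the same trade-off the study applied across its six candidate models.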
{"title":"Overview of evidence synthesis types and modes","authors":"Barbara Nussbaumer-Streit , Andrew Booth , Chantelle Garritty , Candyce Hamel , Zachary Munn , Andrea C. Tricco , Danielle Pollock","doi":"10.1016/j.jclinepi.2025.111970","DOIUrl":"10.1016/j.jclinepi.2025.111970","url":null,"abstract":"<div><h3>Background and Objectives</h3><div>Evidence syntheses systematically compile and analyze information from multiple sources to support health-care decision-making. As many different types of questions need to be answered in health care, different evidence synthesis types have emerged. In this article, we introduce the most common types of evidence synthesis.</div></div><div><h3>Study Design and Setting</h3><div>We discuss the aims, key methodological features, and illustrative examples of different evidence synthesis types and modes, drawing on our work with the Evidence Synthesis Taxonomy Initiative (ESTI).</div></div><div><h3>Results</h3><div>Evidence synthesis types include systematic reviews, qualitative evidence syntheses, mixed methods reviews, overviews of reviews, and ‘big picture reviews’ (scoping reviews, mapping reviews, and evidence gap maps). Additionally, we focus on rapid and living reviews as modes and how they can be applied to different evidence synthesis types.</div></div><div><h3>Conclusion</h3><div>It is essential to understand the main types of evidence synthesis to choose the most suitable method for addressing a specific health-related question.</div></div><div><h3>Plain Language Summary</h3><div>Health-care decisions should be based on the best available evidence. To bring together findings from many studies, researchers use evidence synthesis: structured methods that summarize what is known on a topic. Because health questions differ, various types of evidence syntheses exist, each designed for specific needs.
This article explains the aims and characteristics of the most common types of evidence synthesis: systematic reviews, overviews of reviews, qualitative evidence syntheses, mixed methods reviews, and ‘big picture reviews’ (scoping reviews, mapping reviews, and evidence gap maps). We also describe two ways evidence syntheses can be carried out: rapid reviews (done quickly to support urgent decisions) and living reviews (regularly updated as new evidence becomes available). Understanding the different approaches helps clinicians, patients, and policymakers select the right type of review for their health questions. This ensures that decisions are guided by evidence that is both reliable and appropriate for the situation.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"187 ","pages":"Article 111970"},"PeriodicalIF":5.2,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145008545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Living guidelines have come of age: new insights and methods – an open call for contributions.","authors":"David J Tunnicliffe, Heath White, Tari Turner","doi":"10.1016/j.jclinepi.2025.111968","DOIUrl":"10.1016/j.jclinepi.2025.111968","url":null,"abstract":"<p><strong>Objectives: </strong>Guideline developers have long recognized the importance of maintaining up-to-date guidelines to support evidence-based practice and policy, contributing to narrowing the gap between research generation and its application. This commentary reflects on key insights from the Journal of Clinical Epidemiology's Methods for Living Guidelines series and issues an open call for contributions aimed at advancing the development, implementation, and evaluation of living guideline methods.</p><p><strong>Methods: </strong>This commentary synthesizes methodological innovations and practice experiences shared in the Methods for Living Guidelines series, highlighting emerging practices and lessons learned.</p><p><strong>Results: </strong>Although the practice of continuously updated guidance predates its formal naming, the COVID-19 pandemic brought living guidelines to the forefront, accelerating their adoption and methodological innovation. This period saw methodological advances, including the integration of rapid evidence synthesis, dynamic updating protocols, and stakeholder engagement strategies, which collectively enhanced the responsiveness and relevance of guideline development.</p><p><strong>Conclusion: </strong>Living guidelines offer a flexible and adaptive framework that aligns with the pace of emerging evidence and evolving clinical needs. Their successful implementation depends on sustained investment in methodological rigor and collaborative networks. The Journal of Clinical Epidemiology's series highlights the importance of shared learning and transparency in refining these approaches.
This commentary calls upon researchers and guideline methodologists to contribute to the ongoing advancement of living guideline methods, ensuring their reliability, relevance, and impact in addressing global challenges.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111968"},"PeriodicalIF":5.2,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145006820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pragmatic evidence and the nature of randomized trials","authors":"Perrine Janiaud , Lars G. Hemkens","doi":"10.1016/j.jclinepi.2025.111961","DOIUrl":"10.1016/j.jclinepi.2025.111961","url":null,"abstract":"<div><h3>Background and Objectives</h3><div>Pragmatic trials are increasingly gaining recognition. However, what pragmatic trials are is frequently misunderstood. They are often described superficially, by their surface features only: as studies conducted in “real world” settings, with wide inclusion criteria and less complicated study procedures. However, these features are neither necessary nor defining characteristics. They also do not guarantee that trials sharing them are useful to inform medical practice. There is a danger of losing sight of the essence of the powerful pragmatic approach.</div></div><div><h3>Methods, Results, and Conclusion</h3><div>Here we describe the key elements of the pragmatic approach and their close relationship with the original nature of randomized trials. Our aim is to refocus teaching, research, and interpretation of evidence, not as a novel approach but as a return towards the essence of pragmatic evidence and the nature of randomized trials. We first go back to the origin of pragmatism in philosophy and its introduction in medicine and revisit the nature of randomized trials in their pure form. We highlight the critical distinction between assessing treatment decisions and understanding the mechanisms of these decisions. We show why the current view on randomized trials in medicine has lost a pragmatic focus, with the explanatory design features of blinding and adherence control often seen as defining characteristics or quality criteria of randomized trials.
We then highlight common misunderstandings of pragmatic trials and conclude with an overview of their key features to provide pragmatic evidence.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"187 ","pages":"Article 111961"},"PeriodicalIF":5.2,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145001812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peer review of prediction model studies in oncology needs improvement: A systematic review of open peer review reports from BMC journals","authors":"Jie Ma, Angela MacCarthy, Shona Kirtley, Patricia Logullo, Paula Dhiman , Gary S. Collins","doi":"10.1016/j.jclinepi.2025.111967","DOIUrl":"10.1016/j.jclinepi.2025.111967","url":null,"abstract":"<div><h3>Objectives</h3><div>To evaluate the completeness and quality of open peer review reports from BioMed Central (BMC) journals for regression-based clinical prediction model studies in oncology, focusing on adherence to methodological standards, reporting guidelines, and constructive feedback.</div></div><div><h3>Methods</h3><div>We searched for prediction model studies in oncology published in BioMed Central journals in 2021. Data extraction used the Assessment of review Reports with a Checklist Available to eDItors and Authors (ARCADIA) checklist (a 13-item tool assessing review quality) with additional criteria (eg, word count, focus of comments on manuscript sections). Two investigators independently evaluated all open peer reviews, with conflicts resolved by a third researcher. Descriptive statistics and narrative synthesis were applied.</div></div><div><h3>Results</h3><div>Peer reviews were brief (median: 243 words; range: 0–677), with 82.7% focusing on methods or results but rarely addressing limitations (<20%) or generalizability. No reviewers verified adherence to reporting guidelines (eg, TRIPOD); only one reviewer mentioned guideline use. Reviews prioritized superficial issues (67.3% focused on presentation) over methodological rigor (38.5% evaluated statistical methods). Only 19.2% of reviews suggested statistical revisions, and <1% addressed protocol deviations or data availability.</div></div><div><h3>Conclusion</h3><div>Our findings show that peer reviews of prediction models lack depth, methodological scrutiny, and enforcement of reporting standards.
This risks clinical harm from biased models and perpetuates research waste. Reforms are urgently needed, including implementing reporting guidelines (eg, TRIPOD+AI), mandatory reviewer training, and recognition of peer review as scholarly labor. Journals must prioritize methodological rigor in reviews to ensure reliable prediction models and safeguard patient care.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"188 ","pages":"Article 111967"},"PeriodicalIF":5.2,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145001858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}