{"title":"Corrigendum to \"Development of an international glossary for clinical guidelines collaboration\" [Journal of Clinical Epidemiology 158 (2023) 84-91].","authors":"Rachel E Christensen, Michael D Yi, Bianca Y Kang, Sarah A Ibrahim, Noor Anvery, McKenzie Dirr, Stephanie Adams, Yasser S Amer, Alexandre Bisdorff, Lisa Bradfield, Steve Brown, Amy Earley, Lisa A Fatheree, Pierre Fayoux, Thomas Getchius, Pamela Ginex, Amanda Graham, Courtney R Green, Paolo Gresele, Helen Hanson, Norrisa Haynes, Laszlo Hegedüs, Heba Hussein, Priya Jakhmola, Lucia Kantorova, Rathika Krishnasamy, Alex Krist, Gregory Landry, Erika D Lease, Luis Ley, Gemma Marsden, Tim Meek, Martin Meremikwu, Carmen Moga, Saphia Mokrane, Amol Mujoomdar, Skye Newton, Norma O'Flynn, Gavin D Perkins, Emma-Jane Smith, Chatura Prematunge, Jenna Rychert, Mindy Saraco, Holger J Schünemann, Emily Senerth, Alan Sinclair, James Shwayder, Carla Stec, Suzana Tanni, Nichole Taske, Robyn L Temple-Smolkin, Louise Thomas, Sherene Thomas, Britt Tonnessen, Amy S Turner, Anne Van Dam, Mitchell van Doormaal, Yung Liang Wan, Christina B Ventura, Emma McFarlane, Rebecca L Morgan, Toju Ogunremi, Murad Alam","doi":"10.1016/j.jclinepi.2024.111514","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2024.111514","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111514"},"PeriodicalIF":7.3,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142367240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of health equity considerations in cochrane reviews of lifestyle interventions for cardiovascular health in adults","authors":"Moriasi Nyanchoka , Omolola Titilayo Alade , Jennifer Petkovic , Tiffany Duque , L. Susan Wieland","doi":"10.1016/j.jclinepi.2024.111546","DOIUrl":"10.1016/j.jclinepi.2024.111546","url":null,"abstract":"<div><h3>Objectives</h3><div>Cardiovascular disease (CVD) is the leading cause of global disease burden and rising health-care costs. Systematic reviews (SRs) rigorously evaluate evidence on health interventions' effects and guide personal, clinical, and policy decision-making. Health equity is the absence of avoidable and unfair differences in health between groups within a population. Assessing equity in lifestyle interventions for cardiovascular health is important due to persisting health inequities in CVD burden and access to interventions. We aim to explore how health equity considerations are addressed in Cochrane SRs of lifestyle interventions for cardiovascular health.</div></div><div><h3>Study Design and Setting</h3><div>This is a methodological review of Cochrane SRs of lifestyle interventions for cardiovascular health using the PROGRESS-Plus framework. PROGRESS-Plus stands for Place of residence, Race/ethnicity/culture/language, Occupation, Gender/sex, Religion, Education, Socioeconomic status, and Social capital, while “Plus” stands for additional factors associated with discrimination and exclusion such as age, disability, and comorbidity. Using predefined selection criteria, two authors independently screened all Cochrane reviews published in the Cochrane Database of Systematic Reviews (CDSR) between August 2017 and December 2022. 
PROGRESS-Plus factors in the SRs were sought in the Summary of Findings (SoF) table, Methods/Inclusion criteria, Methods/Subgroup analyses, Results/Included studies, Results/Subgroup analyses, and Discussion/Overall completeness and applicability of evidence.</div></div><div><h3>Results</h3><div>We included 36 SRs published by 10 Cochrane groups, addressing 11 health conditions with mostly dietary and exercise interventions. The most common PROGRESS-Plus factors assessed were gender/sex, age, and comorbidity. PROGRESS-Plus factors were most addressed in the inclusion criteria (64%), the discussion (75%), and the included studies (92%) sections of the SRs. Only 33% of SoF tables referenced PROGRESS-Plus. Sixty-nine percent of the included SRs planned for subgroup analyses across one or more PROGRESS-Plus factors, but only 43% of SRs conducted subgroup analyses, suggesting limited reporting of PROGRESS-Plus factors in primary studies.</div></div><div><h3>Conclusion</h3><div>Equity factors are not sufficiently addressed in Cochrane reviews of lifestyle interventions for cardiovascular health. Low reporting of PROGRESS-Plus factors in implications for practice and research sections of Cochrane SRs limits equity-focused guidance for current clinical practice, public health interventions, and future research.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"176 ","pages":"Article 111546"},"PeriodicalIF":7.3,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142331897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The performance of prognostic models depended on the choice of missing value imputation algorithm: a simulation study","authors":"Manja Deforth , Georg Heinze , Ulrike Held","doi":"10.1016/j.jclinepi.2024.111539","DOIUrl":"10.1016/j.jclinepi.2024.111539","url":null,"abstract":"<div><h3>Objectives</h3><div>The development of clinical prediction models is often impeded by the occurrence of missing values in the predictors. Various methods for imputing missing values before modeling have been proposed. Some of them are based on variants of multiple imputations by chained equations, while others are based on single imputation. These methods may include elements of flexible modeling or machine learning algorithms, and for some of them user-friendly software packages are available. The aim of this study was to investigate by simulation if some of these methods consistently outperform others in performance measures of clinical prediction models.</div></div><div><h3>Study Design and Setting</h3><div>We simulated development and validation cohorts by mimicking observed distributions of predictors and outcome variable of a real data set. In the development cohorts, missing predictor values were created in 36 scenarios defined by the missingness mechanism and proportion of noncomplete cases. We applied three imputation algorithms that were available in R software (R Foundation for Statistical Computing, Vienna, Austria): mice, aregImpute, and missForest. These algorithms differed in their use of linear or flexible models, or random forests, the way of sampling from the predictive posterior distribution, and the generation of a single or multiple imputed data set. For multiple imputation, we also investigated the impact of the number of imputations. Logistic regression models were fitted with the simulated development cohorts before (full data analysis) and after missing value generation (complete case analysis), and with the imputed data. 
Prognostic model performance was measured by the scaled Brier score, <em>c</em>-statistic, calibration intercept and slope, and by the mean absolute prediction error evaluated in validation cohorts without missing values. Performance of full data analysis was considered as ideal.</div></div><div><h3>Results</h3><div>None of the imputation methods achieved the model's predictive accuracy that would be obtained in case of no missingness. In general, complete case analysis yielded the worst performance, and deviation from ideal performance increased with increasing percentage of missingness and decreasing sample size. Across all scenarios and performance measures, aregImpute and mice, both with 100 imputations, resulted in highest predictive accuracy. Surprisingly, aregImpute outperformed full data analysis in achieving calibration slopes very close to one across all scenarios and outcome models. The increase of mice's performance with 100 compared to five imputations was only marginal. The differences between the imputation methods decreased with increasing sample sizes and decreasing proportion of noncomplete cases.</div></div><div><h3>Conclusion</h3><div>In our simulation study, model calibration was more affected by the choice of the imputation method than model discrimination. While difference","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"176 ","pages":"Article 111539"},"PeriodicalIF":7.3,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142331900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
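The imputation study above scores the downstream prognostic models with the scaled Brier score and the c-statistic. As a quick reference, here is a minimal pure-Python sketch of those two performance measures on illustrative toy data; this is not code from the paper, whose simulations used the R packages mice, aregImpute, and missForest.

```python
def brier_score(y, p):
    """Mean squared difference between 0/1 outcomes and predicted probabilities."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

def scaled_brier_score(y, p):
    """1 - Brier/Brier_null, where the null model predicts the event prevalence
    for everyone; 1 is a perfect model, 0 is no better than the null model."""
    prev = sum(y) / len(y)
    b_null = brier_score(y, [prev] * len(y))
    return 1 - brier_score(y, p) / b_null

def c_statistic(y, p):
    """Probability that a randomly chosen event receives a higher predicted
    probability than a randomly chosen non-event (ties count 1/2)."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e in events for n in nonevents)
    return concordant / (len(events) * len(nonevents))

# Toy outcomes and predicted probabilities (hypothetical numbers).
y = [1, 0, 1, 1, 0, 0, 1, 0]
p = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]
print(brier_score(y, p), scaled_brier_score(y, p), c_statistic(y, p))
```

The scaled Brier score benchmarks the model against always predicting the event prevalence, and the c-statistic equals the area under the ROC curve; the paper's other measures (calibration intercept and slope) require fitting a recalibration model and are omitted here.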
{"title":"Simultaneous evaluation of the imprecision and inconsistency domains of GRADE can be performed using prediction intervals","authors":"M. Hassan Murad , Rebecca L. Morgan , Yngve Falck-Ytter , Reem A. Mustafa , Shahnaz Sultan , Philipp Dahm , Madelin R. Siedler , Osama Altayar , Perica Davitkov , Syed Arsalan Ahmed Naqvi , Irbaz Bin Riaz , Zhen Wang , Lifeng Lin","doi":"10.1016/j.jclinepi.2024.111543","DOIUrl":"10.1016/j.jclinepi.2024.111543","url":null,"abstract":"<div><h3>Objectives</h3><div>To explore the use of prediction interval (PI) for the simultaneous evaluation of the imprecision and inconsistency domains of Grading of Recommendations, Assessment, and Evaluation using stakeholder-provided decision thresholds.</div></div><div><h3>Study Design and Setting</h3><div>We propose transforming the PI of a meta-analysis from a relative risk scale to an absolute risk difference using an appropriate baseline risk. The transformed PI is compared to stakeholder-provided thresholds on an absolute scale. We applied this approach to a large convenience sample of meta-analyses extracted from the Cochrane Database of Systematic Reviews and compared it against the traditional approach of rating imprecision and inconsistency separately using confidence intervals and statistical measures of heterogeneity, respectively. We used empirically derived thresholds following Grading of Recommendations, Assessment, and Evaluation guidance.</div></div><div><h3>Results</h3><div>The convenience sample consisted of 2516 meta-analyses (median of 7 studies per meta-analysis; interquartile range: 5–11). The main analysis showed the percentage of meta-analyses in which both approaches had the same number of certainty levels rated down was 59%. The PI approach led to more levels of rating down (lower certainty) in 27% and to fewer levels of rating down (higher certainty) in 14%. 
Multiple sensitivity analyses using different thresholds showed similar results, but the PI approach had particularly increased width with a larger number of included studies and higher I<sup>2</sup> values.</div></div><div><h3>Conclusion</h3><div>Using the PI for simultaneous evaluation of imprecision and inconsistency seems feasible and logical but can lead to lower certainty ratings. The PI-based approach requires further testing in future systematic reviews and guidelines using context-specific thresholds and evidence-to-decision criteria.</div></div><div><h3>Plain Language Summary</h3><div>The prediction interval (PI) addresses both the imprecision and inconsistency domains of certainty. In this study, we applied this PI approach to simultaneously judge both domains and compared this to the traditional approach of making these separate judgments. The 2 approaches had moderate agreement. The PI-based approach requires further testing in future systematic reviews and guidelines using context-specific thresholds and evidence-to-decision criteria.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111543"},"PeriodicalIF":7.3,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142331899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
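The abstract above proposes transforming a meta-analytic prediction interval from the relative-risk scale to an absolute risk difference via a baseline risk, then comparing it to decision thresholds. A sketch of that pipeline under stated assumptions: study data are entirely hypothetical, pooling uses the standard DerSimonian-Laird estimator, and the prediction interval uses the common Higgins-style formula with a t distribution on k - 2 degrees of freedom (the paper does not publish its code, so this is illustrative only).

```python
import math
from scipy.stats import norm, t

# Hypothetical study-level log relative risks and within-study variances.
ys = [-0.5, -0.1, -0.4, 0.2, -0.3]
vs = [0.04, 0.05, 0.03, 0.06, 0.04]
k = len(ys)

# DerSimonian-Laird random-effects pooling.
w = [1 / v for v in vs]
ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
ws = [1 / (v + tau2) for v in vs]
mu = sum(wi * yi for wi, yi in zip(ws, ys)) / sum(ws)
se = math.sqrt(1 / sum(ws))

# 95% confidence interval (imprecision only) vs 95% prediction interval
# (imprecision plus between-study heterogeneity; t with k - 2 df).
ci = (mu - norm.ppf(0.975) * se, mu + norm.ppf(0.975) * se)
half = t.ppf(0.975, k - 2) * math.sqrt(tau2 + se**2)
pi95 = (mu - half, mu + half)

# Transform bounds from the log-RR scale to an absolute risk difference per
# 1000 patients, given an assumed baseline risk (the stakeholder-facing scale).
baseline = 0.10
def to_rd(logrr):
    return 1000 * baseline * (math.exp(logrr) - 1)

print([round(to_rd(b), 1) for b in ci], [round(to_rd(b), 1) for b in pi95])
```

Whenever between-study heterogeneity is present (tau2 > 0), the prediction interval is strictly wider than the confidence interval, which is consistent with the finding that the PI-based approach tends to yield lower certainty ratings.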
{"title":"A cross-sectional study assessing visual abstracts of randomized trials revealed inadequate reporting and high prevalence of spin","authors":"Melissa Duran , Isabelle Boutron , Sally Hopewell , Hillary Bonnet , Stephanie Sidorkiewicz","doi":"10.1016/j.jclinepi.2024.111544","DOIUrl":"10.1016/j.jclinepi.2024.111544","url":null,"abstract":"<div><h3>Objectives</h3><div>Visual abstracts (VAs) lack study-specific reporting guidelines and are increasingly used as stand-alone sources in medical research dissemination although not designed for this purpose. Therefore, our objectives were to describe 1) completeness of reporting in VAs and corresponding written abstracts (WAs) of randomized controlled trials (RCTs), and 2) the extent and type of spin (ie, any reporting pattern that could distort result interpretation and mislead readers) in VAs and WAs of RCTs with a statistically nonsignificant primary outcome.</div></div><div><h3>Study Design and Setting</h3><div>We conducted a cross-sectional study evaluating VAs and WAs of RCTs published between January 1, 2021, and March 3, 2023. We searched MEDLINE via PubMed for reports of RCTs published in the 15 highest impact factor journals from six medical fields (among which 34 journals producing VAs of RCTs were identified). One reviewer identified primary reports of RCTs published with a VA and randomly selected a maximum of 10 reports from each journal to avoid overrepresentation. The completeness of reporting assessment was based on the Consolidated Standards of Reporting Trials extension for abstracts. Spin was explored using a standardized spin classification for RCTs with statistically nonsignificant primary outcome results. Both assessments were conducted in duplicate, with discussion until consensus in case of discrepancy.</div></div><div><h3>Results</h3><div>A random sample of 253 reports from 34 journals was identified. 
The information provided in VAs was frequently incomplete: primary outcome identification, primary outcome results, and harms were respectively described or displayed in only 47% (<em>n</em> = 116/247), 30% (<em>n</em> = 75/247), and 35% (<em>n</em> = 88/253). Reporting was slightly better for some items in WAs, although still unsatisfactory. Among trials with nonsignificant primary outcome results (<em>n</em> = 101), 57% (<em>n</em> = 58) of the VAs and 55% (<em>n</em> = 56) of the WAs exhibited at least 1 type of spin. Posthoc analyses showed VAs produced by journal editors of high-impact general medical journals were more complete and more accurate than those produced by specialty journals or authors.</div></div><div><h3>Conclusion</h3><div>The information conveyed in VAs was frequently incomplete and inaccurate, highlighting the urgent need to refer to appropriate specific reporting guidelines to avoid misinterpretation by readers.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"176 ","pages":"Article 111544"},"PeriodicalIF":7.3,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142331896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a Canadian Guidance for reporting real-world evidence for regulatory and health-technology assessment (HTA) decision-making","authors":"Mina Tadrous , Theresa Aves , Christine Fahim , Jessica Riad , Nicole Mittmann , Daniel Prieto-Alhambra , Donna R. Rivera , Kelvin Chan , Lisa M. Lix , Seamus Kent , Dalia Dawoud , Jason Robert Guertin , James Ted McDonald , Jeff Round , Scott Klarenbach , Sanja Stanojevic , Mary A. De Vera , Erin Strumpf , Robert W. Platt , Farah Husein , Kaleen N. Hayes","doi":"10.1016/j.jclinepi.2024.111545","DOIUrl":"10.1016/j.jclinepi.2024.111545","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Real-world evidence (RWE) can complement and fill knowledge gaps from randomized controlled trials to assist in health-technology assessment (HTA) for regulatory decision-making. However, the generation of RWE is an intricate process with many sequential decision points, and different methods and approaches may impact the quality and reliability of evidence. Standardization and transparency in reporting these decisions is imperative to appraise RWE and incorporate it into HTA decision-making. 
A partnership between Canadian health system stakeholders, namely, Health Canada and Canada’s Drug Agency (formerly the Canadian Agency for Drugs and Technologies in Health), was established to develop guidance for the standardization of reporting of RWE for regulatory and HTA decision-making in Canada.</div></div><div><h3>Study Design and Setting</h3><div>A collaborative initiative to create structured guidance for RWE reporting in the context of regulatory and HTA decision-making.</div></div><div><h3>Results</h3><div>The developed guidance aims to standardize and ensure transparent reporting of RWE to improve its reliability and usefulness in regulatory and HTA processes.</div></div><div><h3>Conclusion</h3><div>This guidance can be adapted for other jurisdictions and will have future extensions to incorporate emerging issues with RWE and HTA decision-making.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"176 ","pages":"Article 111545"},"PeriodicalIF":7.3,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142331898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A meta-epidemiological analysis of post-hoc comparisons and primary endpoint interpretability among randomized noncomparative trials in clinical medicine","authors":"Alexander D. Sherry , Pavlos Msaouel , Ethan B. Ludmir","doi":"10.1016/j.jclinepi.2024.111540","DOIUrl":"10.1016/j.jclinepi.2024.111540","url":null,"abstract":"<div><h3>Objectives</h3><div>Randomized noncomparative trials (RNCTs) promise reduced accrual requirements vs randomized controlled comparative trials because RNCTs do not enroll a control group and instead compare outcomes to historical controls or prespecified estimates. We hypothesized that RNCTs often suffer from two methodological concerns: (1) lack of interpretability due to group-specific inferences in nonrandomly selected samples and (2) misinterpretation due to unlicensed between-group comparisons lacking prespecification. The purpose of this study was to characterize RNCTs and the incidence of these two methodological concerns.</div></div><div><h3>Study Design and Setting</h3><div>We queried PubMed and Web of Science on September 14, 2023, to conduct a meta-epidemiological analysis of published RNCTs in any field of medicine. Trial characteristics and the incidence of methodological concerns were manually recorded.</div></div><div><h3>Results</h3><div>We identified 70 RNCTs published from 2002 to 2023. RNCTs have been increasingly published over time (slope = 0.28, 95% CI 0.17–0.39, P < .001). Sixty trials (60/70, 86%) had a lack of interpretability for the primary endpoint due to group-specific inferences. Unlicensed between-group comparisons were present in 36 trials (36/70, 51%), including in the primary conclusion of 31 trials (31/70, 44%), and were accompanied by significance testing in 20 trials (20/70, 29%). 
Only five (5/70, 7%) trials were found to have neither of these flaws.</div></div><div><h3>Conclusion</h3><div>Although RNCTs are increasingly published over time, the primary analysis of nearly all published RNCTs in the medical literature was unsupported by their fundamental underlying methodological assumptions. RNCTs promise group-specific inference, which they are unable to deliver, and undermine the primary advantage of randomization, which is comparative inference. The ongoing use of the RNCT design in lieu of a traditional randomized controlled comparative trial should therefore be reconsidered.</div></div><div><h3>Plain Language Summary</h3><div>The typical way that doctors can learn whether new drugs are helpful is through a clinical trial. Often, doctors compare these new treatments to the control treatment being used in standard clinical practice. When researchers want to compare different treatments, they may decide to randomly assign one treatment or the other to trial participants. Like flipping a coin, randomly deciding which treatment to use can help researchers make the best comparisons between the new and control treatment by limiting certain biases. These trials are called “randomized comparative trials” and are the most common way researchers can improve medicine. A newer type of trial, called a “randomized noncomparative trial,” has become increasingly popular in medicine. Like randomized comparative trials, this type of trial randomly decides which treatment participants receive. 
However, the “random","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111540"},"PeriodicalIF":7.3,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142309054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resource use and costs of investigator-sponsored randomized clinical trials in Switzerland, Germany, and the United Kingdom: a metaresearch study","authors":"Alexandra Griessbach , Benjamin Speich , Alain Amstutz , Lena Hausheer , Manuela Covino , Hillary Wnfried Ramirez , Stefan Schandelmaier , Ala Taji Heravi , Shaun Treweek , Matthias Schwenkglenks , Matthias Briel","doi":"10.1016/j.jclinepi.2024.111536","DOIUrl":"10.1016/j.jclinepi.2024.111536","url":null,"abstract":"<div><h3>Background and Objectives</h3><div>Conducting high-quality randomized clinical trials (RCTs) is challenging and resource intensive. Funders and academic investigators depend on limited financial resources and, therefore, need empirical data for optimal budget planning. However, current literature lacks detailed empirical data on resource use and costs of investigator-sponsored RCTs. The aim of this study is to systematically collect cost data from investigator-sponsored RCTs from Switzerland, Germany, and the United Kingdom (UK).</div></div><div><h3>Methods</h3><div>Principal investigators were asked to share their RCT cost and resource use data and enter it into an online case report form. We assessed cost patterns, cost drivers, and specific cost items, examined costs by study phase (planning-, conduct-, and finalization phase), compared planned with actual RCT costs, and explored differences in cost patterns across countries, medical fields, and intervention types.</div></div><div><h3>Results</h3><div>We included 93 RCTs which were initiated in Switzerland (<em>n</em> = 53; including eight conducted in low- and lower middle-income countries), Germany (<em>n</em> = 22), and the UK (<em>n</em> = 18). The median total trial cost in our RCT sample was $645,824 [interquartile range (IQR), $269,846–$1,577,924]. 
The median proportion of the total costs spent for planning phase was 27.5% [IQR, 20.6%–39.7%], for conduct phase 57.3% [IQR, 44.4%–66.3%], and for finalization phase 12.7% [IQR, 8.5%–19.3%] with little variation across countries. The items that contributed most to the total costs were protocol writing (7.2%; IQR 3.8%–10.6%), data management (5.0%; IQR 2.2%–8.1%) and follow-up (4.5%; IQR 2.3%–8.4%). Of the 66 RCTs with an available original budget, 46 (69.7%) exceeded the budget by over 50%. Use of routinely collected data to assess primary outcomes was independently associated with lower per patient- and lower total trial costs.</div></div><div><h3>Conclusion</h3><div>Over a quarter of total trial costs were incurred in the planning phase, which is typically not fully funded. Two-thirds of RCTs exceeded their budget by more than 50%. Investigators and funders should consider empirical cost data to improve budgeting and funding practices.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"176 ","pages":"Article 111536"},"PeriodicalIF":7.3,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poor sample size reporting quality and insufficient sample size in economic evaluations conducted alongside pragmatic trials: a cross-sectional survey","authors":"Changjin Wu , Jun Hao , Yu Xin , Ruomeng Song , Wentan Li , Ling Zuo , Xiyan Zhang , Yuanyi Cai , Huazhang Wu , Wen Hui","doi":"10.1016/j.jclinepi.2024.111535","DOIUrl":"10.1016/j.jclinepi.2024.111535","url":null,"abstract":"<div><h3>Objectives</h3><div>Economic evaluations based on well-designed and -conducted pragmatic randomized controlled trials (pRCTs) can provide valuable evidence on the cost-effectiveness of interventions, enhancing the relevance and applicability of findings to healthcare decision-making. However, economic evaluation outcomes are seldom taken into consideration during the process of sample size calculation in pragmatic trials. The reporting quality of sample size and information on its calculation in economic evaluations that are well-suited to pRCTs remain unknown. This study aims to assess the reporting quality of sample size and estimate the power values of economic evaluations in pRCTs.</div></div><div><h3>Study Design and Setting</h3><div>We conducted a cross-sectional survey using data of pRCTs available from PubMed and OVID from 1 January 2010 to 24 April 2022. Two groups of independent reviewers identified articles; three groups of reviewers each extracted the data. Descriptive statistics presented the general characteristics of included studies. Statistical power analyses were performed on clinical and economic outcomes with sufficient data.</div></div><div><h3>Results</h3><div>The electronic search identified 715 studies and 152 met the inclusion criteria. Of these, 26 were available for power analysis. Only 9 out of 152 trials (5.9%) considered economic outcomes when estimating sample size, and only one adjusted the sample size accordingly. 
Power values for trial-based economic evaluations and clinical trials ranged from 2.56% to 100% and 3.21%–100%, respectively. Regardless of the perspectives, in 14 out of the 26 studies (53.8%), the power values of economic evaluations for quality-adjusted life years (QALYs) were lower than those of clinical trials for primary endpoints (PEs). In 11 out of the 24 (45.8%) and in 8 out of the 13 (61.5%) studies, power values of economic evaluations for QALYs were lower than those of clinical trials for PEs from the healthcare and societal perspectives, respectively. Power values of economic evaluations for non-QALYs from the healthcare and societal perspectives were potentially higher than those of clinical trials in 3 out of the 4 studies (75%). The power values for economic outcomes in Q1 were not higher than those for other journal impact factor quartile categories.</div></div><div><h3>Conclusion</h3><div>Theoretically, pragmatic trials with concurrent economic evaluations can provide real-world evidence for healthcare decision makers. However, in pRCT-based economic evaluations, limited consideration, and inadequate reporting of sample-size calculations for economic outcomes could negatively affect the results’ reliability and generalisability. 
We thus recommend that future pragmatic trials with economic evaluations should report how sample sizes are determined or adjusted based on the economic outcomes in their protocols to enhance their transparency and evidence quality.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"176 ","pages":"Article 111535"},"PeriodicalIF":7.3,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
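The survey above finds that trials sized for their clinical endpoint are frequently underpowered for the economic outcome. A toy illustration of why, using the textbook normal-approximation power formula for a two-sided, two-sample comparison of means; the effect sizes and sample size are hypothetical, and this is a planning sketch, not the authors' method.

```python
import math

def normal_power(delta, sd, n_per_arm, alpha_z=1.959963984540054):
    """Approximate power of a two-sided two-sample comparison of means:
    power = Phi(delta/SE - z_{alpha/2}), with SE = sd * sqrt(2/n)."""
    se = sd * math.sqrt(2 / n_per_arm)
    z = delta / se - alpha_z
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # Phi(z)

# A trial sized for a fairly large standardized clinical effect...
power_clinical = normal_power(delta=0.5, sd=1.0, n_per_arm=100)
# ...can be badly underpowered for a small QALY difference at the same n.
power_qaly = normal_power(delta=0.05, sd=0.30, n_per_arm=100)
print(round(power_clinical, 2), round(power_qaly, 2))
```

With the same 100 patients per arm, the standardized QALY effect (0.05/0.30 ≈ 0.17) yields power far below the conventional 80%, mirroring the low power values the survey reports for trial-based economic evaluations.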
{"title":"Network meta-analysis: a powerful tool for clinicians, decision-makers, and methodologists","authors":"Ivan D. Florez , Juan E. De La Cruz-Mena , Areti-Angeliki Veroniki","doi":"10.1016/j.jclinepi.2024.111537","DOIUrl":"10.1016/j.jclinepi.2024.111537","url":null,"abstract":"<div><div>Network meta-analysis (NMA) is an advanced statistical method that combines direct evidence (ie, from head-to-head comparisons) and indirect evidence (ie, estimated from the available direct evidence) to obtain network estimates. NMAs are helpful for determining the comparative effectiveness of interventions that have not been directly compared and may provide more precise estimates for comparisons that have been directly compared. NMA also provides hierarchies, which can support decision-making, especially when multiple interventions exist for the same indication. In this article we summarize the key concepts that users, namely clinicians and methodologists, need to consider when using an NMA to inform decision-making.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"176 ","pages":"Article 111537"},"PeriodicalIF":7.3,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
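The NMA overview above rests on combining direct and indirect evidence. The simplest building block is a Bucher adjusted indirect comparison followed by fixed-effect inverse-variance pooling, sketched below with entirely hypothetical log-odds-ratio estimates; real NMAs jointly model the whole network and must assess consistency between the two evidence sources.

```python
import math

# Hypothetical direct estimates (log odds ratios) and their variances.
d_ab, v_ab = -0.30, 0.02          # A vs B, from A-B trials
d_bc, v_bc = -0.20, 0.03          # B vs C, from B-C trials
d_ac_dir, v_ac_dir = -0.55, 0.05  # A vs C, from a direct A-C trial

# Bucher adjusted indirect comparison: along the path A -> B -> C, effects add
# on the log scale, and so do their variances.
d_ac_ind = d_ab + d_bc
v_ac_ind = v_ab + v_bc

# Network estimate: inverse-variance combination of direct and indirect
# evidence, valid under the consistency assumption (no conflict between them).
w_dir, w_ind = 1 / v_ac_dir, 1 / v_ac_ind
d_net = (w_dir * d_ac_dir + w_ind * d_ac_ind) / (w_dir + w_ind)
se_net = math.sqrt(1 / (w_dir + w_ind))
print(round(d_net, 3), round(se_net, 3))
```

Because the network estimate pools two sources of evidence, its standard error is smaller than that of either source alone, which is how an NMA can sharpen even contrasts that have been directly compared.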