Adélaïde Doussau, Patrick Kane, Jeffrey Peppercorn, Aden C Feustel, Sylviya Ganeshamoorthy, Natasha Kekre, Daniel M Benjamin, Jonathan Kimmelman
{"title":"The impact of feedback training on prediction of cancer clinical trial results.","authors":"Adélaïde Doussau, Patrick Kane, Jeffrey Peppercorn, Aden C Feustel, Sylviya Ganeshamoorthy, Natasha Kekre, Daniel M Benjamin, Jonathan Kimmelman","doi":"10.1177/17407745231203375","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Funders must make difficult decisions about which squared treatments to prioritize for randomized trials. Earlier research suggests that experts have no ability to predict which treatments will vindicate their promise. We tested whether a brief training module could improve experts' trial predictions.</p><p><strong>Methods: </strong>We randomized a sample of breast cancer and hematology-oncology experts to the presence or absence of a feedback training module where experts predicted outcomes for five recently completed randomized controlled trials and received feedback on accuracy. Experts then predicted primary outcome attainment for a sample of ongoing randomized controlled trials. Prediction skill was assessed by Brier scores, which measure the average deviation between their predictions and actual outcomes. Secondary outcomes were discrimination (ability to distinguish between positive and non-positive trials) and calibration (higher predictions reflecting higher probability of trials being positive).</p><p><strong>Results: </strong>A total of 148 experts (46 for breast cancer, 54 for leukemia, and 48 for lymphoma) were randomized between May and December 2017 and included in the analysis (1217 forecasts for 25 trials). Feedback did not improve prediction skill (mean Brier score for control: 0.22, 95% confidence interval = 0.20-0.24 vs feedback arm: 0.21, 95% confidence interval = 0.20-0.23; p = 0.51). Control and feedback arms showed similar discrimination (area under the curve = 0.70 vs 0.73, p = 0.24) and calibration (calibration index = 0.01 vs 0.01, p = 0.81). However, experts in both arms offered predictions that were significantly more accurate than uninformative forecasts of 50% (Brier score = 0.25).</p><p><strong>Discussion: </strong>A short training module did not improve predictions for cancer trial results. However, expert communities showed unexpected ability to anticipate positive trials.Pre-registration record: https://aspredicted.org/4ka6r.pdf.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"143-151"},"PeriodicalIF":2.2000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11005298/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Trials","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/17407745231203375","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/10/24 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
Abstract
Introduction: Funders must make difficult decisions about which treatments to prioritize for randomized trials. Earlier research suggests that experts are unable to predict which treatments will vindicate their promise. We tested whether a brief training module could improve experts' trial predictions.
Methods: We randomized a sample of breast cancer and hematology-oncology experts to the presence or absence of a feedback training module, in which experts predicted outcomes for five recently completed randomized controlled trials and received feedback on their accuracy. Experts then predicted primary outcome attainment for a sample of ongoing randomized controlled trials. Prediction skill was assessed by Brier scores, which measure the average squared deviation between predictions and actual outcomes. Secondary outcomes were discrimination (the ability to distinguish between positive and non-positive trials) and calibration (higher predictions reflecting a higher probability of trials being positive).
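For readers unfamiliar with these metrics, here is a minimal sketch of how a Brier score and a binned calibration index might be computed. The function names, the ten-bin scheme, and the reliability-style calibration formula are illustrative assumptions for exposition, not the authors' analysis code.

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean squared deviation between probability forecasts (0-1) and
    binary outcomes (0/1). Lower is better; a constant 50% forecast
    scores 0.25 regardless of the outcomes."""
    f = np.asarray(forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((f - o) ** 2))

def calibration_index(forecasts, outcomes, n_bins=10):
    """One common calibration (reliability) index: the weighted mean
    squared gap between the average forecast and the observed event
    rate within probability bins. The paper's exact binning may differ."""
    f = np.asarray(forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    bins = np.minimum((f * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            total += mask.sum() * (f[mask].mean() - o[mask].mean()) ** 2
    return total / len(f)

# Three forecasts for trials that turned out positive (1) or not (0):
print(brier_score([0.8, 0.3, 0.5], [1, 0, 1]))        # ~0.127
print(calibration_index([0.8, 0.3, 0.5], [1, 0, 1]))  # reliability term
```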
Results: A total of 148 experts (46 for breast cancer, 54 for leukemia, and 48 for lymphoma) were randomized between May and December 2017 and included in the analysis (1217 forecasts for 25 trials). Feedback did not improve prediction skill (mean Brier score for control: 0.22, 95% confidence interval = 0.20-0.24 vs feedback arm: 0.21, 95% confidence interval = 0.20-0.23; p = 0.51). Control and feedback arms showed similar discrimination (area under the curve = 0.70 vs 0.73, p = 0.24) and calibration (calibration index = 0.01 vs 0.01, p = 0.81). However, experts in both arms offered predictions that were significantly more accurate than uninformative forecasts of 50% (Brier score = 0.25).
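The 0.25 benchmark for uninformative forecasts follows directly from the Brier score definition: a constant 50% prediction misses every binary outcome by exactly 0.5, whether or not the trial is positive.

```latex
\mathrm{BS}_{50\%} = \frac{1}{N}\sum_{i=1}^{N} \left(0.5 - o_i\right)^2 = 0.5^2 = 0.25,
\qquad o_i \in \{0, 1\}.
```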
Discussion: A short training module did not improve predictions for cancer trial results. However, expert communities showed unexpected ability to anticipate positive trials.
Pre-registration record: https://aspredicted.org/4ka6r.pdf
Journal introduction:
Clinical Trials is dedicated to advancing knowledge on the design and conduct of clinical trials and related research methodologies. Covering the design, conduct, analysis, synthesis, and evaluation of key methodologies, the journal remains on the cusp of the latest topics, including ethics, regulation, and policy impact.