Estimating treatment effects from a randomized controlled trial with mid-trial design changes.

IF 2.2 · CAS Tier 3 (Medicine) · Q3 MEDICINE, RESEARCH & EXPERIMENTAL
Clinical Trials · Pub Date: 2025-04-01 · Epub Date: 2024-12-30 · DOI: 10.1177/17407745241304120
Sudeshna Paul, Jaeun Choi, Mi-Kyung Song
{"title":"从随机对照试验中评估试验设计改变的治疗效果。","authors":"Sudeshna Paul, Jaeun Choi, Mi-Kyung Song","doi":"10.1177/17407745241304120","DOIUrl":null,"url":null,"abstract":"<p><p>BackgroundIn randomized controlled trials (RCTs), unplanned design modifications due to unexpected circumstances are seldom reported. Naively lumping data from pre- and post-design changes to estimate the size of the treatment effect, as planned in the original study, can introduce systematic bias and limit interpretability of the trial findings. There has been limited discussion on how to estimate the treatment effect when an RCT undergoes major design changes during the trial. Using our recently completed RCT, which underwent multiple design changes, as an example, we examined the statistical implications of design changes on the treatment effect estimates.MethodsOur example RCT aimed to test an advance care planning intervention targeting dementia patients and their surrogate decision-makers compared to usual care. The original trial underwent two major mid-trial design changes resulting in three smaller studies. The changes included altering the number of study arms and adding new recruitment sites, thus perturbing the initial statistical assumptions. We used a simulation study to mimic these design modifications in our RCT, generate independent patient-level data and evaluate naïve lumping of data, a two-stage fixed-effect and random-effect meta-analysis model to obtain an average effect size estimate from all studies. Standardized mean-difference and odds-ratio estimates at post-intervention were used as effect sizes for continuous and binary outcomes, respectively. The performance of the estimates from different methods were compared by studying their statistical properties (e.g. bias, mean squared error, and coverage probability of 95% confidence intervals).ResultsWhen between-design heterogeneity is negligible, the fixed- and random-effect meta-analysis models yielded accurate and precise effect-size estimates for both continuous and binary data. As between-design heterogeneity increased, the estimates from random meta-analysis methods indicated less bias and higher coverage probability compared to the naïve and fixed-effect methods, however the mean squared error was higher indicating greater uncertainty arising from a small number of studies. The between-study heterogeneity parameter was not precisely estimable due to fewer studies. With increasing sample sizes within each study, the effect-size estimates showed improved precision and statistical power.ConclusionsWhen a trial undergoes unplanned major design changes, the statistical approach to estimate the treatment effect needs to be determined carefully. Naïve lumping of data across designs is not appropriate even when the overall goal of the trial remains unchanged. Understanding the implications of the different aspects of design changes and accounting for them in the analysis of the data are essential for internal validity and reporting of the trial findings. 
Importantly, investigators must disclose the design changes clearly in their study reports.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":"22 2","pages":"209-219"},"PeriodicalIF":2.2000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996067/pdf/","citationCount":"0","resultStr":"{\"title\":\"Estimating treatment effects from a randomized controlled trial with mid-trial design changes.\",\"authors\":\"Sudeshna Paul, Jaeun Choi, Mi-Kyung Song\",\"doi\":\"10.1177/17407745241304120\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>BackgroundIn randomized controlled trials (RCTs), unplanned design modifications due to unexpected circumstances are seldom reported. Naively lumping data from pre- and post-design changes to estimate the size of the treatment effect, as planned in the original study, can introduce systematic bias and limit interpretability of the trial findings. There has been limited discussion on how to estimate the treatment effect when an RCT undergoes major design changes during the trial. Using our recently completed RCT, which underwent multiple design changes, as an example, we examined the statistical implications of design changes on the treatment effect estimates.MethodsOur example RCT aimed to test an advance care planning intervention targeting dementia patients and their surrogate decision-makers compared to usual care. The original trial underwent two major mid-trial design changes resulting in three smaller studies. The changes included altering the number of study arms and adding new recruitment sites, thus perturbing the initial statistical assumptions. We used a simulation study to mimic these design modifications in our RCT, generate independent patient-level data and evaluate naïve lumping of data, a two-stage fixed-effect and random-effect meta-analysis model to obtain an average effect size estimate from all studies. Standardized mean-difference and odds-ratio estimates at post-intervention were used as effect sizes for continuous and binary outcomes, respectively. The performance of the estimates from different methods were compared by studying their statistical properties (e.g. bias, mean squared error, and coverage probability of 95% confidence intervals).ResultsWhen between-design heterogeneity is negligible, the fixed- and random-effect meta-analysis models yielded accurate and precise effect-size estimates for both continuous and binary data. As between-design heterogeneity increased, the estimates from random meta-analysis methods indicated less bias and higher coverage probability compared to the naïve and fixed-effect methods, however the mean squared error was higher indicating greater uncertainty arising from a small number of studies. The between-study heterogeneity parameter was not precisely estimable due to fewer studies. With increasing sample sizes within each study, the effect-size estimates showed improved precision and statistical power.ConclusionsWhen a trial undergoes unplanned major design changes, the statistical approach to estimate the treatment effect needs to be determined carefully. Naïve lumping of data across designs is not appropriate even when the overall goal of the trial remains unchanged. 
Understanding the implications of the different aspects of design changes and accounting for them in the analysis of the data are essential for internal validity and reporting of the trial findings. Importantly, investigators must disclose the design changes clearly in their study reports.</p>\",\"PeriodicalId\":10685,\"journal\":{\"name\":\"Clinical Trials\",\"volume\":\"22 2\",\"pages\":\"209-219\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996067/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical Trials\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/17407745241304120\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/12/30 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"MEDICINE, RESEARCH & EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Trials","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/17407745241304120","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/12/30 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract


Background: In randomized controlled trials (RCTs), unplanned design modifications due to unexpected circumstances are seldom reported. Naïvely lumping data from before and after design changes to estimate the size of the treatment effect, as planned in the original study, can introduce systematic bias and limit the interpretability of the trial findings. There has been limited discussion of how to estimate the treatment effect when an RCT undergoes major design changes during the trial. Using our recently completed RCT, which underwent multiple design changes, as an example, we examined the statistical implications of design changes for treatment effect estimates.

Methods: Our example RCT aimed to test an advance care planning intervention targeting dementia patients and their surrogate decision-makers, compared with usual care. The original trial underwent two major mid-trial design changes, resulting in three smaller studies. The changes included altering the number of study arms and adding new recruitment sites, thus perturbing the initial statistical assumptions. We used a simulation study to mimic these design modifications in our RCT, generate independent patient-level data, and evaluate three approaches to obtaining an average effect-size estimate across all studies: naïve lumping of the data, and two-stage fixed-effect and random-effect meta-analysis models. Standardized mean-difference and odds-ratio estimates at post-intervention were used as effect sizes for continuous and binary outcomes, respectively. The performance of the estimates from the different methods was compared by studying their statistical properties (e.g. bias, mean squared error, and coverage probability of 95% confidence intervals).

Results: When between-design heterogeneity was negligible, the fixed-effect and random-effect meta-analysis models yielded accurate and precise effect-size estimates for both continuous and binary data. As between-design heterogeneity increased, the estimates from the random-effect meta-analysis method showed less bias and higher coverage probability than the naïve and fixed-effect methods; however, the mean squared error was higher, indicating greater uncertainty arising from the small number of studies. The between-study heterogeneity parameter was not precisely estimable because of the small number of studies. With increasing sample sizes within each study, the effect-size estimates showed improved precision and statistical power.

Conclusions: When a trial undergoes unplanned major design changes, the statistical approach to estimating the treatment effect needs to be determined carefully. Naïve lumping of data across designs is not appropriate even when the overall goal of the trial remains unchanged. Understanding the implications of the different aspects of design changes, and accounting for them in the analysis of the data, is essential for the internal validity and reporting of the trial findings. Importantly, investigators must disclose the design changes clearly in their study reports.
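The two-stage approach described in Methods can be made concrete with a short sketch. The example below is hypothetical (it is not the authors' code, and the sub-study sizes and effects are invented): stage one computes a standardized mean difference (Hedges' g) for each sub-study, and stage two pools the estimates with a fixed-effect (inverse-variance) model and a DerSimonian-Laird random-effect model, the latter producing the between-study heterogeneity estimate tau² that the abstract notes is hard to estimate from only three studies.

```python
import numpy as np

def smd_and_variance(treat, control):
    """Hedges' g (bias-corrected standardized mean difference) and its variance."""
    n1, n2 = len(treat), len(control)
    sp2 = ((n1 - 1) * treat.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    d = (treat.mean() - control.mean()) / np.sqrt(sp2)
    g = d * (1 - 3 / (4 * (n1 + n2) - 9))       # small-sample correction
    var_g = (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))
    return g, var_g

def pool(effects, variances):
    """Stage two: fixed-effect and DerSimonian-Laird random-effect pooling."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1 / v                                   # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)     # between-study heterogeneity tau^2
    w_re = 1 / (v + tau2)                       # random-effect weights
    return fixed, np.sum(w_re * y) / np.sum(w_re), tau2

rng = np.random.default_rng(1)
# Three hypothetical sub-studies produced by the two mid-trial design changes;
# each tuple is (per-arm sample size, true standardized effect).
studies = [(60, 0.4), (45, 0.3), (80, 0.5)]
ests = [smd_and_variance(rng.normal(mu, 1, n), rng.normal(0, 1, n))
        for n, mu in studies]
fixed, random_eff, tau2 = pool([g for g, _ in ests], [v for _, v in ests])
print(f"fixed={fixed:.3f}  random={random_eff:.3f}  tau2={tau2:.3f}")
```

The naïve alternative the abstract warns against would instead concatenate all patients across the three sub-studies and compute a single effect size, ignoring that the underlying designs differ.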

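The performance metrics reported in Results can be sketched in the same spirit. Assuming the `pool` function from the previous block is in scope, and using entirely hypothetical within-study variances and heterogeneity settings, the loop below simulates sub-study estimates under a chosen level of between-design heterogeneity and records the bias, mean squared error, and 95% confidence-interval coverage of the random-effect estimate:

```python
import numpy as np

def evaluate(true_effect=0.4, tau=0.2, n_reps=2000, seed=7):
    """Bias, MSE, and 95% CI coverage of the random-effect estimate over
    simulated replicates; reuses pool() from the sketch above."""
    rng = np.random.default_rng(seed)
    v = np.array([0.05, 0.06, 0.04])             # hypothetical within-study variances
    ests, covered = [], 0
    for _ in range(n_reps):
        theta = rng.normal(true_effect, tau, 3)  # study-level true effects
        y = rng.normal(theta, np.sqrt(v))        # observed sub-study effect sizes
        _, re_est, tau2 = pool(y, v)
        se = np.sqrt(1 / np.sum(1 / (v + tau2))) # SE of the pooled estimate
        ests.append(re_est)
        covered += abs(re_est - true_effect) <= 1.96 * se
    ests = np.array(ests)
    return {"bias": ests.mean() - true_effect,
            "mse": np.mean((ests - true_effect) ** 2),
            "coverage": covered / n_reps}

print(evaluate(tau=0.0))   # negligible between-design heterogeneity
print(evaluate(tau=0.3))   # larger between-design heterogeneity
```

Raising `tau` illustrates the setting the abstract describes: with only three studies, tau² is estimated imprecisely, which is consistent with the authors' observation that the heterogeneity parameter was not precisely estimable.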
Source journal
Clinical Trials (Medicine - Medicine: Research & Experimental)
CiteScore: 4.10
Self-citation rate: 3.70%
Articles per year: 82
Review time: 6-12 weeks
Journal description: Clinical Trials is dedicated to advancing knowledge on the design and conduct of clinical trials and related research methodologies. Covering the design, conduct, analysis, synthesis, and evaluation of key methodologies, the journal remains at the forefront of the latest topics, including ethics, regulation, and policy impact.