{"title":"从随机对照试验中评估试验设计改变的治疗效果。","authors":"Sudeshna Paul, Jaeun Choi, Mi-Kyung Song","doi":"10.1177/17407745241304120","DOIUrl":null,"url":null,"abstract":"<p><p>BackgroundIn randomized controlled trials (RCTs), unplanned design modifications due to unexpected circumstances are seldom reported. Naively lumping data from pre- and post-design changes to estimate the size of the treatment effect, as planned in the original study, can introduce systematic bias and limit interpretability of the trial findings. There has been limited discussion on how to estimate the treatment effect when an RCT undergoes major design changes during the trial. Using our recently completed RCT, which underwent multiple design changes, as an example, we examined the statistical implications of design changes on the treatment effect estimates.MethodsOur example RCT aimed to test an advance care planning intervention targeting dementia patients and their surrogate decision-makers compared to usual care. The original trial underwent two major mid-trial design changes resulting in three smaller studies. The changes included altering the number of study arms and adding new recruitment sites, thus perturbing the initial statistical assumptions. We used a simulation study to mimic these design modifications in our RCT, generate independent patient-level data and evaluate naïve lumping of data, a two-stage fixed-effect and random-effect meta-analysis model to obtain an average effect size estimate from all studies. Standardized mean-difference and odds-ratio estimates at post-intervention were used as effect sizes for continuous and binary outcomes, respectively. The performance of the estimates from different methods were compared by studying their statistical properties (e.g. bias, mean squared error, and coverage probability of 95% confidence intervals).ResultsWhen between-design heterogeneity is negligible, the fixed- and random-effect meta-analysis models yielded accurate and precise effect-size estimates for both continuous and binary data. As between-design heterogeneity increased, the estimates from random meta-analysis methods indicated less bias and higher coverage probability compared to the naïve and fixed-effect methods, however the mean squared error was higher indicating greater uncertainty arising from a small number of studies. The between-study heterogeneity parameter was not precisely estimable due to fewer studies. With increasing sample sizes within each study, the effect-size estimates showed improved precision and statistical power.ConclusionsWhen a trial undergoes unplanned major design changes, the statistical approach to estimate the treatment effect needs to be determined carefully. Naïve lumping of data across designs is not appropriate even when the overall goal of the trial remains unchanged. Understanding the implications of the different aspects of design changes and accounting for them in the analysis of the data are essential for internal validity and reporting of the trial findings. 
Importantly, investigators must disclose the design changes clearly in their study reports.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":"22 2","pages":"209-219"},"PeriodicalIF":2.2000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996067/pdf/","citationCount":"0","resultStr":"{\"title\":\"Estimating treatment effects from a randomized controlled trial with mid-trial design changes.\",\"authors\":\"Sudeshna Paul, Jaeun Choi, Mi-Kyung Song\",\"doi\":\"10.1177/17407745241304120\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>BackgroundIn randomized controlled trials (RCTs), unplanned design modifications due to unexpected circumstances are seldom reported. Naively lumping data from pre- and post-design changes to estimate the size of the treatment effect, as planned in the original study, can introduce systematic bias and limit interpretability of the trial findings. There has been limited discussion on how to estimate the treatment effect when an RCT undergoes major design changes during the trial. Using our recently completed RCT, which underwent multiple design changes, as an example, we examined the statistical implications of design changes on the treatment effect estimates.MethodsOur example RCT aimed to test an advance care planning intervention targeting dementia patients and their surrogate decision-makers compared to usual care. The original trial underwent two major mid-trial design changes resulting in three smaller studies. The changes included altering the number of study arms and adding new recruitment sites, thus perturbing the initial statistical assumptions. We used a simulation study to mimic these design modifications in our RCT, generate independent patient-level data and evaluate naïve lumping of data, a two-stage fixed-effect and random-effect meta-analysis model to obtain an average effect size estimate from all studies. Standardized mean-difference and odds-ratio estimates at post-intervention were used as effect sizes for continuous and binary outcomes, respectively. The performance of the estimates from different methods were compared by studying their statistical properties (e.g. bias, mean squared error, and coverage probability of 95% confidence intervals).ResultsWhen between-design heterogeneity is negligible, the fixed- and random-effect meta-analysis models yielded accurate and precise effect-size estimates for both continuous and binary data. As between-design heterogeneity increased, the estimates from random meta-analysis methods indicated less bias and higher coverage probability compared to the naïve and fixed-effect methods, however the mean squared error was higher indicating greater uncertainty arising from a small number of studies. The between-study heterogeneity parameter was not precisely estimable due to fewer studies. With increasing sample sizes within each study, the effect-size estimates showed improved precision and statistical power.ConclusionsWhen a trial undergoes unplanned major design changes, the statistical approach to estimate the treatment effect needs to be determined carefully. Naïve lumping of data across designs is not appropriate even when the overall goal of the trial remains unchanged. 
Understanding the implications of the different aspects of design changes and accounting for them in the analysis of the data are essential for internal validity and reporting of the trial findings. Importantly, investigators must disclose the design changes clearly in their study reports.</p>\",\"PeriodicalId\":10685,\"journal\":{\"name\":\"Clinical Trials\",\"volume\":\"22 2\",\"pages\":\"209-219\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996067/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical Trials\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/17407745241304120\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/12/30 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"MEDICINE, RESEARCH & EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Trials","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/17407745241304120","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/12/30 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
Estimating treatment effects from a randomized controlled trial with mid-trial design changes.
Background: In randomized controlled trials (RCTs), unplanned design modifications due to unexpected circumstances are seldom reported. Naïvely lumping data from before and after the design changes to estimate the treatment effect size, as planned in the original study, can introduce systematic bias and limit the interpretability of the trial findings. There has been limited discussion of how to estimate the treatment effect when an RCT undergoes major design changes mid-trial. Using our recently completed RCT, which underwent multiple design changes, as an example, we examined the statistical implications of design changes for the treatment effect estimates.

Methods: Our example RCT aimed to test an advance care planning intervention targeting dementia patients and their surrogate decision-makers, compared with usual care. The original trial underwent two major mid-trial design changes, resulting in three smaller studies. The changes included altering the number of study arms and adding new recruitment sites, thus perturbing the initial statistical assumptions. We used a simulation study to mimic these design modifications in our RCT, generate independent patient-level data, and evaluate three approaches for obtaining an average effect-size estimate across all studies: naïve lumping of the data, and two-stage fixed-effect and random-effect meta-analysis models. Standardized mean-difference and odds-ratio estimates at post-intervention were used as effect sizes for continuous and binary outcomes, respectively. The performance of the estimates from the different methods was compared by examining their statistical properties (e.g. bias, mean squared error, and coverage probability of 95% confidence intervals).

Results: When between-design heterogeneity was negligible, the fixed- and random-effect meta-analysis models yielded accurate and precise effect-size estimates for both continuous and binary data. As between-design heterogeneity increased, the estimates from the random-effect meta-analysis method showed less bias and higher coverage probability than the naïve and fixed-effect methods; however, the mean squared error was higher, indicating greater uncertainty arising from the small number of studies. The between-study heterogeneity parameter was not precisely estimable because of the small number of studies. With increasing sample sizes within each study, the effect-size estimates showed improved precision and statistical power.

Conclusions: When a trial undergoes unplanned major design changes, the statistical approach for estimating the treatment effect needs to be chosen carefully. Naïve lumping of data across designs is not appropriate even when the overall goal of the trial remains unchanged. Understanding the implications of the different aspects of design changes and accounting for them in the analysis of the data are essential for internal validity and reporting of the trial findings. Importantly, investigators must disclose the design changes clearly in their study reports.
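As a rough illustration of the two-stage approach described in the Methods, the sketch below simulates three sub-studies created by mid-trial design changes and compares naïve lumping of patient-level data against two-stage fixed-effect and random-effect (DerSimonian-Laird) meta-analysis of the standardized mean difference for a continuous outcome. The per-arm sample sizes, true effects, and random seed are hypothetical values chosen for illustration; they are not the parameters used in the paper's simulation study.

```python
# Illustrative sketch (not the authors' code): naïve lumping vs. two-stage
# fixed-effect and random-effect meta-analysis across three sub-studies
# produced by mid-trial design changes. All numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Hypothetical sub-studies: per-arm sample sizes and true standardized
# mean differences (SMDs); unequal true SMDs mimic between-design heterogeneity.
n_per_arm = [60, 40, 80]
true_smd = [0.30, 0.45, 0.15]

study_smd, study_var, ctrl_data, trt_data = [], [], [], []
for n, d in zip(n_per_arm, true_smd):
    ctrl = rng.normal(0.0, 1.0, n)
    trt = rng.normal(d, 1.0, n)
    sd_pooled = np.sqrt((ctrl.var(ddof=1) + trt.var(ddof=1)) / 2)
    smd = (trt.mean() - ctrl.mean()) / sd_pooled
    var = 2 / n + smd**2 / (4 * n)      # large-sample variance of the SMD (equal arms)
    study_smd.append(smd)
    study_var.append(var)
    ctrl_data.append(ctrl)
    trt_data.append(trt)

smd = np.array(study_smd)
var = np.array(study_var)

# (1) Naïve lumping: ignore the design changes and pool all patients.
ctrl_all = np.concatenate(ctrl_data)
trt_all = np.concatenate(trt_data)
sd_all = np.sqrt((ctrl_all.var(ddof=1) + trt_all.var(ddof=1)) / 2)
naive = (trt_all.mean() - ctrl_all.mean()) / sd_all

# (2) Two-stage fixed-effect meta-analysis: inverse-variance weighting.
w_fe = 1 / var
fixed = np.sum(w_fe * smd) / np.sum(w_fe)

# (3) Two-stage random-effect meta-analysis: DerSimonian-Laird tau^2.
q = np.sum(w_fe * (smd - fixed) ** 2)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(smd) - 1)) / c)
w_re = 1 / (var + tau2)
random_eff = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = random_eff + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_re

print(f"naïve lumping SMD:  {naive:.3f}")
print(f"fixed-effect SMD:   {fixed:.3f}")
print(f"random-effect SMD:  {random_eff:.3f}  95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
print(f"estimated tau^2:    {tau2:.4f}  (unstable with only {len(smd)} studies)")
```

With only three sub-studies, the tau^2 estimate printed at the end is highly unstable, which mirrors the paper's observation that the between-study heterogeneity parameter cannot be precisely estimated from so few studies.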
About the journal:
Clinical Trials is dedicated to advancing knowledge on the design and conduct of clinical trials and related research methodologies. Covering the design, conduct, analysis, synthesis, and evaluation of key methodologies, the journal stays at the forefront of emerging topics, including ethics, regulation, and policy impact.