{"title":"Influence function-based empirical likelihood for area under the receiver operating characteristic curve in presence of covariates.","authors":"Baoying Yang, Xinjie Hu, Gengsheng Qin","doi":"10.1177/09622802251345343","DOIUrl":"10.1177/09622802251345343","url":null,"abstract":"<p><p>In receiver operating characteristic (ROC) analysis, the area under the ROC curve (AUC) is a popular one-number summary of the discriminatory accuracy of a diagnostic test. AUC measures the overall diagnostic accuracy of a test but fails to account for the effect of covariates when covariates are present and associated with the test results. Adjustment for covariate effects can greatly improve the diagnostic accuracy of a test. In this paper, using information provided by the influence function, empirical likelihood (EL) methods are proposed for inference on the AUC in the presence of covariates. For parameters in the AUC regression model, it is shown that the asymptotic distribution of the influence function-based empirical log-likelihood ratio statistic is a standard chi-square distribution. Hence, confidence regions for the regression parameters can be obtained without any variance estimation. Simulation studies are conducted to compare the finite sample performances of the proposed EL-based methods with the existing normal approximation (NA)-based method in the AUC regression. Simulation results indicate that the bootstrap-calibrated influence function-based empirical likelihood (BIFEL) confidence region outperforms the NA-based confidence region in terms of coverage probability. We also propose an interval estimation method for the covariate-adjusted AUC based on the BIFEL confidence region. Finally, we illustrate the recommended method with a real prostate-specific antigen data example.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1574-1589"},"PeriodicalIF":1.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Group sequential analysis of marked point processes: Plasma donation trials.","authors":"Kecheng Li, Richard J Cook","doi":"10.1177/09622802251350263","DOIUrl":"10.1177/09622802251350263","url":null,"abstract":"<p><p>Plasma donation plays a critical role in modern medicine by providing lifesaving treatments for patients with a wide range of conditions like bleeding disorders, immune deficiencies, and infections. Evaluation of devices used to collect blood plasma from donors is essential to ensure donor safety. We consider the design of plasma donation trials when the goal is to assess the safety of a new device on the response to transfusions compared to the standard device. A unique feature is that the number of donations per donor varies substantially, so some individuals contribute more information and others less. The sample size formula is derived to ensure power requirements are met when analyses are based on generalized estimating equations and robust variance estimation. Strategies for interim monitoring based on group sequential designs using alpha spending functions are developed based on a robust covariance matrix for estimates of treatment effect over successive analyses. The design of a plasma donation study is illustrated where the focus is on assessing the safety of a new device with serious hypotensive adverse events as the primary outcome.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1646-1664"},"PeriodicalIF":1.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12365355/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144544952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Closed-form confidence intervals for saved time using summary statistics in Alzheimer's disease studies.","authors":"Guogen Shan, Yahui Zhang, Guoqiao Wang, Samuel S Wu, Aidong A Ding","doi":"10.1177/09622802251348796","DOIUrl":"10.1177/09622802251348796","url":null,"abstract":"<p><p>Saved time is used in Alzheimer's disease (AD) trials as an easy interpretation of the treatment benefit to communicate with patients, family members, and caregivers. The projection approach is frequently applied to estimate saved time and its confidence interval (CI) by using the placebo or treatment disease progression curves. The estimated standard error of saved time by using these existing methods does not account for the correlation between outcomes. In addition, there was no closed-form CI for researchers to use in practice. To fill this critical gap, we derive the closed-form CI for saved time estimated from the placebo or treatment disease progression curves. We compare them with regard to coverage probability and interval width under various disease progression patterns that are commonly observed in AD symptomatic therapy and disease-modifying therapy trials. Data from the phase 3 donanemab trials are used to illustrate the application of the new CI methods.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1605-1616"},"PeriodicalIF":1.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144561189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple imputation for systematically missing effect modifiers in individual participant data meta-analysis.","authors":"Robert Thiesmeier, Scott M Hofer, Nicola Orsini","doi":"10.1177/09622802251348800","DOIUrl":"10.1177/09622802251348800","url":null,"abstract":"<p><p>Individual participant data (IPD) meta-analysis of randomised trials is a crucial method for detecting and investigating effect modifications in medical research. However, few studies have explored scenarios involving systematically missing data on discrete effect modifiers (EMs) in IPD meta-analyses with a limited number of trials. This simulation study examines the impact of systematic missing values in IPD meta-analysis using a two-stage imputation method. We simulated IPD meta-analyses of randomised trials with multiple studies that had systematically missing data on the EM. A multivariable Weibull survival model was specified to assess beneficial (Hazard Ratio (HR)<math><mo>=</mo></math>0.8), null (HR<math><mo>=</mo></math>1.0), and harmful (HR<math><mo>=</mo></math>1.2) treatment effects for low, medium, and high levels of an EM, respectively. Bias and coverage were evaluated using Monte Carlo simulations. The absolute bias for common and heterogeneous effect IPD meta-analyses was less than 0.016 and 0.007, respectively, with coverage close to its nominal value across all EM levels. An uncongenial imputation model resulted in larger bias, even when the proportion of studies with systematically missing data on the EM was small. Overall, the proposed two-stage imputation approach provided unbiased estimates with improved precision. The assumptions and limitations of this approach are discussed.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1590-1604"},"PeriodicalIF":1.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12365359/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144333871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian inference for nonlinear mixed-effects location scale and interval-censoring cure-survival models: An application to pregnancy miscarriage.","authors":"Danilo Alvares, Cristian Meza, Rolando De la Cruz","doi":"10.1177/09622802251345485","DOIUrl":"10.1177/09622802251345485","url":null,"abstract":"<p><p>Motivated by a pregnancy miscarriage study, we propose a Bayesian joint model for longitudinal and time-to-event outcomes that takes into account different complexities of the problem. In particular, the longitudinal process is modeled by means of a nonlinear specification with subject-specific error variance. In addition, the exact time of fetal death is unknown, and a subgroup of women is not susceptible to miscarriage. Hence, we model the survival process via a mixture cure model for interval-censored data. Finally, both processes are linked through the subject-specific longitudinal mean and variance. A simulation study is conducted in order to validate our joint model. In the real application, we use individual weighted and Cox-Snell residuals to assess the goodness-of-fit of our proposal versus a joint model that shares only the subject-specific longitudinal mean (standard approach). In addition, the leave-one-out cross-validation criterion is applied to compare the predictive ability of both models.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1525-1533"},"PeriodicalIF":1.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12365357/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Strategies to boost statistical efficiency in randomized oncology trials with primary time-to-event endpoints.","authors":"Alan D Hutson, Han Yu","doi":"10.1177/09622802251343599","DOIUrl":"10.1177/09622802251343599","url":null,"abstract":"<p><p>Oncology clinical trials are increasingly expensive, necessitating efforts to streamline phase II and III trials to reduce costs and expedite treatment delivery. Randomization is often impractical in oncology trials due to small sample sizes and limited statistical power, leading to biased inferences. The FDA has recently published guidance documents encouraging the use of prognostic baseline measures to improve the precision of inferences around treatment effects. To address this, we propose an extension of Rosenbaum's exact testing method incorporating a variant of martingale residuals for right censored data. This method can dramatically improve the statistical power of the test comparing treatment arms given time-to-event endpoints as compared to the standard log-rank test. Additionally, the modification of the martingale residual provides a straightforward metric for summarizing treatment effect by quantifying the expected events per treatment arm at each time-point. This approach is illustrated using a phase II clinical trial in small cell lung cancer.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1534-1552"},"PeriodicalIF":1.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144476715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Group sequential designs for survival outcomes with adaptive randomization.","authors":"Yaxian Chen, Yeonhee Park","doi":"10.1177/09622802251340250","DOIUrl":"https://doi.org/10.1177/09622802251340250","url":null,"abstract":"<p><p>Driven by evolving Food and Drug Administration recommendations, modern clinical trials demand innovative designs that strike a balance between statistical rigor and ethical considerations. Covariate-adjusted response-adaptive randomization (CARA) designs bridge this gap by utilizing patient attributes and responses to skew treatment allocation in favor of the treatment expected to be best for an individual patient's profile. However, existing CARA designs for survival outcomes often rely on specific parametric models, constraining their applicability in clinical practice. To overcome this limitation, we propose a novel CARA method for survival outcomes (called CARAS) based on the Cox model, which improves model flexibility and mitigates the risk of model misspecification. Additionally, we introduce a group sequential overlap-weighted log-rank test to preserve the type I error rate in group sequential trials using CARAS. Comprehensive simulation studies and a real-world trial example demonstrate the proposed method's clinical benefit, statistical efficiency, and robustness to model misspecification compared to traditional randomized controlled trial designs and response-adaptive randomization designs.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251340250"},"PeriodicalIF":1.6,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144650606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal treatment regimes in the presence of a cure fraction.","authors":"Chenrui Qi, Zicheng Lin, Baqun Zhang, Cunjie Lin, Zishu Zhan","doi":"10.1177/09622802251338399","DOIUrl":"https://doi.org/10.1177/09622802251338399","url":null,"abstract":"<p><p>Despite the widespread use of time-to-event data in precision medicine, existing research has often neglected the presence of the cure fraction, assuming that all individuals will inevitably experience the event of interest. When a cure fraction is present, the cure rate and survival time of uncured patients should be considered in estimating the optimal individualized treatment regimes. In this study, we propose direct methods for estimating the optimal individualized treatment regimes that either maximize the cure rate or mean survival time of uncured patients. Additionally, we propose two optimal individualized treatment regimes that balance the tradeoff between the cure rate and mean survival time of uncured patients based on a constrained estimation framework for a more comprehensive assessment of individualized treatment regimes. This framework allows us to estimate the optimal individualized treatment regime that maximizes the population's cure rate without significantly compromising the mean survival time of those who remain uncured or maximizes the mean survival time of uncured patients while having the cure rate controlled at a desired level. The exterior-point algorithm is adopted to expedite the resolution of the constrained optimization problem and statistical validity is rigorously established. Furthermore, the advantages of the proposed methods are demonstrated via simulations and analysis of esophageal cancer data.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251338399"},"PeriodicalIF":1.6,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144650607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of egocentric network-based studies to estimate causal effects under interference.","authors":"Junhan Fang, Donna Spiegelman, Ashley L Buchanan, Laura Forastiere","doi":"10.1177/09622802251357021","DOIUrl":"https://doi.org/10.1177/09622802251357021","url":null,"abstract":"<p><p>Many public health interventions are conducted in settings where individuals are connected and the intervention assigned to some individuals may spill over to other individuals. In these settings, we can assess: (a) the individual effect on the treated, (b) the spillover effect on untreated individuals through an indirect exposure to the intervention, and (c) the overall effect on the whole population. Here, we consider an egocentric network-based randomized design in which a set of index participants is recruited and randomly assigned to treatment, while data are also collected on their untreated network members. Such a design is common in peer education interventions conceived to leverage behavioral influence among peers. Using the potential outcomes framework, we first clarify the assumptions required to rely on an identification strategy that is commonly used in the well-studied two-stage randomized design. Under these assumptions, causal effects can be jointly estimated using a regression model with a block-diagonal structure. We then develop sample size formulas for detecting individual, spillover, and overall effects for single and joint hypothesis tests, and investigate the role of different parameters. Finally, we illustrate the use of our sample size formulas for an egocentric network-based randomized experiment to evaluate a peer education intervention for HIV prevention.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251357021"},"PeriodicalIF":1.6,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144650605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semiparametric regression analysis of interval-censored failure time data with a cure subgroup and nonignorable missing covariates.","authors":"Yichen Lou, Mingyue Du, Peijie Wang, Xinyuan Song","doi":"10.1177/09622802251356592","DOIUrl":"https://doi.org/10.1177/09622802251356592","url":null,"abstract":"<p><p>This article discusses regression analysis of interval-censored failure time data in the presence of a cure fraction and nonignorable missing covariates. To address the challenges caused by interval censoring, missing covariates and the existence of a cure subgroup, we propose a joint semiparametric modeling framework that simultaneously models the failure time of interest and the missing covariates. In particular, we present a class of semiparametric nonmixture cure models for the failure time and a semiparametric density ratio model for the missing covariates. A two-step likelihood-based estimation procedure is developed and the large sample properties of the resulting estimators are established. An extensive numerical study demonstrates the good performance of the proposed method in practical settings and the proposed approach is applied to an Alzheimer's disease study that motivated this study.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251356592"},"PeriodicalIF":1.6,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144627017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}