{"title":"Covariate-adjusted inference for doubly adaptive biased coin design.","authors":"Fuyi Tu, Wei Ma","doi":"10.1177/09622802251324750","DOIUrl":"https://doi.org/10.1177/09622802251324750","url":null,"abstract":"<p><p>Randomized controlled trials (RCTs) are pivotal for evaluating the efficacy of medical treatments and interventions, serving as a cornerstone in clinical research. In addition to randomization, achieving balances among multiple targets, such as statistical validity, efficiency, and ethical considerations, is also a central issue in RCTs. The doubly-adaptive biased coin design (DBCD) is notable for its high flexibility and efficiency in achieving any predetermined optimal allocation ratio and reducing variance for a given target allocation. However, DBCD does not account for abundant covariates that may be correlated with responses, which could further enhance trial efficiency. To address this limitation, this article explores the use of covariates in the analysis stage and evaluates the benefits of nonlinear covariate adjustment for estimating treatment effects. We propose a general framework to capture the intricate relationship between subjects' covariates and responses, supported by rigorous theoretical derivation and empirical validation via simulation study. Additionally, we introduce the use of sample splitting techniques for machine learning methods under DBCD, demonstrating the effectiveness of the corresponding estimators in high-dimensional cases. 
This paper aims to advance both the theoretical research and practical application of DBCD, thereby achieving more accurate and ethical clinical trials.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251324750"},"PeriodicalIF":1.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143670953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to add baskets to an ongoing basket trial with information borrowing.","authors":"Libby Daniells, Pavel Mozgunov, Helen Barnett, Alun Bedding, Thomas Jaki","doi":"10.1177/09622802251316961","DOIUrl":"https://doi.org/10.1177/09622802251316961","url":null,"abstract":"<p><p>Basket trials test a single therapeutic treatment on several patient populations under one master protocol. A desirable adaptive design feature is the ability to incorporate new baskets to an ongoing trial. Limited basket sample sizes can result in reduced power and precision of treatment effect estimates, which could be amplified in added baskets due to the shorter recruitment time. While various Bayesian information borrowing techniques have been introduced to tackle the issue of small sample sizes, the impact of including new baskets into the borrowing model has yet to be investigated. We explore approaches for adding baskets to an ongoing trial under information borrowing. Basket trials have pre-defined efficacy criteria to determine whether the treatment is effective for patients in each basket. The efficacy criteria are often calibrated a-priori in order to control the basket-wise type I error rate to a nominal level. Traditionally, this is done under a null scenario in which the treatment is ineffective in all baskets, however, we show that calibrating under this scenario alone will not guarantee error control under alternative scenarios. We propose a novel calibration approach that is more robust to false decision making. Simulation studies are conducted to assess the performance of the approaches for adding a basket, which is monitored through type I error rate control and power. The results display a substantial improvement in power for a new basket, however, this comes with potential inflation of error rates. 
We show that this can be reduced under the proposed calibration procedure.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251316961"},"PeriodicalIF":1.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143670959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On estimation of overall treatment effects in multiregional clinical trials under a discrete random effects model.","authors":"Shu-Han Wan, Hwa-Chi Liang, Hsiao-Hui Tsou, Hong-Dar Wu, Suojin Wang","doi":"10.1177/09622802251319120","DOIUrl":"https://doi.org/10.1177/09622802251319120","url":null,"abstract":"<p><p>Multiregional clinical trials (MRCTs) have become a standard strategy for pharmaceutical product development worldwide. The heterogeneity of regional treatment effects is anticipated in an MRCT. For a two-group comparative study in an MRCT, patient assignments, including regional weights and treatment allocation ratios, are predetermined under the same protocol. In practice, the observed patient assignments at the final analysis stage are often not equal to the predetermined patient assignments, which may impact the accuracy of estimating the overall treatment effect and may lead to a biased estimator. In this study, we use a discrete random effects model (DREM) to account for the heterogeneous treatment effect across regions in an MRCT and propose a bias-adjusted estimator of the overall treatment effect through a naïve estimator conditioned on ancillary statistics based on the observed patient assignments at the final analysis stage in the trial. We also perform power analysis for the overall treatment effect and determine the overall sample size for the bias-adjusted estimator with the DREM. Results of simulation studies are given to illustrate applications of the proposed approach. 
Finally, we provide an example to demonstrate the implementation of the proposed approach.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251319120"},"PeriodicalIF":1.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143670983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The joint quantile regression modeling of mixed ordinal and continuous responses with its application to an obesity risk data.","authors":"Hong-Xia Zhang, Yu-Zhu Tian, Yue Wang, Mao-Zai Tian","doi":"10.1177/09622802251316974","DOIUrl":"https://doi.org/10.1177/09622802251316974","url":null,"abstract":"<p><p>In clinical medical health research, individual measurements sometimes appear as a mixture of ordinal and continuous responses. There are some statistical correlations between response indicators. Regarding the joint modeling of mixed responses, the effect of a set of explanatory variables on the conditional mean of mixed responses is usually studied based on a mean regression model. However, mean regression results tend to underperform for data with non-normal errors and outliers. Quantile regression (QR) offers not only robust estimates but also the ability to analyze the impact of explanatory variables on various quantiles of the response variable. In this paper, we propose a joint QR modeling approach for mixed ordinal and continuous responses and apply it to the analysis of a set of obesity risk data. Firstly, we construct the joint QR model for mixed ordinal and continuous responses based on multivariate asymmetric Laplace distribution and a latent variable model. Secondly, we perform parameter estimation of the model using a Markov chain Monte Carlo algorithm. 
Finally, Monte Carlo simulations and an analysis of obesity risk data are used to verify the validity of the proposed model and method.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251316974"},"PeriodicalIF":1.6,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An iterative matrix uncertainty selector for high-dimensional generalized linear models with measurement errors.","authors":"Betrand Fesuh Nono, Georges Nguefack-Tsague, Martin Kegnenlezom, Eugène-Patrice N Nguéma","doi":"10.1177/09622802251316963","DOIUrl":"https://doi.org/10.1177/09622802251316963","url":null,"abstract":"<p><p>Measurement error is a prevalent issue in high-dimensional generalized linear regression that existing regularization techniques may inadequately address. Most require estimating error distributions, which can be computationally prohibitive or unrealistic. We introduce an error distribution-free approach for variable selection called the Iterative Matrix Uncertainty Selector (IMUS). IMUS employs the matrix uncertainty selector framework for linear models, which is known for its selection consistency properties. It features an efficient iterative algorithm easily implemented for any generalized linear model within the exponential family. Empirically, we demonstrate that IMUS performs well in simulations and on three microarray gene expression datasets, achieving effective covariate selection with smoother convergence and clearer elbow criteria compared to other error distribution free methods. Notably, simulation studies in logistic and Poisson regression showed that IMUS exhibited smoother convergence and clearer elbow criteria, performing comparably to the Generalized Matrix Uncertainty Selector (GMUS) and Generalized Matrix Uncertainty Lasso (GMUL) in covariate selection. In many scenarios, IMUS had smaller estimation errors than GMUL and GMUS, measured by both the 1- and 2-norms. In applications to three microarray datasets with noisy measurements, GMUS faced convergence issues, while GMUL converged but lacked well-defined elbows for two datasets. 
In contrast, IMUS converged with well-defined elbows for all datasets, providing a potentially effective solution for high-dimensional regression problems involving measurement errors.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802251316963"},"PeriodicalIF":1.6,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143664493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relationship between collider bias and interactions on the log-additive scale.","authors":"Apostolos Gkatzionis, Shaun R Seaman, Rachael A Hughes, Kate Tilling","doi":"10.1177/09622802241306860","DOIUrl":"https://doi.org/10.1177/09622802241306860","url":null,"abstract":"<p><p>Collider bias occurs when conditioning on a common effect (collider) of two variables <math><mi>X</mi><mo>,</mo><mi>Y</mi></math>. In this article, we quantify the collider bias in the estimated association between exposure <math><mi>X</mi></math> and outcome <math><mi>Y</mi></math> induced by selecting on one value of a binary collider <math><mi>S</mi></math> of the exposure and the outcome. In the case of logistic regression, it is known that the magnitude of the collider bias in the exposure-outcome regression coefficient is proportional to the strength of interaction <math><msub><mi>δ</mi><mn>3</mn></msub></math> between <math><mi>X</mi></math> and <math><mi>Y</mi></math> in a log-additive model for the collider: <math><mrow><mi>P</mi></mrow><mo>(</mo><mi>S</mi><mo>=</mo><mn>1</mn><mrow><mo>|</mo></mrow><mi>X</mi><mo>,</mo><mi>Y</mi><mo>)</mo><mo>=</mo><mi>exp</mi><mspace></mspace><mrow><mo>{</mo><msub><mi>δ</mi><mn>0</mn></msub><mo>+</mo><msub><mi>δ</mi><mn>1</mn></msub><mi>X</mi><mo>+</mo><msub><mi>δ</mi><mn>2</mn></msub><mi>Y</mi><mo>+</mo><msub><mi>δ</mi><mn>3</mn></msub><mi>X</mi><mi>Y</mi><mo>}</mo></mrow></math>. We show that this result also holds under a linear or Poisson regression model for the exposure-outcome association. We then illustrate numerically that even if a log-additive model with interactions is not the true model for the collider, the interaction term in such a model is still informative about the magnitude of collider bias. 
Finally, we discuss the implications of these findings for methods that attempt to adjust for collider bias, such as inverse probability weighting, which is often implemented without including interactions between variables in the weighting model.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802241306860"},"PeriodicalIF":1.6,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143537748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Long-term Dagum-power variance function frailty regression model: Application in health studies.","authors":"Agatha Sacramento Rodrigues, Patrick Borges","doi":"10.1177/09622802241304113","DOIUrl":"10.1177/09622802241304113","url":null,"abstract":"<p><p>Survival models with cure fractions, known as long-term survival models, are widely used in epidemiology to account for both immune and susceptible patients regarding a failure event. In such studies, it is also necessary to estimate unobservable heterogeneity caused by unmeasured prognostic factors. Moreover, the hazard function may exhibit a non-monotonic shape, specifically, an unimodal hazard function. In this article, we propose a long-term survival model based on a defective version of the Dagum distribution, incorporating a power variance function frailty term to account for unobservable heterogeneity. This model accommodates survival data with cure fractions and non-monotonic hazard functions. The distribution is reparameterized in terms of the cure fraction, with covariates linked via a logit link, allowing for direct interpretation of covariate effects on the cure fraction-an uncommon feature in defective approaches. 
We present maximum likelihood estimation for model parameters, assess performance through Monte Carlo simulations, and illustrate the model's applicability using two health-related datasets: severe COVID-19 in pregnant and postpartum women and patients with malignant skin neoplasms.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"407-439"},"PeriodicalIF":1.6,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143400106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Weighting methods for truncation by death in cluster-randomized trials.","authors":"Dane Isenberg, Michael O Harhay, Nandita Mitra, Fan Li","doi":"10.1177/09622802241309348","DOIUrl":"10.1177/09622802241309348","url":null,"abstract":"<p><p>Patient-centered outcomes, such as quality of life and length of hospital stay, are the focus in a wide array of clinical studies. However, participants in randomized trials for elderly or critically and severely ill patient populations may have truncated or undefined non-mortality outcomes if they do not survive through the measurement time point. To address truncation by death, the survivor average causal effect has been proposed as a causally interpretable subgroup treatment effect defined under the principal stratification framework. However, the majority of methods for estimating the survivor average causal effect have been developed in the context of individually randomized trials. Only limited discussions have been centered around cluster-randomized trials, where methods typically involve strong distributional assumptions for outcome modeling. In this article, we propose two weighting methods to estimate the survivor average causal effect in cluster-randomized trials that obviate the need for potentially complicated outcome distribution modeling. We establish the requisite assumptions that address latent clustering effects to enable point identification of the survivor average causal effect, and we provide computationally efficient asymptotic variance estimators for each weighting estimator. In simulations, we evaluate our weighting estimators, demonstrating their finite-sample operating characteristics and robustness to certain departures from the identification assumptions. 
We illustrate our methods using data from a cluster-randomized trial to assess the impact of a sedation protocol on mechanical ventilation among children with acute respiratory failure.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"473-489"},"PeriodicalIF":1.6,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11951466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143068032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Jointly assessing multiple endpoints in pilot and feasibility studies.","authors":"Robert N Montgomery, Amy E Bodde, Eric D Vidoni","doi":"10.1177/09622802241311219","DOIUrl":"10.1177/09622802241311219","url":null,"abstract":"<p><p>Pilot and feasibility studies are routinely used to determine whether a definitive trial should be pursued; however, the methodologies used to assess feasibility endpoints are often basic and are rarely informed by the requirements of the planned future trial. We propose a new method for analyzing feasibility outcomes which can incorporate relationships between endpoints, utilize a preliminary study design for a future trial and allow for multiple types of feasibility endpoints. The approach specifies a Joint Feasibility Space (JFS) which is the combination of feasibility outcomes that would render a future trial feasible. We estimate the probability of being in the JFS using Bayesian methods and use simulation to create a decision rule based on frequentist operating characteristics. We compare our approach to other general-purpose methods in the literature with simulation and show that our approach has approximately the same performance when analyzing a single feasibility endpoint but is more efficient with more than one endpoint. Feasibility endpoints should be the focus of pilot and feasibility studies. 
The analyses of these endpoints deserve more attention than they are given, and we have provided a new, effective method for their assessment.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"561-573"},"PeriodicalIF":1.6,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11951445/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143400103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust propensity score estimation via loss function calibration.","authors":"Yimeng Shang, Yu-Han Chiu, Lan Kong","doi":"10.1177/09622802241308709","DOIUrl":"10.1177/09622802241308709","url":null,"abstract":"<p><p>Propensity score estimation is often used as a preliminary step to estimate the average treatment effect with observational data. Nevertheless, misspecification of propensity score models undermines the validity of effect estimates in subsequent analyses. Prediction-based machine learning algorithms are increasingly used to estimate propensity scores to allow for more complex relationships between covariates. However, these approaches may not necessarily achieve covariates balancing. We propose a calibration-based method to better incorporate covariate balance properties in a general modeling framework. Specifically, we calibrate the loss function by adding a covariate imbalance penalty to standard parametric (e.g. logistic regressions) or machine learning models (e.g. neural networks). Our approach may mitigate the impact of model misspecification by explicitly taking into account the covariate balance in the propensity score estimation process. The empirical results show that the proposed method is robust to propensity score model misspecification. The integration of loss function calibration improves the balance of covariates and reduces the root-mean-square error of causal effect estimates. 
When the propensity score model is misspecified, the neural-network-based model yields the best estimator with less bias and smaller variance as compared to other methods considered.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"457-472"},"PeriodicalIF":1.6,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11951360/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143411114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}