{"title":"A Bayesian quasi-likelihood design for identifying the minimum effective dose and maximum utility dose in dose-ranging studies","authors":"Feng Tian, Ruitao Lin, Li Wang, Ying Yuan","doi":"10.1177/09622802241239268","DOIUrl":"https://doi.org/10.1177/09622802241239268","url":null,"abstract":"Most existing dose-ranging study designs focus on assessing the dose–efficacy relationship and identifying the minimum effective dose. There is an increasing interest in optimizing the dose based on the benefit–risk tradeoff. We propose a Bayesian quasi-likelihood dose-ranging design that jointly considers safety and efficacy to simultaneously identify the minimum effective dose and the maximum utility dose to optimize the benefit–risk tradeoff. The binary toxicity endpoint is modeled using a beta-binomial model. The efficacy endpoint is modeled using the quasi-likelihood approach to accommodate various types of data (e.g. binary, ordinal or continuous) without imposing any parametric assumptions on the dose–response curve. Our design utilizes a utility function as a measure of benefit–risk tradeoff and adaptively assign patients to doses based on the doses’ likelihood of being the minimum effective dose and maximum utility dose. The design takes a group-sequential approach. At each interim, the doses that are deemed overly toxic or futile are dropped. At the end of the trial, we use posterior probability criteria to assess the strength of the dose–response relationship for establishing the proof-of-concept. If the proof-of-concept is established, we identify the minimum effective dose and maximum utility dose. Our simulation study shows that compared with some existing designs, the Bayesian quasi-likelihood dose-ranging design is robust and yields competitive performance in establishing proof-of-concept and selecting the minimum effective dose. Moreover, it includes an additional feature for further maximum utility dose selection.","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":"1 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140581069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Isotonic design for single-arm biomarker stratified trials","authors":"Lang Li, Anastasia Ivanova","doi":"10.1177/09622802241238978","DOIUrl":"https://doi.org/10.1177/09622802241238978","url":null,"abstract":"In single-arm trials with a predefined subgroup based on baseline biomarkers, it is often assumed that a biomarker defined subgroup, the biomarker positive subgroup, has the same or higher response to treatment compared to its complement, the biomarker negative subgroup. The goal is to determine if the treatment is effective in each of the subgroups or in the biomarker positive subgroup only or not effective at all. We propose the isotonic stratified design for this problem. The design has a joint set of decision rules for biomarker positive and negative subjects and utilizes joint estimation of response probabilities using assumed monotonicity of response between the biomarker negative and positive subgroups. The new design reduces the sample size requirement when compared to running two Simon's designs in each biomarker positive and negative. For example, the new design requires 23%–35% fewer patients than running two Simon's designs for scenarios we considered. Alternatively, the new design allows evaluating the response probability in both biomarker negative and biomarker positive subgroups using only 40% more patients needed for running Simon's design in the biomarker positive subgroup only.","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":"52 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140580875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Bayesian hierarchical model for the analysis of visual analogue scaling tasks","authors":"Eldon Sorensen, Jacob Oleson, Ethan Kutlu, Bob McMurray","doi":"10.1177/09622802241242319","DOIUrl":"https://doi.org/10.1177/09622802241242319","url":null,"abstract":"In psychophysics and psychometrics, an integral method to the discipline involves charting how a person’s response pattern changes according to a continuum of stimuli. For instance, in hearing science, Visual Analog Scaling tasks are experiments in which listeners hear sounds across a speech continuum and give a numeric rating between 0 and 100 conveying whether the sound they heard was more like word “a” or more like word “b” (i.e. each participant is giving a continuous categorization response). By taking all the continuous categorization responses across the speech continuum, a parametric curve model can be fit to the data and used to analyze any individual’s response pattern by speech continuum. Standard statistical modeling techniques are not able to accommodate all of the specific requirements needed to analyze these data. Thus, Bayesian hierarchical modeling techniques are employed to accommodate group-level non-linear curves, individual-specific non-linear curves, continuum-level random effects, and a subject-specific variance that is predicted by other model parameters. In this paper, a Bayesian hierarchical model is constructed to model the data from a Visual Analog Scaling task study of mono-lingual and bi-lingual participants. Any nonlinear curve function could be used and we demonstrate the technique using the 4-parameter logistic function. Overall, the model was found to fit particularly well to the data from the study and results suggested that the magnitude of the slope was what most defined the differences in response patterns between continua.","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":"51 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140580991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing treatment effect heterogeneity in the presence of missing effect modifier data in cluster-randomized trials","authors":"Bryan S Blette, Scott D Halpern, Fan Li, Michael O Harhay","doi":"10.1177/09622802241242323","DOIUrl":"https://doi.org/10.1177/09622802241242323","url":null,"abstract":"Understanding whether and how treatment effects vary across subgroups is crucial to inform clinical practice and recommendations. Accordingly, the assessment of heterogeneous treatment effects based on pre-specified potential effect modifiers has become a common goal in modern randomized trials. However, when one or more potential effect modifiers are missing, complete-case analysis may lead to bias and under-coverage. While statistical methods for handling missing data have been proposed and compared for individually randomized trials with missing effect modifier data, few guidelines exist for the cluster-randomized setting, where intracluster correlations in the effect modifiers, outcomes, or even missingness mechanisms may introduce further threats to accurate assessment of heterogeneous treatment effect. In this article, the performance of several missing data methods are compared through a simulation study of cluster-randomized trials with continuous outcome and missing binary effect modifier data, and further illustrated using real data from the Work, Family, and Health Study. Our results suggest that multilevel multiple imputation and Bayesian multilevel multiple imputation have better performance than other available methods, and that Bayesian multilevel multiple imputation has lower bias and closer to nominal coverage than standard multilevel multiple imputation when there are model specification or compatibility issues.","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":"50 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140581070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A framework for testing non-inferiority in a three-arm, sequential, multiple assignment randomized trial.","authors":"Erina Paul, Bibhas Chakraborty, Alla Sikorskii, Samiran Ghosh","doi":"10.1177/09622802241232124","DOIUrl":"10.1177/09622802241232124","url":null,"abstract":"<p><p>Sequential multiple assignment randomized trial design is becoming increasingly used in the field of precision medicine. This design allows comparisons of sequences of adaptive interventions tailored to the individual patient. Superiority testing is usually the initial goal in order to determine which embedded adaptive intervention yields the best primary outcome on average. When direct superiority is not evident, yet an adaptive intervention poses other benefits, then non-inferiority testing is warranted. Non-inferiority testing in the sequential multiple assignment randomized trial setup is rather new and involves the specification of non-inferiority margin and other important assumptions that are often unverifiable internally. These challenges are not specific to sequential multiple assignment randomized trial and apply to two-arm non-inferiority trials that do not include a standard-of-care (or placebo) arm. To address some of these challenges, three-arm non-inferiority trials that include the standard-of-care arm are proposed. However, methods developed so far for three-arm non-inferiority trials are not sequential multiple assignment randomized trial-specific. This is because apart from embedded adaptive interventions, sequential multiple assignment randomized trial typically does not include a third standard-of-care arm. In this article, we consider a three-arm sequential multiple assignment randomized trial from an National Institutes of Health-funded study of symptom management strategies among people undergoing cancer treatment. Motivated by that example, we propose a novel data analytic method for non-inferiority testing in the framework of three-arm sequential multiple assignment randomized trial for the first time. Sample size and power considerations are discussed through extensive simulation studies to elucidate our method.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"611-633"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139940816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regression analysis of longitudinal data with random change point.","authors":"Peng Zhang, Xuerong Chen, Jianguo Sun","doi":"10.1177/09622802241232125","DOIUrl":"10.1177/09622802241232125","url":null,"abstract":"<p><p>A great deal of literature has been established for regression analysis of longitudinal data and in particular, many methods have been proposed for the situation where there exist some change points. However, most of these methods only apply to continuous response and focus on the situations where the change point only occurs on the response or the trend of the individual trajectory. In this article, we propose a new joint modeling approach that allows not only the change point to vary for different subjects or be subject-specific but also the effect heterogeneity of the covariates before and after the change point. The method combines a generalized linear mixed effect model with a random change point for the longitudinal response and a log-linear regression model for the random change point. For inference, a maximum likelihood estimation procedure is developed and the asymptotic properties of the resulting estimators, which differ from the standard asymptotic results, are established. A simulation study is conducted and suggests that the proposed method works well for practical situations. An application to a set of real data on COVID-19 is provided.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"634-646"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139940819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Statistical inference for diagnostic test accuracy studies with multiple comparisons.","authors":"Max Westphal, Antonia Zapf","doi":"10.1177/09622802241236933","DOIUrl":"10.1177/09622802241236933","url":null,"abstract":"<p><p>Diagnostic accuracy studies assess the sensitivity and specificity of a new index test in relation to an established comparator or the reference standard. The development and selection of the index test are usually assumed to be conducted prior to the accuracy study. In practice, this is often violated, for instance, if the choice of the (apparently) best biomarker, model or cutpoint is based on the same data that is used later for validation purposes. In this work, we investigate several multiple comparison procedures which provide family-wise error rate control for the emerging multiple testing problem. Due to the nature of the co-primary hypothesis problem, conventional approaches for multiplicity adjustment are too conservative for the specific problem and thus need to be adapted. In an extensive simulation study, five multiple comparison procedures are compared with regard to statistical error rates in least-favourable and realistic scenarios. This covers parametric and non-parametric methods and one Bayesian approach. All methods have been implemented in the new open-source R package cases which allows us to reproduce all simulation results. Based on our numerical results, we conclude that the parametric approaches (maxT and Bonferroni) are easy to apply but can have inflated type I error rates for small sample sizes. The two investigated Bootstrap procedures, in particular the so-called pairs Bootstrap, allow for a family-wise error rate control in finite samples and in addition have a competitive statistical power.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"669-680"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11025299/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140137236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-validation approaches for penalized Cox regression.","authors":"Biyue Dai, Patrick Breheny","doi":"10.1177/09622802241233770","DOIUrl":"10.1177/09622802241233770","url":null,"abstract":"<p><p>Cross-validation is the most common way of selecting tuning parameters in penalized regression, but its use in penalized Cox regression models has received relatively little attention in the literature. Due to its partial likelihood construction, carrying out cross-validation for Cox models is not straightforward, and there are several potential approaches for implementation. Here, we propose a new approach based on cross-validating the linear predictors of the Cox model and compare it to approaches that have been proposed elsewhere. We show that the proposed approach offers an attractive balance of performance and numerical stability, and illustrate these advantages using simulated data as well as analyzing a high-dimensional study of gene expression and survival in lung cancer patients.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"702-715"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian framework for multi-source data integration-Application to human extrapolation from preclinical studies.","authors":"Sandrine Boulet, Moreno Ursino, Robin Michelet, Linda Bs Aulin, Charlotte Kloft, Emmanuelle Comets, Sarah Zohar","doi":"10.1177/09622802241231493","DOIUrl":"10.1177/09622802241231493","url":null,"abstract":"<p><p>In preclinical investigations, for example, in in vitro, in vivo, and in silico studies, the pharmacokinetic, pharmacodynamic, and toxicological characteristics of a drug are evaluated before advancing to first-in-man trial. Usually, each study is analyzed independently and the human dose range does not leverage the knowledge gained from all studies. Taking into account all preclinical data through inferential procedures can be particularly interesting in obtaining a more precise and reliable starting dose and dose range. Our objective is to propose a Bayesian framework for multi-source data integration, customizable, and tailored to the specific research question. We focused on preclinical results extrapolated to humans, which allowed us to predict the quantities of interest (e.g. maximum tolerated dose, etc.) in humans. We build an approach, divided into four steps, based on a sequential parameter estimation for each study, extrapolation to human, commensurability checking between posterior distributions and final information merging to increase the precision of estimation. The new framework is evaluated via an extensive simulation study, based on a real-life example in oncology. Our approach allows us to better use all the information compared to a standard framework, reducing uncertainty in the predictions and potentially leading to a more efficient dose selection.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"574-588"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140050371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting absolute risk for a person with missing risk factors.","authors":"Bang Wang, Yu Cheng, Mitchell H Gail, Jason Fine, Ruth M Pfeiffer","doi":"10.1177/09622802241227945","DOIUrl":"10.1177/09622802241227945","url":null,"abstract":"<p><p>We compared methods to project absolute risk, the probability of experiencing the outcome of interest in a given projection interval accommodating competing risks, for a person from the target population with missing predictors. Without missing data, a perfectly calibrated model gives unbiased absolute risk estimates in a new target population, even if the predictor distribution differs from the training data. However, if predictors are missing in target population members, a reference dataset with complete data is needed to impute them and to estimate absolute risk, conditional only on the observed predictors. If the predictor distributions of the reference data and the target population differ, this approach yields biased estimates. We compared the bias and mean squared error of absolute risk predictions for seven methods that assume predictors are missing at random (MAR). Some methods imputed individual missing predictors, others imputed linear predictor combinations (risk scores). Simulations were based on real breast cancer predictor distributions and outcome data. We also analyzed a real breast cancer dataset. The largest bias for all methods resulted from different predictor distributions of the reference and target populations. No method was unbiased in this situation. Surprisingly, violating the MAR assumption did not induce severe biases. Most multiple imputation methods performed similarly and were less biased (but more variable) than a method that used a single expected risk score. Our work shows the importance of selecting predictor reference datasets similar to the target population to reduce bias of absolute risk predictions with missing risk factors.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"557-573"},"PeriodicalIF":2.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139997445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}