{"title":"Bayesian Response Adaptive Randomization for Randomized Clinical Trials With Continuous Outcomes: The Role of Covariate Adjustment.","authors":"Vahan Aslanyan, Trevor Pickering, Michelle Nuño, Lindsay A Renfro, Judy Pa, Wendy J Mack","doi":"10.1002/pst.2443","DOIUrl":"10.1002/pst.2443","url":null,"abstract":"<p><p>Study designs incorporate interim analyses to allow for modifications to the trial design. These analyses may aid decisions regarding sample size, futility, and safety. Furthermore, they may provide evidence about potential differences between treatment arms. Bayesian response adaptive randomization (RAR) skews allocation proportions such that fewer participants are assigned to the inferior treatments. However, these allocation changes may introduce covariate imbalances. We discuss two versions of Bayesian RAR (with and without covariate adjustment for a binary covariate) for continuous outcomes analyzed using change scores and repeated measures, while considering either regression or mixed models for interim analysis modeling. Through simulation studies, we show that RAR (both versions) allocates more participants to better treatments compared to equal randomization, while reducing potential covariate imbalances. We also show that dynamic allocation using mixed models for repeated measures yields a smaller allocation proportion variance while having a similar covariate imbalance as regression models. Additionally, covariate imbalance was smallest for methods using covariate-adjusted RAR (CARA) in scenarios with small sample sizes and covariate prevalence less than 0.3. Covariate imbalance did not differ between RAR and CARA in simulations with larger sample sizes and higher covariate prevalence. We thus recommend a CARA approach for small pilot/exploratory studies for the identification of candidate treatments for further confirmatory studies.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2443"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142505735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WATCH: A Workflow to Assess Treatment Effect Heterogeneity in Drug Development for Clinical Trial Sponsors.","authors":"Konstantinos Sechidis, Sophie Sun, Yao Chen, Jiarui Lu, Cong Zhang, Mark Baillie, David Ohlssen, Marc Vandemeulebroecke, Rob Hemmings, Stephen Ruberg, Björn Bornkamp","doi":"10.1002/pst.2463","DOIUrl":"10.1002/pst.2463","url":null,"abstract":"<p><p>This article proposes a Workflow for Assessing Treatment effeCt Heterogeneity (WATCH) in clinical drug development targeted at clinical trial sponsors. WATCH is designed to address the challenges of investigating treatment effect heterogeneity (TEH) in randomized clinical trials, where sample size and multiplicity limit the reliability of findings. The proposed workflow includes four steps: analysis planning, initial data analysis and analysis dataset creation, TEH exploration, and multidisciplinary assessment. The workflow offers a general overview of how treatment effects vary by baseline covariates in the observed data and guides the interpretation of the observed findings based on external evidence and the best scientific understanding. The workflow is exploratory and not inferential/confirmatory in nature but should be preplanned before database lock and analysis start. It is focused on providing a general overview rather than a single specific finding or subgroup with a differential effect.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2463"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142896375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Success and Futility Criteria for Accelerated Approval of Oncology Drugs.","authors":"Dong Xi, Jiangtao Gou","doi":"10.1002/pst.70004","DOIUrl":"10.1002/pst.70004","url":null,"abstract":"<p><p>Project FrontRunner encourages development of cancer drugs for advanced or metastatic disease in an earlier clinical setting by promoting regulatory approaches such as the accelerated approval pathway. The FDA draft guideline proposes a one-trial approach to combine accelerated approval and regular approval in a single trial to maintain efficiency. This article describes our idea of controlling Type I error for accelerated and regular approvals in the one-trial approach. We introduce success and futility boundaries on p-values for accelerated approval, creating three outcomes: success, regular approval only (RA), and futility. Under success, accelerated approval can be claimed; under RA, only regular approval is considered; under futility, the trial stops early. For both success and RA, the endpoint for regular approval can be tested with no penalty on its significance level. The proposed approach is robust to all possible values of the correlation between the test statistics of the endpoints for accelerated and regular approvals. This framework is flexible, allowing clinical trial teams to tailor success and futility boundaries to meet clinical and regulatory needs while maintaining overall Type I error control in the strong sense.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"24 2","pages":"e70004"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143567868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pre-Posterior Distributions in Drug Development and Their Properties.","authors":"Andrew P Grieve","doi":"10.1002/pst.2450","DOIUrl":"10.1002/pst.2450","url":null,"abstract":"<p><p>The topic of this article is pre-posterior distributions of success or failure. These distributions, determined before a study is run and based on all our assumptions, are what we should believe about the treatment effect if we are told only that the study has been successful, or unsuccessful. I show how the pre-posterior distributions of success and failure can be used during the planning phase of a study to investigate whether the study is able to discriminate between effective and ineffective treatments. I show how these distributions are linked to the probability of success (PoS), or failure, and how they can be determined from simulations if standard asymptotic normality assumptions are inappropriate. I show the link to the concept of the conditional <math> <semantics><mrow><mi>P</mi> <mi>o</mi> <mi>S</mi></mrow> <annotation>$$ P o S $$</annotation></semantics> </math> introduced by Temple and Robertson in the context of the planning of multiple studies. Finally, I show that they can also be constructed regardless of whether the analysis of the study is frequentist or fully Bayesian.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2450"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142716661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Subject Level Covariate Information in Bayesian Mixture Models for Basket Trials.","authors":"Sneha Govande, Elizabeth H Slate","doi":"10.1002/pst.70006","DOIUrl":"10.1002/pst.70006","url":null,"abstract":"<p><p>Basket trials are gaining importance with advancements in precision medicine. A basket trial evaluates one or more treatments for efficacy among more than one cancer type (histology) in a single clinical trial. Compared to traditional designs, basket trials can reduce the time required for testing and, by pooling across cancer types, they also allow drugs to be tested for rare cancers. However, the potential for heterogeneity in treatment efficacy across cancer types poses modeling challenges. Our model aims to assist cancer-type-level go/no-go decisions in the initial phases of the trial through a latent cluster structure that incorporates subject-level covariate information. We model subjects' responses using a Bayesian mixture model where the mixture weights depend on a measure of similarity among subjects' covariate values. A simulation study demonstrates that our proposed Bayesian Partition Model with Covariates (BPMx) robustly estimates basket-level mean response and can provide insight about the latent cluster structure. We further illustrate the model using response data from a published basket trial.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"24 2","pages":"e70006"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143664210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Federated Data Analysis Approach for the Evaluation of Surrogate Endpoints.","authors":"Dries De Witte, Ariel Alonso Abad, Diane Stephenson, Yashmin Karten, Antoine Leuzy, Gregory Klein, Geert Molenberghs","doi":"10.1002/pst.70003","DOIUrl":"10.1002/pst.70003","url":null,"abstract":"<p><p>In clinical trials, surrogate endpoints, which are more cost-effective, occur earlier, or are measured more frequently, are sometimes used to replace costly, late, or rare true endpoints. Regulatory authorities typically require thorough evaluation and validation before accepting these surrogate endpoints as reliable substitutes. To this end, the meta-analytic framework is considered a very viable approach to validate surrogates at both the trial and individual levels. However, this framework requires data from multiple trials or centers, posing challenges when data sharing is not feasible. In this article, we propose a federated data analysis approach that allows organizations to maintain control over their datasets while still enabling surrogate validation through meta-analytic techniques. In this approach, there is no longer a need for raw data sharing. Instead, independent analyses are conducted at each organization. Thereafter, the results of these independent analyses are aggregated at a central analysis hub and the metrics for surrogate evaluation are extracted. We apply this approach to simulated and real clinical data, demonstrating how this federated approach can overcome data-sharing constraints and validate surrogate endpoints in decentralized settings.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"24 2","pages":"e70003"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143503171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Treatment Effect Measures Under Nonproportional Hazards.","authors":"Dan Jackson, Michael Sweeting, Rose Baker","doi":"10.1002/pst.2449","DOIUrl":"10.1002/pst.2449","url":null,"abstract":"<p><p>'Treatment effect measures under nonproportional hazards' by Snapinn et al. (Pharmaceutical Statistics, 22, 181-193) recently proposed some novel estimates of treatment effect for time-to-event endpoints. In this note, we clarify three points related to the proposed estimators that help to elucidate their properties. We hope that their work, and this commentary, will motivate further discussion concerning treatment effect measures that do not require the proportional hazards assumption.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2449"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142505738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real Effect or Bias? Good Practices for Evaluating the Robustness of Evidence From Comparative Observational Studies Through Quantitative Sensitivity Analysis for Unmeasured Confounding.","authors":"Douglas Faries, Chenyin Gao, Xiang Zhang, Chad Hazlett, James Stamey, Shu Yang, Peng Ding, Mingyang Shan, Kristin Sheffield, Nancy Dreyer","doi":"10.1002/pst.2457","DOIUrl":"10.1002/pst.2457","url":null,"abstract":"<p><p>The assumption of \"no unmeasured confounders\" is a critical but unverifiable assumption required for causal inference, yet quantitative sensitivity analyses to assess the robustness of real-world evidence remain under-utilized. This lack of use is likely due in part to the complexity of implementation and the often specific and restrictive data requirements for applying each method. With the advent of broadly applicable methods that do not require identifying a specific unmeasured confounder, along with publicly available code for implementation, roadblocks toward broader use of sensitivity analyses are decreasing. To spur greater application, we offer good practice guidance for addressing the potential for unmeasured confounding at both the design and analysis stages, including framing questions and an analytic toolbox for researchers. The questions at the design stage guide the researcher through steps for evaluating the potential robustness of the design while encouraging the gathering of additional data to reduce uncertainty due to potential confounding. At the analysis stage, the questions guide quantification of the robustness of the observed result, giving researchers a clearer indication of the strength of their conclusions. We demonstrate the application of this guidance using simulated data based on an observational fibromyalgia study, applying multiple methods from our analytic toolbox for illustration purposes.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2457"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142771284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation Methods for Estimands Using the Treatment Policy Strategy; a Simulation Study Based on the PIONEER 1 Trial.","authors":"James Bell, Thomas Drury, Tobias Mütze, Christian Bressen Pipper, Lorenzo Guizzaro, Marian Mitroiu, Khadija Rerhou Rantell, Marcel Wolbers, David Wright","doi":"10.1002/pst.2472","DOIUrl":"10.1002/pst.2472","url":null,"abstract":"<p><p>Estimands using the treatment policy strategy for addressing intercurrent events are common in Phase III clinical trials. One estimation approach for this strategy is retrieved dropout, whereby observed data following an intercurrent event are used to multiply impute missing data. However, such methods have had issues with variance inflation and model fitting due to data sparsity. This paper introduces likelihood-based versions of these approaches, investigating and comparing their statistical properties to those of the existing retrieved dropout approaches, simpler analysis models, and reference-based multiple imputation. We use a simulation based on data from the PIONEER 1 Phase III clinical trial in patients with Type 2 diabetes to present complex and relevant estimation challenges. The likelihood-based methods display similar statistical properties to their multiple imputation equivalents, but all retrieved dropout approaches suffer from high variance. Retrieved dropout approaches appear less biased than reference-based approaches, resulting in a bias-variance trade-off, but we conclude that the large degree of variance inflation is often more problematic than the bias. Therefore, only the simpler retrieved dropout models appear appropriate as a primary analysis in a clinical trial, and only where it is believed that most data following intercurrent events will be observed. The jump-to-reference approach may represent a more promising estimation approach for symptomatic treatments due to its relatively high power and ability to fit in the presence of much missing data, despite its strong assumptions and tendency toward conservative bias. More research is needed to further develop estimation of the treatment effect for a treatment policy strategy.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"24 2","pages":"e2472"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143567864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Average Hazard as Harmonic Mean.","authors":"Yasutaka Chiba","doi":"10.1002/pst.70009","DOIUrl":"10.1002/pst.70009","url":null,"abstract":"<p><p>A new measure was recently developed in the context of survival analysis that can be interpreted as a weighted arithmetic mean of the hazards with the survival function as the weight. However, when the average hazard is desired, it is more appropriate to use the harmonic mean rather than the arithmetic mean. Therefore, in this article, we derive the average hazard as a harmonic mean version of the expectation for hazards and show it to be equal to the previous weighted arithmetic mean. Furthermore, we demonstrate that the average hazard should be estimated using only the times at which the event is observed, while previous studies have allowed estimating the average hazard even when the truncation time is set to a time at which the event is not observed.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"24 2","pages":"e70009"},"PeriodicalIF":1.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11893520/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143597534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}