{"title":"Futility Interim Analysis Based on Probability of Success Using a Surrogate Endpoint.","authors":"Ronan Fougeray, Loïck Vidot, Marco Ratta, Zhaoyang Teng, Donia Skanji, Gaëlle Saint-Hilary","doi":"10.1002/pst.2410","DOIUrl":"10.1002/pst.2410","url":null,"abstract":"<p><p>In clinical trials with time-to-event data, the evaluation of treatment efficacy can be a long and complex process, especially when considering long-term primary endpoints. Using surrogate endpoints that correlate with the primary endpoint has become common practice to accelerate decision-making. Moreover, the ethical need to minimize sample size and the practical need to optimize available resources have encouraged the scientific community to develop methodologies that leverage historical data. Relying on the general theory of group sequential design and using a Bayesian framework, the methodology described in this paper exploits a documented historical relationship between a clinical \"final\" endpoint and a surrogate endpoint to build an informative prior for the primary endpoint, using surrogate data from an early interim analysis of the clinical trial. The predictive probability of success of the trial is then used to define a futility-stopping rule. The methodology demonstrates substantial enhancements in trial operating characteristics when there is good agreement between current and historical data. Furthermore, incorporating a robust approach that combines the surrogate prior with a vague component mitigates the impact of minor prior-data conflicts while maintaining acceptable performance even in the presence of significant prior-data conflicts. 
The proposed methodology was applied to design a Phase III clinical trial in metastatic colorectal cancer, with overall survival as the primary endpoint and progression-free survival as the surrogate endpoint.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"971-983"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141492960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Survival Analysis Without Sharing of Individual Patient Data by Using a Gaussian Copula.","authors":"Federico Bonofiglio","doi":"10.1002/pst.2415","DOIUrl":"10.1002/pst.2415","url":null,"abstract":"<p><p>Cox regression and Kaplan-Meier estimations are often needed in clinical research, and this requires access to individual patient data (IPD). However, IPD cannot always be shared because of privacy or proprietary restrictions, which complicates such estimations. We propose a method that generates pseudodata replacing the IPD by sharing only non-disclosive aggregates such as IPD marginal moments and a correlation matrix. Such aggregates are collected by a central computer and input as parameters to a Gaussian copula (GC) that generates the pseudodata. Survival inferences are computed on the pseudodata as if it were the IPD. Using practical examples, we demonstrate the utility of the method via the amount of IPD inferential content recoverable by the GC. We compare the GC to a summary-based meta-analysis and an IPD bootstrap distributed across several centers. Other pseudodata approaches are also considered. In the empirical results, the GC approximates the utility of the IPD bootstrap, although it might yield more conservative inferences and might have limitations in subgroup analyses. Overall, the GC avoids many legal problems related to IPD privacy or property while enabling approximation of common IPD survival analyses that are otherwise difficult to conduct. 
Sharing more IPD aggregates than is currently practiced could facilitate \"second-purpose\" research and relax concerns regarding IPD access.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1031-1044"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Methods for Quality Tolerance Limit (QTL) Monitoring.","authors":"J C Poythress, Jin Hyung Lee, Kentaro Takeda, Jun Liu","doi":"10.1002/pst.2427","DOIUrl":"10.1002/pst.2427","url":null,"abstract":"<p><p>In alignment with the ICH guideline for Good Clinical Practice [ICH E6(R2)], quality tolerance limit (QTL) monitoring has become a standard component of risk-based monitoring of clinical trials by sponsor companies. Parameters that are candidates for QTL monitoring are critical to participant safety and the quality of trial results. Breaching the QTL of a given parameter could indicate systematic issues with the trial that may impact participant safety or compromise the reliability of trial results. Methods for QTL monitoring should detect potential QTL breaches as early as possible while limiting the rate of false alarms. Early detection allows for the implementation of remedial actions that can prevent a QTL breach at the end of the trial. We demonstrate that statistically based methods that account for the expected value and variability of the data-generating process outperform simple methods based on fixed thresholds with respect to important operating characteristics. 
We also propose a Bayesian method for QTL monitoring and an extension that allows for the incorporation of partial information, demonstrating its potential to outperform frequentist methods originating from the statistical process control literature.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1166-1180"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141907380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reparametrized Firth's Logistic Regressions for Dose-Finding Study With the Biased-Coin Design.","authors":"Hyungwoo Kim, Seungpil Jung, Yudi Pawitan, Woojoo Lee","doi":"10.1002/pst.2423","DOIUrl":"10.1002/pst.2423","url":null,"abstract":"<p><p>Finding an adequate dose of a drug by revealing the dose-response relationship is a crucial and challenging problem in clinical development. The main concerns in dose-finding studies are identifying the minimum effective dose (MED) in anesthesia studies and the maximum tolerated dose (MTD) in oncology clinical trials. For the estimation of the MED and MTD, we propose two modifications of Firth's logistic regression using reparametrization, called reparametrized Firth's logistic regression (rFLR) and ridge-penalized reparametrized Firth's logistic regression (RrFLR). The proposed methods are designed by directly reducing the small-sample bias of the maximum likelihood estimate for the parameter of interest. In addition, we develop a method to construct confidence intervals for rFLR and RrFLR using the profile penalized likelihood. In the up-and-down biased-coin design, numerical studies confirm the superior performance of the proposed methods in terms of mean squared error, bias, and coverage accuracy of confidence intervals.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1117-1127"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141627326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal Cut-Point Selection Methods Under Binary Classification When Subclasses Are Involved.","authors":"Jia Wang, Lili Tian","doi":"10.1002/pst.2413","DOIUrl":"10.1002/pst.2413","url":null,"abstract":"<p><p>In practice, we often encounter binary classification problems where both main classes consist of multiple subclasses. For example, in an ovarian cancer study where biomarkers were evaluated for their accuracy in distinguishing noncancer cases from cancer cases, the noncancer class consists of healthy subjects and benign cases, while the cancer class consists of subjects at both early and late stages. This article aims to provide a large number of optimal cut-point selection methods for such settings. Furthermore, we also study confidence interval estimation of the optimal cut-points. Simulation studies are carried out to explore the performance of the proposed cut-point selection methods as well as the confidence interval estimation methods. A real ovarian cancer data set is analyzed using the proposed methods.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"984-1030"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing hypothesis tests in survival analysis under anticipated delayed effects.","authors":"José L Jiménez, Isobel Barrott, Francesca Gasperoni, Dominic Magirr","doi":"10.1002/pst.2393","DOIUrl":"10.1002/pst.2393","url":null,"abstract":"<p><p>What can be considered an appropriate statistical method for the primary analysis of a randomized clinical trial (RCT) with a time-to-event endpoint when we anticipate non-proportional hazards owing to a delayed effect? This question has been the subject of much recent debate. The standard approach is a log-rank test and/or a Cox proportional hazards model. Alternative methods have been explored in the statistical literature, such as weighted log-rank tests and tests based on the Restricted Mean Survival Time (RMST). While weighted log-rank tests can achieve high power compared to the standard log-rank test, some choices of weights may lead to type I error inflation under particular conditions. In addition, they are not linked to a mathematically unambiguous summary measure. Test statistics based on the RMST, on the other hand, allow one to investigate the average difference between two survival curves up to a pre-specified time point <math><mrow><mi>τ</mi></mrow></math>, a mathematically unambiguous summary measure. However, by emphasizing differences prior to <math><mrow><mi>τ</mi></mrow></math>, such test statistics may not fully capture the benefit of a new treatment in terms of long-term survival. In this article, we introduce a graphical approach for direct comparison of weighted log-rank tests and tests based on the RMST. 
This new perspective allows a more informed choice of the analysis method, going beyond power and type I error comparison.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"870-883"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140859909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using an early outcome as the sole source of information of interim decisions regarding treatment effect on a long-term endpoint: The non-Gaussian case.","authors":"Leandro Garcia Barrado, Tomasz Burzykowski","doi":"10.1002/pst.2398","DOIUrl":"10.1002/pst.2398","url":null,"abstract":"<p><p>In randomized clinical trials that use a long-term efficacy endpoint, the follow-up time necessary to observe the endpoint may be substantial. In such trials, an attractive option is to consider an interim analysis based solely on an early outcome that could be used to expedite the evaluation of the treatment's efficacy. Garcia Barrado et al. (Pharm Stat. 2022; 21: 209-219) developed a methodology that allows the introduction of such an early interim analysis for the case when both the early outcome and the long-term endpoint are normally distributed, continuous variables. We extend the methodology to any combination of the early-outcome and long-term-endpoint types. As an example, we consider the case of a binary outcome and a time-to-event endpoint. We further evaluate the potential gain in operating characteristics (power, expected trial duration, and expected sample size) of a trial with such an interim analysis as a function of the properties of the early outcome as a surrogate for the long-term endpoint.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"928-938"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141261163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A model-assisted design for partially or completely ordered groups.","authors":"Connor Celum, Mark Conaway","doi":"10.1002/pst.2396","DOIUrl":"10.1002/pst.2396","url":null,"abstract":"<p><p>This paper proposes a trial design for locating group-specific doses when groups are partially or completely ordered by dose sensitivity. Previous trial designs for partially ordered groups are model-based, whereas the proposed method is model-assisted, providing clinicians with a simpler design. The proposed method performs similarly to model-based methods, offering simplicity without sacrificing accuracy. Additionally, to the best of our knowledge, this is the first dose-finding method for partially ordered groups with convergence results. To generalize the proposed method, a framework is introduced that allows partial orders to be transferred to a grid format with a known ordering across rows but an unknown ordering within rows.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"906-927"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141071600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variable Duration Trial as an Alternative Design for Continuous Endpoints.","authors":"Jitendra Ganju, Julie Guoguang Ma","doi":"10.1002/pst.2418","DOIUrl":"10.1002/pst.2418","url":null,"abstract":"<p><p>Clinical trials with continuous primary endpoints typically measure outcomes at baseline, at a fixed timepoint (denoted T <sub>min</sub>), and at intermediate timepoints. The analysis is commonly performed using the mixed model repeated measures method. It is sometimes expected that the effect size will be larger with follow-up longer than T <sub>min</sub>. But extending the follow-up for all patients delays trial completion. We propose an alternative trial design and analysis method that potentially increases statistical power without extending the trial duration or increasing the sample size. We propose following the last enrolled patient until T <sub>min</sub>, with earlier enrollees having variable follow-up durations up to a maximum of T <sub>max</sub>. The sample size at T <sub>max</sub> will be smaller than at T <sub>min</sub>, and due to staggered enrollment, data missing at T <sub>max</sub> will be missing completely at random. For analysis, we propose an alpha-adjusted procedure based on the smaller of the p values at T <sub>min</sub> and T <sub>max</sub>, termed <math> <semantics><mrow><mtext>minP</mtext></mrow> </semantics> </math> . This approach can provide the highest power when the powers at T <sub>min</sub> and T <sub>max</sub> are similar. If the powers at T <sub>min</sub> and T <sub>max</sub> differ substantially, the power of <math> <semantics><mrow><mtext>minP</mtext></mrow> </semantics> </math> is modestly reduced compared with the larger of the two powers. 
Rare disease trials, due to the limited size of the patient population, may benefit most from this design.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1059-1064"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141591009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sample Size Estimation Using a Partially Clustered Frailty Model for Biomarker-Strategy Designs With Multiple Treatments.","authors":"Derek Dinart, Virginie Rondeau, Carine Bellera","doi":"10.1002/pst.2407","DOIUrl":"10.1002/pst.2407","url":null,"abstract":"<p><p>Biomarker-guided therapy is a growing area of research in medicine. To optimize the use of biomarkers, several study designs, including the biomarker-strategy design (BSD), have been proposed. Unlike traditional designs, the emphasis is on comparing treatment strategies rather than the treatments themselves. Patients are assigned to either a biomarker-based strategy (BBS) arm, in which biomarker-positive patients receive an experimental treatment that targets the identified biomarker, or a non-biomarker-based strategy (NBBS) arm, in which patients receive treatment regardless of their biomarker status. We propose a simulation method based on a partially clustered frailty model (PCFM), as well as an extension of Freidlin's formula, to estimate the sample size required for BSD with multiple targeted treatments. The sample size was mainly influenced by the heterogeneity of the treatment effect, the proportion of biomarker-negative patients, and the randomization ratio. The PCFM is well suited for the data structure and offers an alternative to traditional methodologies.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1084-1094"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141627249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}