{"title":"Propensity Score Analysis With Baseline and Follow-Up Measurements of the Outcome Variable.","authors":"Peter C Austin","doi":"10.1002/pst.2436","DOIUrl":"10.1002/pst.2436","url":null,"abstract":"<p><p>A common feature in cohort studies is when there is a baseline measurement of the continuous follow-up or outcome variable. Common examples include baseline measurements of physiological characteristics such as blood pressure or heart rate in studies where the outcome is post-baseline measurement of the same variable. Methods incorporating the propensity score are increasingly being used to estimate the effects of treatments using observational studies. We examined six methods for incorporating the baseline value of the follow-up variable when using propensity score matching or weighting. These methods differed according to whether the baseline value of the follow-up variable was included or excluded from the propensity score model, whether subsequent regression adjustment was conducted in the matched or weighted sample to adjust for the baseline value of the follow-up variable, and whether the analysis estimated the effect of treatment on the follow-up variable or on the change from baseline. We used Monte Carlo simulations with 750 scenarios. While no analytic method had uniformly superior performance, we provide the following recommendations: first, when using weighting and the ATE is the target estimand, use an augmented inverse probability weighted estimator or include the baseline value of the follow-up variable in the propensity score model and subsequently adjust for the baseline value of the follow-up variable in a regression model. Second, when the ATT is the target estimand, regardless of whether using weighting or matching, analyze change from baseline using a propensity score that excludes the baseline value of the follow-up variable.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"e2436"},"PeriodicalIF":1.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788469/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142140774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Potency Assay Variability Estimation in Practice
Hang Li, Tomasz M Witkos, Scott Umlauf, Christopher Thompson
Pharmaceutical Statistics, e2408, published 2025-01-01. DOI: 10.1002/pst.2408. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788244/pdf/

Abstract: During drug development, potency testing plays an important role in the quality assessment required for the manufacturing and marketing of biologics. Because of multiple operational and biological factors, higher variability is usually observed in bioassays than in physicochemical methods. In this paper, we discuss different sources of bioassay variability and how this variability can be estimated statistically. In addition, we propose an algorithm to estimate the variability of reportable results associated with different numbers of runs and the corresponding out-of-specification (OOS) rates under a given specification. Numerical experiments on multiple assay formats elucidate the empirical distribution of bioassay variability.

{"title":"Covariate adjustment and estimation of difference in proportions in randomized clinical trials.","authors":"Jialuo Liu, Dong Xi","doi":"10.1002/pst.2397","DOIUrl":"10.1002/pst.2397","url":null,"abstract":"<p><p>Difference in proportions is frequently used to measure treatment effect for binary outcomes in randomized clinical trials. The estimation of difference in proportions can be assisted by adjusting for prognostic baseline covariates to enhance precision and bolster statistical power. Standardization or g-computation is a widely used method for covariate adjustment in estimating unconditional difference in proportions, because of its robustness to model misspecification. Various inference methods have been proposed to quantify the uncertainty and confidence intervals based on large-sample theories. However, their performances under small sample sizes and model misspecification have not been comprehensively evaluated. We propose an alternative approach to estimate the unconditional variance of the standardization estimator based on the robust sandwich estimator to further enhance the finite sample performance. Extensive simulations are provided to demonstrate the performances of the proposed method, spanning a wide range of sample sizes, randomization ratios, and model specification. We apply the proposed method in a real data example to illustrate the practical utility.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"884-905"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141065823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimation of Treatment Policy Estimands for Continuous Outcomes Using Off-Treatment Sequential Multiple Imputation
Thomas Drury, Juan J Abellan, Nicky Best, Ian R White
Pharmaceutical Statistics, pages 1144-1155, published 2024-11-01. DOI: 10.1002/pst.2411. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602932/pdf/

Abstract: The estimands framework outlined in ICH E9 (R1) describes the components needed to precisely define the effects to be estimated in clinical trials, including how post-baseline 'intercurrent' events (IEs) are to be handled. In late-stage clinical trials, it is common to handle IEs such as treatment discontinuation with the treatment policy strategy and to target the treatment effect on outcomes regardless of treatment discontinuation. For continuous repeated measures, this type of effect is often estimated from all observed data before and after discontinuation, using either a mixed model for repeated measures (MMRM) or multiple imputation (MI) to handle missing data. In their basic forms, both estimation methods ignore treatment discontinuation in the analysis and may therefore be biased if patient outcomes after treatment discontinuation differ from those of patients still assigned to treatment and if missing data are more common among patients who have discontinued treatment. We therefore propose and evaluate a set of MI models that can accommodate differences between outcomes before and after treatment discontinuation. The models are evaluated in the context of planning a Phase 3 trial for a respiratory disease. We show that analyses ignoring treatment discontinuation can introduce substantial bias and sometimes underestimate variability. We also show that some of the proposed MI models can successfully correct the bias, but inevitably increase the variance. We conclude that some of the proposed MI models are preferable to the traditional analysis that ignores treatment discontinuation, although the precise choice of MI model will likely depend on the trial design, the disease of interest, and the amount of observed and missing data following treatment discontinuation.

Investigating Stability in Subgroup Identification for Stratified Medicine
G M Hair, T Jemielita, S Mt-Isa, P M Schnell, R Baumgartner
Pharmaceutical Statistics, pages 945-958, published 2024-11-01. DOI: 10.1002/pst.2409

Abstract: Subgroup analysis may be used to investigate treatment effect heterogeneity among subsets of the study population defined by baseline characteristics. Several methodologies have been proposed in recent years, and with these, statistical issues such as multiplicity, complexity, and selection bias have been widely discussed. Some methods adjust for one or more of these issues; however, few discuss or consider the stability of the subgroup assignments. We propose exploring the stability of subgroups as a sensitivity-analysis step for stratified medicine, both to assess the robustness of the identified subgroups and to identify possible factors that may drive instability. After applying the Bayesian credible subgroups method, a nonparametric bootstrap can be used to assess stability at the subgroup and patient levels. Our findings illustrate that when the treatment effect is small or not so evident, patients are more likely to switch to different subgroups ("jumpers") across bootstrap resamples. In contrast, when the treatment effect is large or extremely convincing, patients generally remain in the same subgroup. While the proposed subgroup stability method is illustrated with the Bayesian credible subgroups method on time-to-event data, the general approach can be used with other subgroup identification methods and endpoints.

{"title":"Futility Interim Analysis Based on Probability of Success Using a Surrogate Endpoint.","authors":"Ronan Fougeray, Loïck Vidot, Marco Ratta, Zhaoyang Teng, Donia Skanji, Gaëlle Saint-Hilary","doi":"10.1002/pst.2410","DOIUrl":"10.1002/pst.2410","url":null,"abstract":"<p><p>In clinical trials with time-to-event data, the evaluation of treatment efficacy can be a long and complex process, especially when considering long-term primary endpoints. Using surrogate endpoints to correlate the primary endpoint has become a common practice to accelerate decision-making. Moreover, the ethical need to minimize sample size and the practical need to optimize available resources have encouraged the scientific community to develop methodologies that leverage historical data. Relying on the general theory of group sequential design and using a Bayesian framework, the methodology described in this paper exploits a documented historical relationship between a clinical \"final\" endpoint and a surrogate endpoint to build an informative prior for the primary endpoint, using surrogate data from an early interim analysis of the clinical trial. The predictive probability of success of the trial is then used to define a futility-stopping rule. The methodology demonstrates substantial enhancements in trial operating characteristics when there is a good agreement between current and historical data. Furthermore, incorporating a robust approach that combines the surrogate prior with a vague component mitigates the impact of the minor prior-data conflicts while maintaining acceptable performance even in the presence of significant prior-data conflicts. The proposed methodology was applied to design a Phase III clinical trial in metastatic colorectal cancer, with overall survival as the primary endpoint and progression-free survival as the surrogate endpoint.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"971-983"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141492960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Survival Analysis Without Sharing of Individual Patient Data by Using a Gaussian Copula.","authors":"Federico Bonofiglio","doi":"10.1002/pst.2415","DOIUrl":"10.1002/pst.2415","url":null,"abstract":"<p><p>Cox regression and Kaplan-Meier estimations are often needed in clinical research and this requires access to individual patient data (IPD). However, IPD cannot always be shared because of privacy or proprietary restrictions, which complicates the making of such estimations. We propose a method that generates pseudodata replacing the IPD by only sharing non-disclosive aggregates such as IPD marginal moments and a correlation matrix. Such aggregates are collected by a central computer and input as parameters to a Gaussian copula (GC) that generates the pseudodata. Survival inferences are computed on the pseudodata as if it were the IPD. Using practical examples we demonstrate the utility of the method, via the amount of IPD inferential content recoverable by the GC. We compare GC to a summary-based meta-analysis and an IPD bootstrap distributed across several centers. Other pseudodata approaches are also considered. In the empirical results, GC approximates the utility of the IPD bootstrap although it might yield more conservative inferences and it might have limitations in subgroup analyses. Overall, GC avoids many legal problems related to IPD privacy or property while enabling approximation of common IPD survival analyses otherwise difficult to conduct. Sharing more IPD aggregates than is currently practiced could facilitate \"second purpose\"-research and relax concerns regarding IPD access.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"1031-1044"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Methods for Quality Tolerance Limit (QTL) Monitoring
J C Poythress, Jin Hyung Lee, Kentaro Takeda, Jun Liu
Pharmaceutical Statistics, pages 1166-1180, published 2024-11-01. DOI: 10.1002/pst.2427

Abstract: In alignment with the ICH guideline for Good Clinical Practice [ICH E6(R2)], quality tolerance limit (QTL) monitoring has become a standard component of sponsors' risk-based monitoring of clinical trials. Parameters that are candidates for QTL monitoring are critical to participant safety and the quality of trial results; breaching the QTL of a given parameter could indicate systematic issues with the trial that impact participant safety or compromise the reliability of the results. Methods for QTL monitoring should detect potential QTL breaches as early as possible while limiting the rate of false alarms, since early detection allows remedial actions that can prevent a QTL breach at the end of the trial. We demonstrate that statistically based methods accounting for the expected value and variability of the data-generating process outperform simple fixed-threshold methods with respect to important operating characteristics. We also propose a Bayesian method for QTL monitoring, together with an extension that allows the incorporation of partial information, and demonstrate its potential to outperform frequentist methods originating from the statistical process control literature.

Reparametrized Firth's Logistic Regressions for Dose-Finding Study With the Biased-Coin Design
Hyungwoo Kim, Seungpil Jung, Yudi Pawitan, Woojoo Lee
Pharmaceutical Statistics, pages 1117-1127, published 2024-11-01. DOI: 10.1002/pst.2423

Abstract: Finding an adequate dose of a drug by characterizing the dose-response relationship is a crucial and challenging problem in clinical development. The main concerns in dose-finding studies are identifying the minimum effective dose (MED) in anesthesia studies and the maximum tolerated dose (MTD) in oncology clinical trials. For the estimation of the MED and MTD, we propose two modifications of Firth's logistic regression based on reparametrization: reparametrized Firth's logistic regression (rFLR) and ridge-penalized reparametrized Firth's logistic regression (RrFLR). The proposed methods are designed to directly reduce the small-sample bias of the maximum likelihood estimate of the parameter of interest. In addition, we develop a method for constructing confidence intervals for rFLR and RrFLR using the profile penalized likelihood. In the up-and-down biased-coin design, numerical studies confirm the superior performance of the proposed methods in terms of mean squared error, bias, and the coverage accuracy of confidence intervals.

{"title":"Optimal Cut-Point Selection Methods Under Binary Classification When Subclasses Are Involved.","authors":"Jia Wang, Lili Tian","doi":"10.1002/pst.2413","DOIUrl":"10.1002/pst.2413","url":null,"abstract":"<p><p>In practice, we often encounter binary classification problems where both main classes consist of multiple subclasses. For example, in an ovarian cancer study where biomarkers were evaluated for their accuracy of distinguishing noncancer cases from cancer cases, the noncancer class consists of healthy subjects and benign cases, while the cancer class consists of subjects at both early and late stages. This article aims to provide a large number of optimal cut-point selection methods for such setting. Furthermore, we also study confidence interval estimation of the optimal cut-points. Simulation studies are carried out to explore the performance of the proposed cut-point selection methods as well as confidence interval estimation methods. A real ovarian cancer data set is analyzed using the proposed methods.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"984-1030"},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}