{"title":"Balance diagnostics in propensity score analysis following multiple imputation: A new method","authors":"Sevinc Puren Yucel Karakaya, Ilker Unal","doi":"10.1002/pst.2389","DOIUrl":"https://doi.org/10.1002/pst.2389","url":null,"abstract":"The combination of propensity score analysis and multiple imputation has been prominent in epidemiological research in recent years. However, studies on the evaluation of balance in this combination are limited. In this paper, we propose a new method for assessing balance in propensity score analysis following multiple imputation. A simulation study was conducted to evaluate the performance of balance assessment methods (Leyrat's, Leite's, and new method). Simulated scenarios varied regarding the presence of missing data in the control or treatment and control group, and the imputation model with/without outcome. Leyrat's method was more biased in all the studied scenarios. Leite's method and the combine method yielded balanced results with lower mean absolute difference, regardless of whether the outcome was included in the imputation model or not. Leyrat's method had a higher false positive ratio and Leite's and combine method had higher specificity and accuracy, especially when the outcome was not included in the imputation model. According to simulation results, most of time, Leyrat's method and Leite's method contradict with each other on appraising the balance. This discrepancy can be solved using new combine method.","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"241 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140563007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A practical guide to the appropriate analysis of eGFR data over time: A simulation study","authors":"Todd DeVries, Kevin J. Carroll, Sandra A. Lewis","doi":"10.1002/pst.2381","DOIUrl":"https://doi.org/10.1002/pst.2381","url":null,"abstract":"In several therapeutic areas, including chronic kidney disease (CKD) and immunoglobulin A nephropathy (IgAN), there is a growing interest in how best to analyze estimated glomerular filtration rate (eGFR) data over time in randomized clinical trials including how to best accommodate situations where the rate of change is not anticipated to be linear over time, often due to possible short term hemodynamic effects of certain classes of interventions. In such situations, concerns have been expressed by regulatory authorities that the common application of single slope analysis models may induce Type I error inflation. This article aims to offer practical advice and guidance, including SAS codes, on the statistical methodology to be employed in an eGFR rate of change analysis and offers guidance on trial design considerations for eGFR endpoints. A two‐slope statistical model for eGFR data over time is proposed allowing for an analysis to simultaneously evaluate short term acute effects and long term chronic effects. A simulation study was conducted under a range of credible null and alternative hypotheses to evaluate the performance of the two‐slope model in comparison to commonly used single slope random coefficients models as well as to non‐slope based analyses of change from baseline or time normalized area under the curve (TAUC). Importantly, and contrary to preexisting concerns, these simulations demonstrate the absence of alpha inflation associated with the use of single or two‐slope random coefficient models, even when such models are misspecified, and highlight that any concern regarding model misspecification relates to power and not to lack of Type I error control.","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":"2013 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140563098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synergy detection: A practical guide to statistical assessment of potential drug combinations.","authors":"Elli Makariadou, Xuechen Wang, Nicholas Hein, Negera W Deresa, Kathy Mutambanengwe, Bie Verbist, Olivier Thas","doi":"10.1002/pst.2383","DOIUrl":"https://doi.org/10.1002/pst.2383","url":null,"abstract":"<p><p>Combination treatments have been of increasing importance in drug development across therapeutic areas to improve treatment response, minimize the development of resistance, and/or minimize adverse events. Pre-clinical in-vitro combination experiments aim to explore the potential of such drug combinations during drug discovery by comparing the observed effect of the combination with the expected treatment effect under the assumption of no interaction (i.e., null model). This tutorial will address important design aspects of such experiments to allow proper statistical evaluation. Additionally, it will highlight the Biochemically Intuitive Generalized Loewe methodology (BIGL R package available on CRAN) to statistically detect deviations from the expectation under different null models. A clear advantage of the methodology is the quantification of the effect sizes, together with confidence interval while controlling the directional false coverage rate. Finally, a case study will showcase the workflow in analyzing combination experiments.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140336499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Statistical approaches to evaluate in vitro dissolution data against proposed dissolution specifications.","authors":"Fasheng Li, Beverly Nickerson, Les Van Alstine, Ke Wang","doi":"10.1002/pst.2379","DOIUrl":"https://doi.org/10.1002/pst.2379","url":null,"abstract":"<p><p>In vitro dissolution testing is a regulatory required critical quality measure for solid dose pharmaceutical drug products. Setting the acceptance criteria to meet compendial criteria is required for a product to be filed and approved for marketing. Statistical approaches for analyzing dissolution data, setting specifications and visualizing results could vary according to product requirements, company's practices, and scientific judgements. This paper provides a general description of the steps taken in the evaluation and setting of in vitro dissolution specifications at release and on stability.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140143994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A dynamic power prior approach to non-inferiority trials for normal means.","authors":"Francesco Mariani, Fulvio De Santis, Stefania Gubbiotti","doi":"10.1002/pst.2349","DOIUrl":"10.1002/pst.2349","url":null,"abstract":"<p><p>Non-inferiority trials compare new experimental therapies to standard ones (active control). In these experiments, historical information on the control treatment is often available. This makes Bayesian methodology appealing since it allows a natural way to exploit information from past studies. In the present paper, we suggest the use of previous data for constructing the prior distribution of the control effect parameter. Specifically, we consider a dynamic power prior that possibly allows to discount the level of borrowing in the presence of heterogeneity between past and current control data. The discount parameter of the prior is based on the Hellinger distance between the posterior distributions of the control parameter based, respectively, on historical and current data. We develop the methodology for comparing normal means and we handle the unknown variance assumption using MCMC. We also provide a simulation study to analyze the proposed test in terms of frequentist size and power, as it is usually requested by regulatory agencies. Finally, we investigate comparisons with some existing methods and we illustrate an application to a real case study.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"242-256"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"107591987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequentist and Bayesian tolerance intervals for setting specification limits for left-censored gamma distributed drug quality attributes.","authors":"Richard O Montes","doi":"10.1002/pst.2344","DOIUrl":"10.1002/pst.2344","url":null,"abstract":"<p><p>Tolerance intervals from quality attribute measurements are used to establish specification limits for drug products. Some attribute measurements may be below the reporting limits, that is, left-censored data. When data has a long, right-skew tail, a gamma distribution may be applicable. This paper compares maximum likelihood estimation (MLE) and Bayesian methods to estimate shape and scale parameters of censored gamma distributions and to calculate tolerance intervals under varying sample sizes and extents of censoring. The noninformative reference prior and the maximal data information prior (MDIP) are used to compare the impact of prior choice. Metrics used are bias and root mean square error for the parameter estimation and average length and confidence coefficient for the tolerance interval evaluation. It will be shown that Bayesian method using a reference prior overall performs better than MLE for the scenarios evaluated. When sample size is small, the Bayesian method using MDIP yields conservatively too wide tolerance intervals that are unsuitable basis for specification setting. The metrics for all methods worsened with increasing extent of censoring but improved with increasing sample size, as expected. This study demonstrates that although MLE is relatively simple and available in user-friendly statistical software, it falls short in accurately and precisely producing tolerance limits that maintain the stated confidence depending on the scenario. The Bayesian method using noninformative prior, even though computationally intensive and requires considerable statistical programming, produces tolerance limits which are practically useful for specification setting. Real-world examples are provided to illustrate the findings from the simulation study.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"168-184"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49691915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probability of success and group sequential designs.","authors":"Andrew P Grieve","doi":"10.1002/pst.2346","DOIUrl":"10.1002/pst.2346","url":null,"abstract":"<p><p>In this article, I extend the use of probability of success calculations, previously developed for fixed sample size studies to group sequential designs (GSDs) both for studies planned to be analyzed by standard frequentist techniques or Bayesian approaches. The structure of GSDs lends itself to sequential learning which in turn allows us to consider how knowledge about the result of an interim analysis can influence our assessment of the study's probability of success. In this article, I build on work by Temple and Robertson who introduced the idea of conditional probability of success, an idea which I also treated in a recent monograph.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"185-203"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71425793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of duration of follow-up and lag in data collection on the performance of adaptive clinical trials.","authors":"Anders Granholm, Theis Lange, Michael O Harhay, Aksel Karl Georg Jensen, Anders Perner, Morten Hylander Møller, Benjamin Skov Kaas-Hansen","doi":"10.1002/pst.2342","DOIUrl":"10.1002/pst.2342","url":null,"abstract":"<p><p>Different combined outcome-data lags (follow-up durations plus data-collection lags) may affect the performance of adaptive clinical trial designs. We assessed the influence of different outcome-data lags (0-105 days) on the performance of various multi-stage, adaptive trial designs (2/4 arms, with/without a common control, fixed/response-adaptive randomisation) with undesirable binary outcomes according to different inclusion rates (3.33/6.67/10 patients/day) under scenarios with no, small, and large differences. Simulations were conducted under a Bayesian framework, with constant stopping thresholds for superiority/inferiority calibrated to keep type-1 error rates at approximately 5%. We assessed multiple performance metrics, including mean sample sizes, event counts/probabilities, probabilities of conclusiveness, root mean squared errors (RMSEs) of the estimated effect in the selected arms, and RMSEs between the analyses at the time of stopping and the final analyses including data from all randomised patients. Performance metrics generally deteriorated when the proportions of randomised patients with available data were smaller due to longer outcome-data lags or faster inclusion, that is, mean sample sizes, event counts/probabilities, and RMSEs were larger, while the probabilities of conclusiveness were lower. Performance metric impairments with outcome-data lags ≤45 days were relatively smaller compared to those occurring with ≥60 days of lag. For most metrics, the effects of different outcome-data lags and lower proportions of randomised patients with available data were larger than those of different design choices, for example, the use of fixed versus response-adaptive randomisation. Increased outcome-data lag substantially affected the performance of adaptive trial designs. Trialists should consider the effects of outcome-data lags when planning adaptive trials.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"138-150"},"PeriodicalIF":1.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10935606/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41208637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An illness-death multistate model to implement delta adjustment and reference-based imputation with time-to-event endpoints.","authors":"Alberto García-Hernandez, Teresa Pérez, María Del Carmen Pardo, Dimitris Rizopoulos","doi":"10.1002/pst.2348","DOIUrl":"10.1002/pst.2348","url":null,"abstract":"<p><p>With a treatment policy strategy, therapies are evaluated regardless of the disturbance caused by intercurrent events (ICEs). Implementing this estimand is challenging if subjects are not followed up after the ICE. This circumstance can be dealt with using delta adjustment (DA) or reference-based (RB) imputation. In the survival field, DA and RB imputation have been researched so far using multiple imputation (MI). Here, we present a fully analytical solution. We use the illness-death multistate model with the following transitions: (a) from the initial state to the event of interest, (b) from the initial state to the ICE, and (c) from the ICE to the event. We estimate the intensity function of transitions (a) and (b) using flexible parametric survival models. Transition (c) is assumed unobserved but identifiable using DA or RB imputation assumptions. Various rules have been considered: no ICE effect, DA under proportional hazards (PH) or additive hazards (AH), jump to reference (J2R), and (either PH or AH) copy increment from reference. We obtain the marginal survival curve of interest by calculating, via numerical integration, the probability of transitioning from the initial state to the event of interest regardless of having passed or not by the ICE state. We use the delta method to obtain standard errors (SEs). Finally, we quantify the performance of the proposed estimator through simulations and compare it against MI. Our analytical solution is more efficient than MI and avoids SE misestimation-a known phenomenon associated with Rubin's variance equation.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"219-241"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71522338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditional power and information fraction calculations at an interim analysis for random coefficient models.","authors":"Sandra A Lewis, Kevin J Carroll, Todd DeVries, Jonathan Barratt","doi":"10.1002/pst.2345","DOIUrl":"10.1002/pst.2345","url":null,"abstract":"<p><p>Random coefficient (RC) models are commonly used in clinical trials to estimate the rate of change over time in longitudinal data. Trials utilizing a surrogate endpoint for accelerated approval with a confirmatory longitudinal endpoint to show clinical benefit is a strategy implemented across various therapeutic areas, including immunoglobulin A nephropathy. Understanding conditional power (CP) and information fraction calculations of RC models may help in the design of clinical trials as well as provide support for the confirmatory endpoint at the time of accelerated approval. This paper provides calculation methods, with practical examples, for determining CP at an interim analysis for a RC model with longitudinal data, such as estimated glomerular filtration rate (eGFR) assessments to measure rate of change in eGFR slope.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"276-283"},"PeriodicalIF":1.5,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71425792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}