{"title":"Targeting the Optimal Design in Randomized Clinical Trials with Binary Outcomes and No Covariate: Simulation Study","authors":"A. Chambaz, M. J. van der Laan","doi":"10.2202/1557-4679.1310","DOIUrl":"https://doi.org/10.2202/1557-4679.1310","url":null,"abstract":"We undertake here a comprehensive simulation study of the theoretical properties that we derive in a companion article devoted to the asymptotic study of adaptive group sequential designs in the case of randomized clinical trials (RCTs) with binary treatment, binary outcome and no covariate. By adaptive design, we mean in this setting an RCT design that allows the investigator to dynamically modify its course through data-driven adjustment of the randomization probability based on data accrued so far, without negatively impacting the statistical integrity of the trial. By adaptive group sequential design, we refer to the fact that group sequential testing methods can be equally well applied on top of adaptive designs. The simulation study validates the theory. It notably shows in the estimation framework that the confidence intervals we obtain achieve the desired coverage even for moderate sample sizes. In addition, it shows in the testing framework that type I error control at the prescribed level is guaranteed and that all sampling procedures suffer only a very slight increase of the type II error. A three-sentence take-home message is “Adaptive designs do learn the targeted optimal design, and inference and testing can be carried out under adaptive sampling as they would under iid sampling from the targeted optimal randomization probability. In particular, adaptive designs achieve the same efficiency as the fixed oracle design. 
This is confirmed by a simulation study, at least for moderate or large sample sizes, across a large collection of targeted randomization probabilities.”","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2011-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1310","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68718288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relative Risk Estimation in Randomized Controlled Trials: A Comparison of Methods for Independent Observations","authors":"L. Yelland, A. Salter, Philip Ryan","doi":"10.2202/1557-4679.1278","DOIUrl":"https://doi.org/10.2202/1557-4679.1278","url":null,"abstract":"The relative risk is a clinically important measure of the effect of treatment on binary outcomes in randomized controlled trials (RCTs). An adjusted relative risk can be estimated using log binomial regression; however, convergence problems are common with this model. While alternative methods have been proposed for estimating relative risks, comparisons between methods have been limited, particularly in the context of RCTs. We compare ten different methods for estimating relative risks under a variety of scenarios relevant to RCTs with independent observations. Results of a large simulation study show that some methods may fail to overcome the convergence problems of log binomial regression, while others may substantially overestimate the treatment effect or produce inaccurate confidence intervals. Further, conclusions about the effectiveness of treatment may differ depending on the method used. We give recommendations for choosing a method for estimating relative risks in the context of RCTs with independent observations.","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2011-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1278","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of Stationary Signals with Mixed Spectrum","authors":"P. Saavedra, A. Santana-del-Pino, C. N. Hernández-Flores, J. Artiles-Romero, J. J. González-Henríquez","doi":"10.2202/1557-4679.1288","DOIUrl":"https://doi.org/10.2202/1557-4679.1288","url":null,"abstract":"This paper deals with the problem of discrimination between two sets of complex signals generated by stationary processes with both random effects and mixed spectral distributions. The presence of outlier signals and their influence on the classification process is also considered. As an initial input, a feature vector obtained from estimates of the spectral distribution is proposed and used with two different learning machines, namely a single artificial neural network and the LogitBoost classifier. The performance of both methods is evaluated on five simulation studies as well as on a set of real electroencephalogram (EEG) records obtained from both normal subjects and subjects who had experienced epileptic seizures. Of the different classification methods, LogitBoost is shown to be more robust to the presence of outlier signals.","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2011-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1288","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Improved Bland-Altman Method for Concordance Assessment","authors":"Jason J. Z. Liao, R. Capen","doi":"10.2202/1557-4679.1295","DOIUrl":"https://doi.org/10.2202/1557-4679.1295","url":null,"abstract":"It is often necessary to compare two measurement methods in medicine and other experimental sciences. This problem covers a broad range of data with applications arising from many different fields. The Bland-Altman method has been a favorite method for concordance assessment. However, the Bland-Altman approach creates a problem of interpretation for many applications when a mixture of fixed bias, proportional bias and/or proportional error occurs. In this paper, an improved Bland-Altman method is proposed to handle more complicated scenarios in practice. This new approach includes Bland-Altman's approach as its special case. We evaluate concordance by defining an agreement interval for each individual paired observation and assessing the overall concordance. The proposed interval approach is very informative and offers many advantages over existing approaches. Data sets are used to demonstrate the advantages of the new method.","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2011-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1295","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Dunnett-Type Procedure for Multiple Endpoints","authors":"M. Hasler, L. Hothorn","doi":"10.2202/1557-4679.1258","DOIUrl":"https://doi.org/10.2202/1557-4679.1258","url":null,"abstract":"This paper describes a method for comparisons of several treatments with a control, simultaneously for multiple endpoints. These endpoints are assumed to be normally distributed with different scales and variances. An approximate multivariate t-distribution is used to obtain quantiles for test decisions, multiplicity-adjusted p-values, and simultaneous confidence intervals. Simulation results show that this approach controls the family-wise type I error over both the comparisons and the endpoints within an admissible range. The approach is applied to a randomized clinical trial comparing two new sets of extracorporeal circulations with a standard for three primary endpoints. A related R package is available.","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2011-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1258","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rejoinder to Nancy Cook's Comment on \"Measures to Summarize and Compare the Predictive Capacity of Markers\"","authors":"M. Pepe","doi":"10.2202/1557-4679.1280","DOIUrl":"https://doi.org/10.2202/1557-4679.1280","url":null,"abstract":"This is a response to Nancy Cook's Readers' Reaction to \"Measures to Summarize and Compare the Predictive Capacity of Markers.\"","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"6 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2010-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1280","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68718007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content","authors":"Liliana Orellana, A. Rotnitzky, J. Robins","doi":"10.2202/1557-4679.1200","DOIUrl":"https://doi.org/10.2202/1557-4679.1200","url":null,"abstract":"Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. 
In a companion paper in this issue of the journal we provide proofs of the main results.","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"6 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2010-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1200","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Targeting the Optimal Design in Randomized Clinical Trials with Binary Outcomes and No Covariate: Theoretical Study","authors":"A. Chambaz, M. J. van der Laan","doi":"10.2202/1557-4679.1247","DOIUrl":"https://doi.org/10.2202/1557-4679.1247","url":null,"abstract":"This article is devoted to the asymptotic study of adaptive group sequential designs in the case of randomized clinical trials (RCTs) with binary treatment, binary outcome and no covariate. By adaptive design, we mean in this setting an RCT design that allows the investigator to dynamically modify its course through data-driven adjustment of the randomization probability based on data accrued so far, without negatively impacting the statistical integrity of the trial. By adaptive group sequential design, we refer to the fact that group sequential testing methods can be equally well applied on top of adaptive designs. We show that, theoretically, the adaptive design converges almost surely to the targeted unknown randomization scheme. In the estimation framework, we show that our maximum likelihood estimator of the parameter of interest is strongly consistent and satisfies a central limit theorem. We can estimate its asymptotic variance, which is the same as the asymptotic variance it would have under iid sampling from the targeted randomization scheme, had that scheme been known in advance. Consequently, inference can be carried out as if we had resorted to independent and identically distributed (iid) sampling. In the testing framework, we show that the multidimensional t-statistic that we would use under iid sampling still converges to the same canonical distribution under adaptive sampling. Consequently, the same group sequential testing can be carried out as if we had resorted to iid sampling. Furthermore, a comprehensive simulation study that we undertake in a companion article validates the theory. 
A three-sentence take-home message is “Adaptive designs do learn the targeted optimal design, and inference and testing can be carried out under adaptive sampling as they would under iid sampling from the targeted optimal randomization probability. In particular, adaptive designs achieve the same efficiency as the fixed oracle design. This is confirmed by a simulation study, at least for moderate or large sample sizes, across a large collection of targeted randomization probabilities.”","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2010-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1247","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Targeted Maximum Likelihood Based Causal Inference: Part I","authors":"M. J. van der Laan","doi":"10.2202/1557-4679.1211","DOIUrl":"https://doi.org/10.2202/1557-4679.1211","url":null,"abstract":"Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so-called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had, had this intervention been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so-called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment-regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. 
This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor until convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double-robust in the sense that it is consistent if either the initial estimator is consistent or the estimator of the optimal fluctuation function is consistent. The optimal fluctuation function is correctly specified if the conditional distributions of the nodes in the causal graph one intervenes upon are correctly specified. The latter conditional distributions often comprise the so-called treatment and censoring mechanisms. Selection among different targeted maximum likelihood estimators (e.g., indexed by different initial estimators) can be based on loss-based cross-validation such as likelihood-based cross-validation or cross-validation based on another appropriate loss function for the distribution of the data. Some specific loss functions are mentioned in this article. Subsequently, a variety of interesting observations about this targeted maximum likelihood estimation procedure are made. 
This article provides the basis for the subsequent companion Part II-article in which concrete demonstrations for the implementation of the ","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"6 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2010-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1211","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Cumulative Incidences of Dementia and Dementia-Free Death Using a Novel Three-Parameter Logistic Function","authors":"Y. Cheng","doi":"10.2202/1557-4679.1183","DOIUrl":"https://doi.org/10.2202/1557-4679.1183","url":null,"abstract":"Parametric modeling of univariate cumulative incidence functions and logistic models have been studied extensively. However, to the best of our knowledge, there is no study using logistic models to characterize cumulative incidence functions. In this paper, we propose a novel parametric model which is an extension of a widely-used four-parameter logistic function for dose-response curves. The modified model can accommodate various shapes of cumulative incidence functions and be easily implemented using standard statistical software. The simulation studies demonstrate that the proposed model is as efficient as or more efficient than its nonparametric counterpart when it is correctly specified, and outperforms the existing Gompertz model when the underlying cumulative incidence function is sigmoidal. The practical utility of the modified three-parameter logistic model is illustrated using the data from the Cache County Study of dementia.","PeriodicalId":50333,"journal":{"name":"International Journal of Biostatistics","volume":"5 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2009-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2202/1557-4679.1183","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68717440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}