Validity and power of minimization algorithm in longitudinal analysis of clinical trials
Hua Weng, Randall Bateman, John C. Morris, and Chengjie Xiong
Biostatistics and Epidemiology, 1(1): 59-77, 2017. doi:10.1080/24709360.2017.1331822

Abstract: We studied the validity of longitudinal statistical inferences in clinical trials that use minimization, a dynamic randomization algorithm designed to minimize treatment imbalance on prognostic factors. Repeated-measures analysis of covariance and random intercept and slope models were used to simulate longitudinal clinical trials randomized by minimization or simple randomization. The simulations represented a wide range of analyses in real-world trials, including missing data caused by dropouts, unequal allocation of treatment arms, and efficacy analyses on either the original outcome or its change from baseline. We also analyzed the database from the Dominantly Inherited Alzheimer Network (DIAN) and used the estimated parameters to simulate the ongoing DIAN trial. Our analyses demonstrated that minimization had conservative type I errors when the prognostic factor used in the minimization algorithm had a relatively strong correlation with the outcome and was not adjusted for in the analyses. In contrast, tests that adjusted for the prognostic factor as a covariate had type I errors close to the nominal significance level. In many simulation scenarios, the adjusted tests under minimization had slightly greater statistical power than those under simple randomization, whereas in the remaining scenarios the power of the adjusted tests under the two randomization methods was almost indistinguishable.
{"title":"Joint modeling of longitudinal cholesterol measurements and time to onset of dementia in an elderly African American Cohort","authors":"Shanshan Li, Mengjie Zheng, Sujuan Gao","doi":"10.1080/24709360.2017.1381300","DOIUrl":"https://doi.org/10.1080/24709360.2017.1381300","url":null,"abstract":"ABSTRACT This paper presents a statistical method for analyzing the association between longitudinal cholesterol measurements and the timing of onset of dementia. The proposed approach jointly models the longitudinal and survival processes for each individual on the basis of a shared random effect, where a linear mixed effects model is assumed for the longitudinal component and an extended Cox regression model is employed for the survival component. A dynamic prediction model is built based on the joint model, which provides prediction of the conditional survival probabilities at different time points using available longitudinal measurements as well as baseline characteristics. We apply our method to the Indianapolis-Ibadan Dementia project, a 20-year study of dementia in elderly African Americans living in Indianapolis, Indiana. We find that with baseline covariates and comorbidities adjusted, the risk of dementia decreases by 1% per one mg/dl increase in total cholesterol. Therefore we conclude that, in a healthy cohort of African Americans aged 65 years or more, high late-life cholesterol level is associated with lower incidence of dementia.","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"1 1","pages":"148 - 160"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2017.1381300","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45026971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using time-varying quantile regression approaches to model the influence of prenatal and infant exposures on childhood growth","authors":"Ying Wei, Xinran Ma, Xinhua Liu, M. Terry","doi":"10.1080/24709360.2017.1358137","DOIUrl":"https://doi.org/10.1080/24709360.2017.1358137","url":null,"abstract":"ABSTRACT For many applications, it is valuable to assess whether the effects of exposures over time vary by quantiles of the outcome. We have previously shown that quantile methods complement the traditional mean-based analyses, and are useful for studies of body size. Here, we extended previous work to time-varying quantile associations. Using data from over 18,000 children in the U.S. Collaborative Perinatal Project, we investigated the impact of maternal pre-pregnancy body mass index (BMI), maternal pregnancy weight gain, placental weight, and birth weight on childhood body size measured 4 times between 3 months and 7 years, using both parametric and non-parametric time-varying quantile regressions. Using our proposed model assessment tool, we found that non-parametric models fit the childhood growth data better than the parametric approaches. We also observed that quantile analysis resulted in difference inferences than the conditional mean models in three of the four constructs (maternal per-pregancy BMI, maternal weight gain, and placental weight). Overall, these results suggest the utility of applying time-varying quantile models for longitudinal outcome data. They also suggest that in the studies of body size, merely modelling the conditional mean may lead to incomplete summary of the data.","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"1 1","pages":"133 - 147"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2017.1358137","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48816696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of Concordance Probability Estimate to Predict Conversion from Mild Cognitive Impairment to Alzheimer's Disease.","authors":"Xiaoxia Han, Yilong Zhang, Yongzhao Shao","doi":"10.1080/24709360.2017.1342187","DOIUrl":"https://doi.org/10.1080/24709360.2017.1342187","url":null,"abstract":"<p><p>Subjects with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). Identifying MCI subjects who have high progression risk to AD is important in clinical management. Existing risk prediction models of AD among MCI subjects generally use either the AUC or Harrell's C-statistic to evaluate predictive accuracy. AUC is aimed at binary outcome and Harrell's C-statistic depends on the unknown censoring distribution. Gönen & Heller's K-index, also known as concordance probability estimate (CPE), is another measure of overall predictive accuracy for Cox proportional hazards (PH) models, which does not depend on censoring distribution. As a comprehensive example, using Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we built a Cox PH model to predict the conversion from MCI to AD where the prognostic accuracy was evaluated using K-index.</p>","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"1 1","pages":"105-118"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2017.1342187","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37203148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial","authors":"Xiaohong Zhou","doi":"10.1080/24709360.2016.1198464","DOIUrl":"https://doi.org/10.1080/24709360.2016.1198464","url":null,"abstract":"Dear Readers, We are delighted to announce the launch of Biostatistics & Epidemiology as the official journal of the International Biometric Society Chinese region. The International Biometric Society Chinese Region was founded in 2012, with the support of the International Biometric Society, and has grown strongly since then, becoming a focus of biostatistical and epidemiological research in China and beyond. The growth of this community has reached the point where the launch of a dedicated and top-quality peerreviewed research journal is necessary and warranted. Below, we outline the mission and scope of the Journal, along with the review process. The Journal aims to provide a platform for the dissemination of new statistical methods and the promotion of good analytical practices in biomedical investigation and epidemiology. The Journal has four main sections:","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"1 1","pages":"1 - 2"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2016.1198464","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49507957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tutorial on kernel density estimation and recent advances","authors":"Yen-Chi Chen","doi":"10.1080/24709360.2017.1396742","DOIUrl":"https://doi.org/10.1080/24709360.2017.1396742","url":null,"abstract":"ABSTRACT This tutorial provides a gentle introduction to kernel density estimation (KDE) and recent advances regarding confidence bands and geometric/topological features. We begin with a discussion of basic properties of KDE: the convergence rate under various metrics, density derivative estimation, and bandwidth selection. Then, we introduce common approaches to the construction of confidence intervals/bands, and we discuss how to handle bias. Next, we talk about recent advances in the inference of geometric and topological features of a density function using KDE. Finally, we illustrate how one can use KDE to estimate a cumulative distribution function and a receiver operating characteristic curve. We provide R implementations related to this tutorial at the end.","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"1 1","pages":"161 - 187"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2017.1396742","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41909385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of progressive multi-state models with misclassified states: likelihood and pairwise likelihood methods","authors":"G. Yi, Wenqing He, Feng He","doi":"10.1080/24709360.2017.1359356","DOIUrl":"https://doi.org/10.1080/24709360.2017.1359356","url":null,"abstract":"ABSTRACT Multi-state models are commonly used in studies of disease progression. Methods developed under this framework, however, are often challenged by misclassification in states. In this article, we investigate issues concerning continuous-time progressive multi-state models with state misclassification. We develop inference methods using both the likelihood and pairwise likelihood methods that are based on joint modelling of the progressive and misclassification processes. We assess the performance of the proposed methods by simulation studies, and illustrate their use by the application to the data arising from a coronary allograft vasculopathy study.","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"1 1","pages":"119 - 132"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2017.1359356","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48115978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-state models and missing covariate data: Expectation-Maximization algorithm for likelihood estimation
Wenjie Lou, Lijie Wan, Erin L. Abner, David W. Fardo, Hiroko H. Dodge, and Richard J. Kryscio
Biostatistics and Epidemiology, 1(1): 20-35, 2017. doi:10.1080/24709360.2017.1306156

Abstract: Multi-state models have been widely used to analyze longitudinal event-history data obtained in medical and epidemiological studies. The tools and methods developed recently in this area require completely observed data. However, missing data in variables of interest are very common in practice and have been an issue in applications. We propose a type of EM algorithm, which handles missingness in multiple binary covariates efficiently, for multi-state model applications. Simulation studies show that the EM algorithm performs well for both missing completely at random (MCAR) and missing at random (MAR) covariate data. We apply the method to a longitudinal aging and cognition study dataset, the Klamath Exceptional Aging Project (KEAP), whose data were collected at Oregon Health & Science University and integrated into the Statistical Models of Aging and Risk of Transition (SMART) database at the University of Kentucky.
Linear Combinations of Multiple Outcome Measures to Improve the Power of Efficacy Analysis: Application to Clinical Trials on Early Stage Alzheimer Disease
Chengjie Xiong, Jingqin Luo, John C. Morris, and Randall Bateman
Biostatistics and Epidemiology, 1(1): 36-58, 2017. doi:10.1080/24709360.2017.1331821

Abstract: Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at these early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model for repeated measures to model disease progression over time on multiple efficacy outcomes, and we derive the optimum weights for combining the multiple outcome measures by minimizing the sample size required to adequately power the trial. A cross-validation simulation study is conducted to assess the accuracy of the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights for combining multiple outcome measures can be estimated accurately, and that, compared with the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the trial.