{"title":"An introduction to instrumental variable assumptions, validation and estimation.","authors":"Mette Lise Lousdal","doi":"10.1186/s12982-018-0069-7","DOIUrl":"https://doi.org/10.1186/s12982-018-0069-7","url":null,"abstract":"<p><p>The instrumental variable method has been employed within economics to infer causality in the presence of unmeasured confounding. Emphasising the parallels to randomisation may increase understanding of the underlying assumptions within epidemiology. An instrument is a variable that predicts exposure but, conditional on exposure, shows no independent association with the outcome. The random assignment in trials is an example of what would be expected to be an ideal instrument, but instruments can also be found in observational settings with a naturally varying phenomenon, e.g. geographical variation, physical distance to a facility or physician's preference. The fourth identifying assumption has received less attention, but is essential for the generalisability of estimated effects. The instrument identifies the group of <i>compliers</i> in which exposure is pseudo-randomly assigned, leading to exchangeability with regard to unmeasured confounders. Underlying assumptions can only partially be tested empirically and require subject-matter knowledge. 
Future studies employing instruments should carefully seek to validate all four assumptions, possibly drawing on parallels to randomisation.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2018-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-018-0069-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35782943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple imputation using linked proxy outcome data resulted in important bias reduction and efficiency gains: a simulation study.","authors":"R P Cornish, J Macleod, J R Carpenter, K Tilling","doi":"10.1186/s12982-017-0068-0","DOIUrl":"https://doi.org/10.1186/s12982-017-0068-0","url":null,"abstract":"<p><strong>Background: </strong>When an outcome variable is missing not at random (MNAR: probability of missingness depends on outcome values), estimates of the effect of an exposure on this outcome are often biased. We investigated the extent of this bias and examined whether the bias can be reduced through incorporating proxy outcomes obtained through linkage to administrative data as auxiliary variables in multiple imputation (MI).</p><p><strong>Methods: </strong>Using data from the Avon Longitudinal Study of Parents and Children (ALSPAC) we estimated the association between breastfeeding and IQ (continuous outcome), incorporating linked attainment data (proxies for IQ) as auxiliary variables in MI models. Simulation studies explored the impact of varying the proportion of missing data (from 20 to 80%), the correlation between the outcome and its proxy (0.1-0.9), the strength of the missing data mechanism, and having a proxy variable that was incomplete.</p><p><strong>Results: </strong>Incorporating a linked proxy for the missing outcome as an auxiliary variable reduced bias and increased efficiency in all scenarios, even when 80% of the outcome was missing. Using an incomplete proxy was similarly beneficial. High correlations (> 0.5) between the outcome and its proxy substantially reduced the missing information. Consistent with this, ALSPAC analysis showed inclusion of a proxy reduced bias and improved efficiency. 
Gains with additional proxies were modest.</p><p><strong>Conclusions: </strong>In longitudinal studies with loss to follow-up, incorporating proxies for the study outcome, obtained via linkage to external sources of data, as auxiliary variables in MI models can give practically important bias reduction and efficiency gains when the study outcome is MNAR.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0068-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35682082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flexible semiparametric joint modeling: an application to estimate individual lung function decline and risk of pulmonary exacerbations in cystic fibrosis.","authors":"Dan Li, Ruth Keogh, John P Clancy, Rhonda D Szczesniak","doi":"10.1186/s12982-017-0067-1","DOIUrl":"https://doi.org/10.1186/s12982-017-0067-1","url":null,"abstract":"<p><strong>Background: </strong>Epidemiologic surveillance of lung function is key to clinical care of individuals with cystic fibrosis, but lung function decline is nonlinear and often impacted by acute respiratory events known as pulmonary exacerbations. Statistical models are needed to simultaneously estimate lung function decline while providing risk estimates for the onset of pulmonary exacerbations, in order to identify relevant predictors of declining lung function and understand how these associations could be used to predict the onset of pulmonary exacerbations.</p><p><strong>Methods: </strong>Using longitudinal lung function (FEV<sub>1</sub>) measurements and time-to-event data on pulmonary exacerbations from individuals in the United States Cystic Fibrosis Registry, we implemented a flexible semiparametric joint model consisting of a mixed-effects submodel with regression splines to fit repeated FEV<sub>1</sub> measurements and a time-to-event submodel for possibly censored data on pulmonary exacerbations. We contrasted this approach with methods currently used in epidemiological studies and highlight clinical implications.</p><p><strong>Results: </strong>The semiparametric joint model had the best fit of all models examined based on deviance information criterion. Higher starting FEV<sub>1</sub> implied more rapid lung function decline in both separate and joint models; however, individualized risk estimates for pulmonary exacerbation differed depending upon model type. 
Based on shared parameter estimates from the joint model, which accounts for the nonlinear FEV<sub>1</sub> trajectory, patients with more positive rates of change were less likely to experience a pulmonary exacerbation (HR per one standard deviation increase in FEV<sub>1</sub> rate of change = 0.566, 95% CI 0.516-0.619), and having higher absolute FEV<sub>1</sub> also corresponded to lower risk of having a pulmonary exacerbation (HR per one standard deviation increase in FEV<sub>1</sub> = 0.856, 95% CI 0.781-0.937). At the population level, both submodels indicated significant effects of birth cohort, socioeconomic status and respiratory infections on FEV<sub>1</sub> decline, as well as significant effects of gender, socioeconomic status and birth cohort on pulmonary exacerbation risk.</p><p><strong>Conclusions: </strong>Through a flexible joint-modeling approach, we provide a means to simultaneously estimate lung function trajectories and the risk of pulmonary exacerbations for individual patients; we demonstrate how this approach offers additional insights into the clinical course of cystic fibrosis that were not possible using conventional approaches.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0067-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35219501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial analysis of cluster randomised trials: a systematic review of analysis methods.","authors":"Christopher Jarvis, Gian Luca Di Tanna, Daniel Lewis, Neal Alexander, W John Edmunds","doi":"10.1186/s12982-017-0066-2","DOIUrl":"https://doi.org/10.1186/s12982-017-0066-2","url":null,"abstract":"<p><strong>Background: </strong>Cluster randomised trials (CRTs) often use geographical areas as the unit of randomisation; however, explicit consideration of the location and spatial distribution of observations is rare. In many trials, the location of participants will have little importance; however, in some, especially trials of interventions against infectious diseases, spillover effects arising from participants being located close together may affect trial results. This review aims to identify spatial analysis methods used in CRTs and to improve understanding of the impact of spatial effects on trial results.</p><p><strong>Methods: </strong>A systematic review of CRTs containing spatial methods, defined as methods that account for the structure, location, or relative distances between observations. We searched three sources: the Ovid/Medline, PubMed, and Web of Science databases. Spatial methods were categorised and details of the impact of spatial effects on trial results were recorded.</p><p><strong>Results: </strong>We identified ten papers that met the inclusion criteria, comprising thirteen trials. We found that existing approaches fell into two categories: spatial variables and spatial modelling. The spatial variable approach was most common and involved standard statistical analysis of distance measurements. Spatial modelling is a more sophisticated approach that incorporates the spatial structure of the data within a random effects model. 
Studies tended to demonstrate the importance of accounting for location and distribution of observations in estimating unbiased effects.</p><p><strong>Conclusions: </strong>There have been a few attempts to control and estimate spatial effects within the context of human CRTs, but our overall understanding is limited. Although spatial effects may bias trial results, their consideration was usually a supplementary, rather than primary analysis. Further work is required to evaluate and develop the spatial methodologies relevant to a range of CRTs.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0066-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35447180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decision trees in epidemiological research.","authors":"Ashwini Venkatasubramaniam, Julian Wolfson, Nathan Mitchell, Timothy Barnes, Meghan JaKa, Simone French","doi":"10.1186/s12982-017-0064-4","DOIUrl":"https://doi.org/10.1186/s12982-017-0064-4","url":null,"abstract":"<p><strong>Background: </strong>In many studies, it is of interest to identify population subgroups that are relatively homogeneous with respect to an outcome. The nature of these subgroups can provide insight into effect mechanisms and suggest targets for tailored interventions. However, identifying relevant subgroups can be challenging with standard statistical methods.</p><p><strong>Main text: </strong>We review the literature on decision trees, a family of techniques for partitioning the population, on the basis of covariates, into distinct subgroups that share similar values of an outcome variable. We compare two decision tree methods, the popular Classification and Regression Tree (CART) technique and the newer Conditional Inference Tree (CTree) technique, assessing their performance in a simulation study and using data from the Box Lunch Study, a randomized controlled trial of a portion size intervention. Both CART and CTree identify homogeneous population subgroups and offer improved prediction accuracy relative to regression-based approaches when subgroups are truly present in the data. An important distinction between CART and CTree is that the latter uses a formal statistical hypothesis testing framework in building decision trees, which simplifies the process of identifying and interpreting the final tree model. We also introduce a novel way to visualize the subgroups defined by decision trees. 
This graphical visualization provides a more scientifically meaningful characterization of the identified subgroups.</p><p><strong>Conclusions: </strong>Decision trees are a useful tool for identifying homogeneous subgroups defined by combinations of individual characteristics. While all decision tree techniques generate subgroups, we advocate the use of the newer CTree technique due to its simplicity and ease of interpretation.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0064-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35439732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges in modeling complexity of neglected tropical diseases: a review of dynamics of visceral leishmaniasis in resource limited settings.","authors":"Swati DebRoy, Olivia Prosper, Austin Mishoe, Anuj Mubayi","doi":"10.1186/s12982-017-0065-3","DOIUrl":"https://doi.org/10.1186/s12982-017-0065-3","url":null,"abstract":"<p><strong>Objectives: </strong>Neglected tropical diseases (NTDs) account for a large proportion of the global disease burden, and their control faces several challenges, including diminishing human and financial resources for those distressed by such diseases. Visceral leishmaniasis (VL), the second-largest parasitic killer (after malaria) and an NTD, affects poor populations and causes considerable cost to the affected individuals. Mathematical models can serve as a critical and cost-effective tool for understanding VL dynamics; however, the complex array of socio-economic factors affecting its dynamics needs to be identified and appropriately incorporated within a dynamical modeling framework. This study reviews the literature on vector-borne diseases and collects challenges and successes related to the modeling of transmission dynamics of VL. Possible ways of creating a comprehensive mathematical model are also discussed.</p><p><strong>Methods: </strong>Published literature in three categories is reviewed: (i) studies identifying non-traditional but critical mechanisms for VL transmission in resource-limited regions, (ii) mathematical models used for the dynamics of leishmaniasis and other related vector-borne infectious diseases and (iii) examples of modeling that have the potential to capture identified mechanisms of VL to study its dynamics.</p><p><strong>Results: </strong>This review suggests that VL elimination has not yet been achieved because existing transmission dynamics models for VL fail to capture relevant local socio-economic risk factors. 
This study identifies critical risk factors of VL and distributes them into six categories (atmosphere, access, availability, awareness, adherence, and accedence). The study also suggests novel quantitative models, parts of which are borrowed from models of other, non-neglected diseases, for incorporating these factors and using them to understand VL dynamics and evaluate control programs for achieving VL elimination in a resource-limited environment.</p><p><strong>Conclusions: </strong>Controlling VL is expensive for local communities in endemic countries, where individuals remain in the vicious cycle of disease and poverty. Smarter public investment in control programs would not only decrease the VL disease burden but would also help to alleviate poverty. However, dynamical models are necessary to evaluate intervention strategies and formulate a cost-effective optimal policy for eradication of VL.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0065-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35535290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Randomised and non-randomised studies to estimate the effect of community-level public health interventions: definitions and methodological considerations.","authors":"Wolf-Peter Schmidt","doi":"10.1186/s12982-017-0063-5","DOIUrl":"https://doi.org/10.1186/s12982-017-0063-5","url":null,"abstract":"<p><strong>Background: </strong>The preferred method to evaluate public health interventions delivered at the level of whole communities is the cluster randomised trial (CRT). The practical limitations of CRTs and the need for alternative methods continue to be debated. There is no consensus on how to classify study designs to evaluate interventions, and how different design features are related to the strength of evidence.</p><p><strong>Analysis: </strong>This article proposes that most study designs for the evaluation of cluster-level interventions fall into four broad categories: the CRT, the non-randomised cluster trial (NCT), the controlled before-and-after study (CBA), and the before-and-after study without control (BA). A CRT needs to fulfil two basic criteria: (1) the intervention is allocated at random; (2) there are sufficient clusters to allow a statistical between-arm comparison. In a NCT, statistical comparison is made across trial arms as in a CRT, but treatment allocation is not random. The defining feature of a CBA is that intervention and control arms are not compared directly, usually because there are insufficient clusters in each arm to allow a statistical comparison. Rather, baseline and follow-up measures of the outcome of interest are compared in the intervention arm, and separately in the control arm. A BA is a CBA without a control group.</p><p><strong>Conclusion: </strong>Each design may provide useful or misleading evidence. A precise baseline measurement of the outcome of interest is critical for causal inference in all studies except CRTs. 
Apart from statistical considerations, the exploration of pre/post trends in the outcome allows a more transparent discussion of study weaknesses than is possible in non-randomised studies without a baseline measure.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0063-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35355971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model checking in multiple imputation: an overview and case study.","authors":"Cattram D Nguyen, John B Carlin, Katherine J Lee","doi":"10.1186/s12982-017-0062-6","DOIUrl":"https://doi.org/10.1186/s12982-017-0062-6","url":null,"abstract":"<p><strong>Background: </strong>Multiple imputation has become very popular as a general-purpose method for handling missing data. The validity of multiple-imputation-based analyses relies on the use of an appropriate model to impute the missing values. Despite the widespread use of multiple imputation, there are few guidelines available for checking imputation models.</p><p><strong>Analysis: </strong>In this paper, we provide an overview of currently available methods for checking imputation models. These include graphical checks and numerical summaries, as well as simulation-based methods such as posterior predictive checking. These model checking techniques are illustrated using an analysis affected by missing data from the Longitudinal Study of Australian Children.</p><p><strong>Conclusions: </strong>As multiple imputation becomes further established as a standard approach for handling missing data, it will become increasingly important that researchers employ appropriate model checking approaches to ensure that reliable results are obtained when using this method.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0062-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35308430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causality in cancer research: a journey through models in molecular epidemiology and their philosophical interpretation.","authors":"Paolo Vineis, Phyllis Illari, Federica Russo","doi":"10.1186/s12982-017-0061-7","DOIUrl":"https://doi.org/10.1186/s12982-017-0061-7","url":null,"abstract":"<p><p>In recent decades, Systems Biology (including cancer research) has been driven by technology, statistical modelling and bioinformatics. In this paper we try to bring biological and philosophical thinking back. We thus aim to make different traditions of thought compatible: (a) causality in epidemiology and in philosophical theorizing, notably the \"sufficient-component-cause framework\" and the \"mark transmission\" approach; (b) new acquisitions about disease pathogenesis, e.g. the \"branched model\" in cancer, and the role of biomarkers in this process; (c) the burgeoning of omics research, with a large number of \"signals\" and of associations that need to be interpreted. In the paper we first summarize the current views on carcinogenesis, and then explore the relevance of current philosophical interpretations of \"cancer causes\". We try to offer a unifying framework to incorporate biomarkers and omic data into causal models, referring to a position called \"evidential pluralism\". According to this view, causal reasoning is based on both \"evidence of difference-making\" (e.g. associations) and on \"evidence of underlying biological mechanisms\". We conceptualize the way scientists detect and trace signals in terms of <i>information transmission</i>, which is a generalization of the mark transmission theory developed by the philosopher Wesley Salmon. Our approach helps us conceptualize how heterogeneous factors, such as micro- and macro-biological and psycho-social factors, are causally linked. 
This is important not only to understand cancer etiology, but also to design public health policies that target the right <i>causal</i> factors at the macro-level.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0061-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35073207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On hazard ratio estimators by proportional hazards models in matched-pair cohort studies.","authors":"Tomohiro Shinozaki, Mohammad Ali Mansournia, Yutaka Matsuyama","doi":"10.1186/s12982-017-0060-8","DOIUrl":"https://doi.org/10.1186/s12982-017-0060-8","url":null,"abstract":"<p><strong>Background: </strong>In matched-pair cohort studies with censored events, the hazard ratio (HR) may be of main interest. However, it is less well known in the epidemiologic literature that the partial maximum likelihood estimator of a common HR conditional on matched pairs can be written in a simple form, namely, the ratio of the numbers of two pair-types. Moreover, because the HR is a noncollapsible measure and its constancy across matched pairs is a restrictive assumption, the marginal HR, as an \"average\" HR, may be targeted in analysis more often than the conditional HR.</p><p><strong>Methods: </strong>Based on its simple expression, we provided an alternative interpretation of the common HR estimator as the odds of the matched-pair analog of the C-statistic for censored time-to-event data. Through simulations assuming proportional hazards within matched pairs, the influence of various censoring patterns on the marginal and common HR estimators of unstratified and stratified proportional hazards models, respectively, was evaluated. The methods were applied to a real propensity-score matched dataset from the Rotterdam tumor bank of primary breast cancer.</p><p><strong>Results: </strong>We showed that stratified models unbiasedly estimated a common HR under proportional hazards within matched pairs. However, the marginal HR estimator with a robust variance estimator lacks interpretation as an \"average\" marginal HR even if censoring is unconditionally independent of the event, unless no censoring occurs or no exposure effect is present. 
Furthermore, exposure-dependent censoring biased the marginal HR estimator away from both the conditional HR and an \"average\" marginal HR, irrespective of whether an exposure effect was present. From the matched Rotterdam dataset, we estimated the HR for relapse-free survival for absence versus presence of chemotherapy; estimates (95% confidence interval) were 1.47 (1.18-1.83) for the common HR and 1.33 (1.13-1.57) for the marginal HR.</p><p><strong>Conclusion: </strong>The simple expression of the common HR estimator is a useful summary of exposure effect that is less sensitive to censoring patterns than the marginal HR estimator. The common and marginal HR estimators, each relying on distinct assumptions and interpretations, are complementary alternatives to each other.</p>","PeriodicalId":39896,"journal":{"name":"Emerging Themes in Epidemiology","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s12982-017-0060-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35072015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}