{"title":"Editorial Statistics","authors":"M. Ristić, M. Duijn, Nan Geloven","doi":"10.1007/s10679-006-6982-6","DOIUrl":"https://doi.org/10.1007/s10679-006-6982-6","url":null,"abstract":"","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"28 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87725277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian model selection for multilevel mediation models","authors":"O. Ariyo, E. Lesaffre, G. Verbeke, M. Huisman, Judith Rijnhart, Martijn Heymans, J. Twisk","doi":"10.1111/stan.12256","DOIUrl":"https://doi.org/10.1111/stan.12256","url":null,"abstract":"Mediation analysis is often used to explore the complex relationship between two variables through a third mediating variable. This paper aims to illustrate the performance of the deviance information criterion, the pseudo‐Bayes factor, and the Watanabe–Akaike information criterion in selecting the appropriate multilevel mediation model. Our focus will be on comparing the conditional criteria (given random effects) versus the marginal criteria (averaged over random effects) in this respect. Most of the previous work on multilevel mediation models fails to report the poor behavior of the conditional criteria. We demonstrate here the superiority of the marginal version of the selection criteria over their conditional counterpart in the mediated longitudinal settings through simulation studies and via an application to data from the Longitudinal Aging Study Amsterdam. In addition, we demonstrate the usefulness of our self‐written R function for multilevel mediation models.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"49 1","pages":"219 - 235"},"PeriodicalIF":1.5,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75993438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Competing risks regression for clustered survival data via the marginal additive subdistribution hazards model","authors":"Xinyuan Chen, D. Esserman, Fan Li","doi":"10.1111/stan.12317","DOIUrl":"https://doi.org/10.1111/stan.12317","url":null,"abstract":"A population‐averaged additive subdistribution hazards model is proposed to assess the marginal effects of covariates on the cumulative incidence function and to analyze correlated failure time data subject to competing risks. This approach extends the population‐averaged additive hazards model by accommodating potentially dependent censoring due to competing events other than the event of interest. Assuming an independent working correlation structure, an estimating equations approach is outlined to estimate the regression coefficients and a new sandwich variance estimator is proposed. The proposed sandwich variance estimator accounts for both the correlations between failure times and between the censoring times, and is robust to misspecification of the unknown dependency structure within each cluster. We further develop goodness‐of‐fit tests to assess the adequacy of the additive structure of the subdistribution hazards for the overall model and each covariate. Simulation studies are conducted to investigate the performance of the proposed methods in finite samples. We illustrate our methods using data from the STrategies to Reduce Injuries and Develop confidence in Elders (STRIDE) trial.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"58 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90297672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint probabilities under expected value constraints, transportation problems, maximum entropy in the mean","authors":"H. Gzyl, Silvia Mayoral","doi":"10.1111/stan.12314","DOIUrl":"https://doi.org/10.1111/stan.12314","url":null,"abstract":"There are interesting extensions of the problem of determining a joint probability with known marginals. On the one hand, one may impose size constraints on the joint probabilities. On the other, one may impose additional constraints like the expected values of known random variables. If we think of the marginal probabilities as demands or supplies, and of the joint probability as the fraction of the supplies to be shipped from the production sites to the demand sites, instead of joint probabilities we can think of transportation policies. Clearly, fixing the cost of a transportation policy is equivalent to an integral constraint upon the joint probability. We will show how to solve the cost constrained transportation problem by means of the method of maximum entropy in the mean. We shall also show how this approach leads to an interior point like method to solve the associated linear programming problem. We shall also investigate some geometric structure of the space of transportation policies, or joint probabilities or pixel space, using a Riemannian structure associated with the dual of the entropy used to determine bounds between probabilities or between transportation policies.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"20 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2021-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81947429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Logistic or not Logistic?","authors":"J. Allison, B. Ebner, M. Smuts","doi":"10.1111/stan.12292","DOIUrl":"https://doi.org/10.1111/stan.12292","url":null,"abstract":"We propose a new class of goodness‐of‐fit tests for the logistic distribution based on a characterization related to the density approach in the context of Stein's method. This characterization‐based test is a first of its kind for the logistic distribution. The asymptotic null distribution of the test statistic is derived and it is shown that the test is consistent against fixed alternatives. The finite sample power performance of the newly proposed class of tests is compared to various existing tests by means of a Monte Carlo study. It is found that this new class of tests is especially powerful when the alternative distributions are heavy tailed, like Student's t and Cauchy, or for skew alternatives such as the log‐normal, gamma and chi‐square distributions.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"74 1","pages":"429 - 443"},"PeriodicalIF":1.5,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80832521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bootstrap for integer‐valued GARCH(p, q) processes","authors":"M. Neumann","doi":"10.1111/stan.12238","DOIUrl":"https://doi.org/10.1111/stan.12238","url":null,"abstract":"We consider integer‐valued processes with a linear or nonlinear generalized autoregressive conditional heteroscedastic (GARCH) model structure, where the count variables given the past follow a Poisson distribution. We show that a contraction condition imposed on the intensity function yields a contraction property of the Markov kernel of the process. This allows almost effortless proofs of the existence and uniqueness of a stationary distribution as well as of absolute regularity of the count process. As our main result, we construct a coupling of the original process and a model‐based bootstrap counterpart. Using a contraction property of the Markov kernel of the coupled process we obtain bootstrap consistency for different types of statistics.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"7 1","pages":"343 - 363"},"PeriodicalIF":1.5,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84383409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Goodness‐of‐fit tests for Poisson count time series based on the Stein–Chen identity","authors":"Boris Aleksandrov, C. Weiß, C. Jentsch","doi":"10.1111/stan.12252","DOIUrl":"https://doi.org/10.1111/stan.12252","url":null,"abstract":"To test the null hypothesis of a Poisson marginal distribution, test statistics based on the Stein–Chen identity are proposed. For a wide class of Poisson count time series, the asymptotic distribution of different types of Stein–Chen statistics is derived, also if multiple statistics are jointly applied. The performance of the tests is analyzed with simulations, as well as the question which Stein–Chen functions should be used for which alternative. Illustrative data examples are presented, and possible extensions of the novel Stein–Chen approach are discussed as well.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"107 1","pages":"35 - 64"},"PeriodicalIF":1.5,"publicationDate":"2021-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88392751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying crime generators and spatially overlapping high‐risk areas through a nonlinear model: A comparison between three cities of the Valencian region (Spain)","authors":"Á. Briz‐Redón, J. Mateu, F. Montes","doi":"10.1111/stan.12254","DOIUrl":"https://doi.org/10.1111/stan.12254","url":null,"abstract":"The behavior and spatial distribution of crime events can be explained through the characterization of an area in terms of its demography, socioeconomy, and built environment. In particular, recent studies on the incidence of crime in a city have focused on the identification of features of the built environment (specific places or facilities) that may increase crime risk within a certain radius. However, it is hard to identify environmental characteristics that consistently explain crime occurrence across cities and crime types. This article focuses on the assessment of the effect that certain types of places have on the incidence of property crime, robbery, and vandalism in three cities of the Valencian region (Spain): Alicante, Castellon, and Valencia. A nonlinear effects model is used to identify such places and to construct a risk map over the three cities considering the three crime types under research. The results obtained suggest that there are remarkable differences across cities and crime types in terms of the types of places associated with crime outcomes. The identification of high‐risk areas allows verification that crime is highly concentrated, and also that there is a high level of spatial overlap between the high‐risk areas corresponding to different crime types.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"46 1","pages":"97 - 120"},"PeriodicalIF":1.5,"publicationDate":"2021-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81648311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust prediction of domain compositions from uncertain data using isometric logratio transformations in a penalized multivariate Fay–Herriot model","authors":"J. Krause, J. P. Burgard, D. Morales","doi":"10.1111/stan.12253","DOIUrl":"https://doi.org/10.1111/stan.12253","url":null,"abstract":"Assessing regional population compositions is an important task in many research fields. Small area estimation with generalized linear mixed models marks a powerful tool for this purpose. However, the method has limitations in practice. When the data are subject to measurement errors, small area models produce inefficient or biased results since they cannot account for data uncertainty. This is particularly problematic for composition prediction, since generalized linear mixed models often rely on approximate likelihood inference. Obtained predictions are not reliable. We propose a robust multivariate Fay–Herriot model to solve these issues. It combines compositional data analysis with robust optimization theory. The nonlinear estimation of compositions is restated as a linear problem through isometric logratio transformations. Robust model parameter estimation is performed via penalized maximum likelihood. A robust best predictor is derived. Simulations are conducted to demonstrate the effectiveness of the approach. An application to alcohol consumption in Germany is provided.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"47 1","pages":"65 - 96"},"PeriodicalIF":1.5,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81420779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information anchored reference‐based sensitivity analysis for truncated normal data with application to survival analysis","authors":"A. Atkinson, S. Cro, J. Carpenter, M. Kenward","doi":"10.1111/stan.12250","DOIUrl":"https://doi.org/10.1111/stan.12250","url":null,"abstract":"The primary analysis of time‐to‐event data typically makes the censoring at random assumption, that is, that—conditional on covariates in the model—the distribution of event times is the same, whether they are observed or unobserved. In such cases, we need to explore the robustness of inference to more pragmatic assumptions about patients post‐censoring in sensitivity analyses. Reference‐based multiple imputation, which avoids analysts explicitly specifying the parameters of the unobserved data distribution, has proved attractive to researchers. Building on results for longitudinal continuous data, we show that inference using a Tobit regression imputation model for reference‐based sensitivity analysis with right censored log normal data is information anchored, meaning the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. We illustrate our theoretical results using simulation and a clinical trial case study.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":"62 1","pages":"500 - 523"},"PeriodicalIF":1.5,"publicationDate":"2021-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85623926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}