{"title":"Bayesian Estimation of Topological Features of Persistence Diagrams","authors":"Asael Fabian Mart'inez","doi":"10.1214/22-ba1341","DOIUrl":"https://doi.org/10.1214/22-ba1341","url":null,"abstract":"Persistent homology is a common technique in topological data analysis providing geometrical and topological information about the sample space. All this information, known as topological features, is summarized in persistence diagrams, and the main interest is in identifying the most persisting ones since they correspond to the Betti number values. Given the randomness inherent in the sampling process, and the complex structure of the space where persistence diagrams take values, estimation of Betti numbers is not straightforward. The approach followed in this work makes use of features’ lifetimes and provides a full Bayesian clustering model, based on random partitions, in order to estimate Betti numbers. A simulation study is also presented. An extensive simulation study was done using different synthetic cloud point data in order to understand the performance of the proposed methodology. Based on the scenarios described in Section 4, the sample size will be set to 𝑛 = 300 , 600 , and 900 , and the separation of circles ranges from 1 to 5 for the cases 𝑟 = 2, 3 . For each cloud point data, 𝑠 = 100 replications were simulated, and each estimator for 𝛽 0 was computed.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42387151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantification of Empirical Determinacy: The Impact of Likelihood Weighting on Posterior Location and Spread in Bayesian Meta-Analysis Estimated with JAGS and INLA","authors":"Hunanyan Sona, Rue Håvard, Plummer Martyn, Roos Małgorzata","doi":"10.1214/22-ba1325","DOIUrl":"https://doi.org/10.1214/22-ba1325","url":null,"abstract":"The popular Bayesian meta-analysis expressed by Bayesian normal-normal hierarchical model (NNHM) synthesizes knowledge from several studies and is highly relevant in practice. Moreover, NNHM is the simplest Bayesian hierarchical model (BHM), which illustrates problems typical in more complex BHMs. Until now, it has been unclear to what extent the data determines the marginal posterior distributions of the parameters in NNHM. To address this issue we computed the second derivative of the Bhattacharyya coefficient with respect to the weighted likelihood, defined the total empirical determinacy (TED), the proportion of the empirical determinacy of location to TED (pEDL), and the proportion of the empirical determinacy of spread to TED (pEDS). We implemented this method in the R package texttt{ed4bhm} and considered two case studies and one simulation study. We quantified TED, pEDL and pEDS under different modeling conditions such as model parametrization, the primary outcome, and the prior. This clarified to what extent the location and spread of the marginal posterior distributions of the parameters are determined by the data. 
Although these investigations focused on Bayesian NNHM, the method proposed is applicable more generally to complex BHMs.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49057422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Gaussian Processes for Calibration of Computer Models","authors":"Sébastien Marmin, M. Filippone","doi":"10.1214/21-ba1293","DOIUrl":"https://doi.org/10.1214/21-ba1293","url":null,"abstract":". Bayesian calibration of black-box computer models offers an estab-lished framework for quantification of uncertainty of model parameters and predictions. Traditional Bayesian calibration involves the emulation of the computer model and an additive model discrepancy term using Gaussian processes; inference is then carried out using Markov chain Monte Carlo. This calibration approach is limited by the poor scalability of Gaussian processes and by the need to specify a sensible covariance function to deal with the complexity of the computer model and the discrepancy. In this work, we propose a novel calibration framework, where these challenges are addressed by means of compositions of Gaussian processes into Deep Gaussian processes and scalable variational inference techniques. Thanks to this formulation, it is possible to obtain a flexible calibration approach, which is easy to implement in development environments featuring automatic dif-ferentiation and exploiting GPU-type hardware. We show how our proposal yields a powerful alternative to the state-of-the-art by means of experimental validations on various calibration problems. 
We conclude the paper by showing how we can carry out adaptive experimental design, and by discussing the identifiability properties of the proposed calibration model.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45174347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Informative Priors for the Consensus Ranking in the Bayesian Mallows Model","authors":"Marta Crispino, Isadora Antoniano-Villalobos","doi":"10.1214/22-ba1307","DOIUrl":"https://doi.org/10.1214/22-ba1307","url":null,"abstract":"The aim of this work is to study the problem of prior elicitation for the consensus ranking in the Mallows model with Spearman’s distance, a popular distance-based model for rankings or permutation data. Previous Bayesian inference for such a model has been limited to the use of the uniform prior over the space of permutations. We present a novel strategy to elicit informative prior beliefs on the location parameter of the model, discussing the interpretation of hyper-parameters and the implication of prior choices for the posterior analysis.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47603104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Image-on-Scalar Regression with a Spatial Global-Local Spike-and-Slab Prior","authors":"Zijian Zeng, Meng Li, M. Vannucci","doi":"10.1214/22-ba1352","DOIUrl":"https://doi.org/10.1214/22-ba1352","url":null,"abstract":"In this article, we propose a novel spatial global-local spike-and-slab selection prior for image-on-scalar regression. We consider a Bayesian hierarchical Gaussian process model for image smoothing, that uses a flexible Inverse-Wishart process prior to handle within-image dependency, and propose a general global-local spatial selection prior that extends a rich class of well-studied selection priors. Unlike existing constructions, we achieve simultaneous global (i.e, at covariate-level) and local (i.e., at pixel/voxel-level) selection by introducing `participation rate' parameters that measure the probability for the individual covariates to affect the observed images. This along with a hard-thresholding strategy leads to dependency between selections at the two levels, introduces extra sparsity at the local level, and allows the global selection to be informed by the local selection, all in a model-based manner. We design an efficient Gibbs sampler that allows inference for large image data. We show on simulated data that parameters are interpretable and lead to efficient selection. Finally, we demonstrate performance of the proposed model by using data from the Autism Brain Imaging Data Exchange (ABIDE) study. 
To the best of our knowledge, the proposed model construction is the first in the Bayesian literature to simultaneously achieve image smoothing, parameter estimation and a two-level variable selection for image-on-scalar regression.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":"1 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41986886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causal Inference Under Mis-Specification: Adjustment Based on the Propensity Score","authors":"D. Stephens, Widemberg S. Nobre, E. Moodie, A. M. Schmidt","doi":"10.1214/22-ba1322","DOIUrl":"https://doi.org/10.1214/22-ba1322","url":null,"abstract":"We study Bayesian approaches to causal inference via propensity score regression. Much of the Bayesian literature on propensity score methods have relied on approaches that cannot be viewed as fully Bayesian in the context of conventional `likelihood times prior' posterior inference; in addition, most methods rely on parametric and distributional assumptions, and presumed correct specification. We emphasize that causal inference is typically carried out in settings of mis-specification, and develop strategies for fully Bayesian inference that reflect this. We focus on methods based on decision-theoretic arguments, and show how inference based on loss-minimization can give valid and fully Bayesian inference. We propose a computational approach to inference based on the Bayesian bootstrap which has good Bayesian and frequentist properties.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47108110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Sparse Spiked Covariance Model with a Continuous Matrix Shrinkage Prior","authors":"Fangzheng Xie, J. Cape, C. Priebe, Yanxun Xu","doi":"10.1214/21-ba1292","DOIUrl":"https://doi.org/10.1214/21-ba1292","url":null,"abstract":". We propose a Bayesian methodology for estimating spiked covariance matrices with a jointly sparse structure in high dimensions. The spiked covariance matrix is reparameterized in terms of the latent factor model, where the loading matrix is equipped with a novel matrix spike-and-slab LASSO prior, which is a continuous shrinkage prior for modeling jointly sparse matrices. We establish the rate-optimal posterior contraction for the covariance matrix with respect to the spectral norm as well as that for the principal subspace with respect to the projection spectral norm loss. We also study the posterior contraction rate of the principal subspace with respect to the two-to-infinity norm loss, a novel loss function measuring the distance between subspaces that is able to capture entrywise eigenvector perturbations. We show that the posterior contraction rate with respect to the two-to-infinity norm loss is tighter than that with respect to the routinely used projection spectral norm loss under certain low-rank and bounded coherence conditions. In addition, a point estimator for the principal subspace is proposed with the rate-optimal risk bound with respect to the projection spectral norm loss. The numerical performance of the proposed methodology is assessed through synthetic examples and the analysis of a real-world face data example. 
62H25, 62C10; secondary 62H12.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46683181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Controlling the Flexibility of Non-Gaussian Processes Through Shrinkage Priors","authors":"Rafael Cabral, D. Bolin, H. Rue","doi":"10.1214/22-ba1342","DOIUrl":"https://doi.org/10.1214/22-ba1342","url":null,"abstract":"The normal inverse Gaussian (NIG) and generalized asymmetric Laplace (GAL) distributions can be seen as skewed and semi-heavy-tailed extensions of the Gaussian distribution. Models driven by these more flexible noise distributions are then re-garded as flexible extensions of simpler Gaussian models. Inferential procedures tend to overestimate the degree of non-Gaussianity in the data and therefore we propose controlling the flexibility of these non-Gaussian models by adding sensible priors in the inferential framework that contract the model towards Gaussianity. In our venture to derive sensible priors, we also propose a new intuitive parameterization of the non-Gaussian models and discuss how to implement them efficiently in Stan . The methods are derived for a generic class of non-Gaussian models that include spatial Mat´ern fields, autoregressive models for time series, and simultaneous autoregressive models for aerial data. 
The results are illustrated with a simulation study and geostatistics application, where priors that penalize model complexity were shown to lead to more robust estimation and give preference to the Gaussian model, while at the same time allowing for non-Gaussianity if there is sufficient evidence in the data.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44689926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sparse Bayesian Inference on Gamma-Distributed Observations Using Shape-Scale Inverse-Gamma Mixtures","authors":"Y. Hamura, T. Onizuka, Shintaro Hashimoto, S. Sugasawa","doi":"10.1214/22-ba1348","DOIUrl":"https://doi.org/10.1214/22-ba1348","url":null,"abstract":"In various applications, we deal with high-dimensional positive-valued data that often exhibits sparsity. This paper develops a new class of continuous global-local shrinkage priors tailored to analyzing gamma-distributed observations where most of the underlying means are concentrated around a certain value. Unlike existing shrinkage priors, our new prior is a shape-scale mixture of inverse-gamma distributions, which has a desirable interpretation of the form of posterior mean and admits flexible shrinkage. We show that the proposed prior has two desirable theoretical properties; KullbackLeibler super-efficiency under sparsity and robust shrinkage rules for large observations. We propose an efficient sampling algorithm for posterior inference. The performance of the proposed method is illustrated through simulation and two real data examples, the average length of hospital stay for COVID-19 in South Korea and adaptive variance estimation of gene expression data.","PeriodicalId":55398,"journal":{"name":"Bayesian Analysis","volume":" ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45684625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}