{"title":"Snowballing Nested Sampling","authors":"Johannes Buchner","doi":"10.3390/psf2023009017","DOIUrl":"https://doi.org/10.3390/psf2023009017","url":null,"abstract":"A new way to run nested sampling, combined with realistic MCMC proposals to generate new live points, is presented. Nested sampling is run with a fixed number of MCMC steps. Subsequently, snowballing nested sampling extends the run to more and more live points. This stabilizes MCMC proposals over time, and leads to pleasant properties, including that the number of live points and number of MCMC steps do not have to be calibrated, that the evidence and posterior approximation improves as more compute is added and can be diagnosed with convergence diagnostics from the MCMC literature. Snowballing nested sampling converges to a ``perfect'' nested sampling run with infinite number of MCMC steps.","PeriodicalId":506244,"journal":{"name":"The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering","volume":"2015 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139351046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proximal Nested Sampling with Data-Driven Priors for Physical Scientists
J. McEwen, T. Liaudat, Matthew Alexander Price, Xiaohao Cai, M. Pereyra
DOI: 10.3390/psf2023009013 (https://doi.org/10.3390/psf2023009013)
The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. Published 2023-06-30.
Abstract: Proximal nested sampling was introduced recently to open up Bayesian model selection for high-dimensional problems such as computational imaging. The framework is suitable for models with a log-convex likelihood, which are ubiquitous in the imaging sciences. The purpose of this article is twofold. First, we review proximal nested sampling in a pedagogical manner in an attempt to elucidate the framework for physical scientists. Second, we show how proximal nested sampling can be extended in an empirical Bayes setting to support data-driven priors, such as deep neural networks learned from training data.
Learned Harmonic Mean Estimation of the Marginal Likelihood with Normalizing Flows
Alicja Polanska, Matthew Alexander Price, A. Mancini, J. McEwen
DOI: 10.3390/psf2023009010 (https://doi.org/10.3390/psf2023009010)
The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. Published 2023-06-30.
Abstract: Computing the marginal likelihood (also called the Bayesian model evidence) is an important task in Bayesian model selection, providing a principled quantitative way to compare models. The learned harmonic mean estimator solves the exploding-variance problem of the original harmonic mean estimator by learning an importance sampling target distribution that approximates the optimal distribution. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding-variance problem. In previous work, a bespoke optimization problem was introduced when training models in order to ensure this property is satisfied. In the current article we introduce the use of normalizing flows to represent the importance sampling target distribution. A flow-based model is trained on samples from the posterior by maximum likelihood estimation. Then the probability density of the flow is concentrated by lowering the variance of the base distribution, i.e. by lowering its "temperature", ensuring that its probability mass is contained within the posterior. This approach avoids the need for a bespoke optimization problem and careful fine-tuning of parameters, resulting in a more robust method. Moreover, the use of normalizing flows has the potential to scale to high-dimensional settings. We present preliminary experiments demonstrating the effectiveness of flows for the learned harmonic mean estimator. The harmonic code implementing the learned harmonic mean estimator, which is publicly available, has been updated to support normalizing flows.