Learned Harmonic Mean Estimation of the Marginal Likelihood with Normalizing Flows

Alicja Polanska, Matthew Alexander Price, A. Mancini, J. McEwen
DOI: 10.3390/psf2023009010
Published in: The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering
Publication date: 2023-06-30
Citations: 0

Abstract

Computing the marginal likelihood (also called the Bayesian model evidence) is an important task in Bayesian model selection, providing a principled quantitative way to compare models. The learned harmonic mean estimator solves the exploding-variance problem of the original harmonic mean estimator of the marginal likelihood. The learned harmonic mean estimator learns an importance sampling target distribution that approximates the optimal distribution. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding-variance problem. In previous work, a bespoke optimization problem was introduced when training models in order to ensure this property is satisfied. In the current article we introduce the use of normalizing flows to represent the importance sampling target distribution. A flow-based model is trained on samples from the posterior by maximum likelihood estimation. Then, the probability density of the flow is concentrated by lowering the variance of the base distribution, i.e. by lowering its "temperature", ensuring its probability mass is contained within the posterior. This approach avoids the need for a bespoke optimization problem and careful fine-tuning of parameters, resulting in a more robust method. Moreover, the use of normalizing flows has the potential to scale to high-dimensional settings. We present preliminary experiments demonstrating the effectiveness of flows for the learned harmonic mean estimator. The harmonic code implementing the learned harmonic mean, which is publicly available, has been updated to support normalizing flows.
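As a concrete illustration of the estimator discussed in the abstract (a sketch, not code from the harmonic package): given samples from the posterior, the harmonic mean identity estimates the reciprocal evidence as the posterior average of phi(theta) / (likelihood(theta) * prior(theta)), where phi is the importance sampling target. The toy model below is a conjugate Gaussian, where the evidence is known in closed form, and the exact posterior stands in for the learned flow (the optimal target); all names and the model setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One datum x with Gaussian likelihood N(x | theta, 1) and prior N(theta | 0, 1).
x = 0.5
def log_like(theta): return -0.5 * ((x - theta) ** 2 + np.log(2 * np.pi))
def log_prior(theta): return -0.5 * (theta ** 2 + np.log(2 * np.pi))

# For this conjugate model the posterior is N(x/2, 1/2) and the evidence is
# z = N(x | 0, 2), known in closed form.
z_true = np.exp(-x ** 2 / 4) / np.sqrt(4 * np.pi)

# Importance target phi: here the exact posterior (the optimal choice).
# In the learned estimator, phi would instead be a concentrated normalizing flow.
def log_phi(theta):
    return -0.5 * ((theta - x / 2) ** 2 / 0.5 + np.log(np.pi))

# Harmonic mean identity: E_post[ phi / (L * pi) ] = 1/z.
theta = rng.normal(x / 2, np.sqrt(0.5), size=10_000)  # posterior samples
rho_hat = np.mean(np.exp(log_phi(theta) - log_like(theta) - log_prior(theta)))
z_hat = 1.0 / rho_hat
```

With the optimal target every summand equals 1/z exactly, so the estimate is exact here; with an imperfect phi the estimator remains well behaved only when phi's mass lies within the posterior, which is precisely the property the paper enforces by tempering the flow.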
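The temperature-lowering step described above can be sketched as follows (a hypothetical minimal example, not the paper's implementation): for a flow x = f(z) with Gaussian base distribution, reducing the base variance by a factor T < 1 concentrates the flow's density around its modes without retraining. A one-dimensional affine "flow" makes the effect easy to see.

```python
import numpy as np

def flow_log_prob(x, a=2.0, b=1.0, T=1.0):
    """Log density of x = a*z + b with base z ~ N(0, T).

    Change of variables: log p(x) = log N(z; 0, T) - log|a|, z = (x - b)/a.
    Lowering the base "temperature" T below 1 shrinks the base variance,
    pulling the flow's probability mass toward its mode.
    """
    z = (x - b) / a
    log_base = -0.5 * (z ** 2 / T + np.log(2 * np.pi * T))
    return log_base - np.log(abs(a))

# At the mode (x = b = 1) the tempered density is higher than at T = 1,
# while in the tails it is lower: mass has been pulled inward.
mode_gain = flow_log_prob(1.0, T=0.5) - flow_log_prob(1.0, T=1.0)
tail_loss = flow_log_prob(8.0, T=0.5) - flow_log_prob(8.0, T=1.0)
```

This inward concentration is what keeps the tempered flow's mass contained within the posterior, avoiding the exploding-variance problem of the original harmonic mean estimator.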