Twofold structure of duality in Bayesian model averaging

Toshio Ohnishi, T. Yanagimoto
{"title":"Twofold structure of duality in Bayesian model averaging","authors":"Toshio Ohnishi, T. Yanagimoto","doi":"10.14490/JJSS.43.29","DOIUrl":null,"url":null,"abstract":"Two Bayesian prediction problems in the context of model averaging are investigated by adopting dual Kullback-Leibler divergence losses, the e-divergence and the m-divergence losses. We show that the optimal predictors under the two losses are shown to satisfy interesting saddlepoint-type equalities. Actually, the optimal predictor under the e-divergence loss balances the log-likelihood ratio and the loss, while the optimal predictor under the m-divergence loss balances the Shannon entropy difference and the loss. These equalities also hold for the predictors maximizing the log-likelihood and the Shannon entropy respectively under the e-divergence loss and the m-divergence loss, showing that enlarging the log-likelihood and the Shannon entropy moderately will lead to the optimal predictors. In each divergence loss case we derive a robust predictor in the sense that its posterior risk is constant by minimizing a certain convex function. The Legendre transformation induced by this convex function implies that there is inherent duality in each Bayesian prediction problem.","PeriodicalId":326924,"journal":{"name":"Journal of the Japan Statistical Society. Japanese issue","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the Japan Statistical Society. Japanese issue","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14490/JJSS.43.29","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Two Bayesian prediction problems in the context of model averaging are investigated by adopting dual Kullback-Leibler divergence losses, the e-divergence and the m-divergence losses. The optimal predictors under the two losses are shown to satisfy interesting saddlepoint-type equalities: the optimal predictor under the e-divergence loss balances the log-likelihood ratio against the loss, while the optimal predictor under the m-divergence loss balances the Shannon entropy difference against the loss. These equalities also hold for the predictors maximizing the log-likelihood and the Shannon entropy under the e-divergence and m-divergence losses, respectively, showing that moderately increasing the log-likelihood or the Shannon entropy leads to the optimal predictor. For each divergence loss we derive a robust predictor, one whose posterior risk is constant, by minimizing a certain convex function. The Legendre transformation induced by this convex function implies that an inherent duality underlies each Bayesian prediction problem.
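
For orientation, a minimal sketch of the dual losses, using the conventional information-geometric definitions rather than the paper's own notation (which may differ): write $f(\cdot\mid\theta)$ for the sampling density and $g$ for a predictive density, with $D(p\,\|\,q)=\int p\log(p/q)\,dx$ the Kullback-Leibler divergence. The m- and e-divergence losses are then the two orderings of $D$, and the duality the abstract refers to is the standard Legendre transform of a convex function $\varphi$:

\[
  L_m(\theta, g) = D\bigl(f(\cdot\mid\theta)\,\|\,g\bigr), \qquad
  L_e(\theta, g) = D\bigl(g\,\|\,f(\cdot\mid\theta)\bigr),
\]
\[
  \varphi^*(y) = \sup_{x}\bigl\{\langle x, y\rangle - \varphi(x)\bigr\}.
\]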