{"title":"Twofold structure of duality in Bayesian model averaging","authors":"Toshio Ohnishi, T. Yanagimoto","doi":"10.14490/JJSS.43.29","DOIUrl":null,"url":null,"abstract":"Two Bayesian prediction problems in the context of model averaging are investigated by adopting dual Kullback-Leibler divergence losses, the e-divergence and the m-divergence losses. We show that the optimal predictors under the two losses are shown to satisfy interesting saddlepoint-type equalities. Actually, the optimal predictor under the e-divergence loss balances the log-likelihood ratio and the loss, while the optimal predictor under the m-divergence loss balances the Shannon entropy difference and the loss. These equalities also hold for the predictors maximizing the log-likelihood and the Shannon entropy respectively under the e-divergence loss and the m-divergence loss, showing that enlarging the log-likelihood and the Shannon entropy moderately will lead to the optimal predictors. In each divergence loss case we derive a robust predictor in the sense that its posterior risk is constant by minimizing a certain convex function. The Legendre transformation induced by this convex function implies that there is inherent duality in each Bayesian prediction problem.","PeriodicalId":326924,"journal":{"name":"Journal of the Japan Statistical Society. Japanese issue","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the Japan Statistical Society. Japanese issue","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14490/JJSS.43.29","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Two Bayesian prediction problems in the context of model averaging are investigated by adopting dual Kullback-Leibler divergence losses: the e-divergence loss and the m-divergence loss. We show that the optimal predictors under the two losses satisfy interesting saddlepoint-type equalities. Specifically, the optimal predictor under the e-divergence loss balances the log-likelihood ratio against the loss, while the optimal predictor under the m-divergence loss balances the Shannon entropy difference against the loss. These equalities also hold for the predictors maximizing the log-likelihood and the Shannon entropy under the e-divergence and m-divergence losses, respectively, showing that moderately enlarging the log-likelihood and the Shannon entropy leads to the optimal predictors. In each divergence loss case we derive a robust predictor, in the sense that its posterior risk is constant, by minimizing a certain convex function. The Legendre transformation induced by this convex function reveals an inherent duality in each Bayesian prediction problem.
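The duality discussed above rests on the asymmetry of the Kullback-Leibler divergence: the two dual losses differ only in the order of the arguments. The following sketch (not taken from the paper; the distributions `p` and `q` and the e/m labelling convention are illustrative assumptions, and the paper's precise definitions should be consulted) computes the divergence in both directions for two discrete distributions to show that they generally disagree.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions.

    Terms with p[i] == 0 contribute zero by the usual convention.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two hypothetical predictive distributions over the same 3-point support.
p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]

# The dual losses differ only in argument order (labels follow one
# common information-geometric convention and are illustrative):
m_loss = kl_divergence(p, q)  # one direction of the KL divergence
e_loss = kl_divergence(q, p)  # the dual (reversed) direction

print(m_loss, e_loss)  # the two directions generally differ
```

Because the two directions are distinct losses, minimizing each over a class of predictors yields a distinct optimal predictor, which is the setting in which the paper's saddlepoint-type equalities arise.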