Prior Dependence in L1-regularized Bayesian Regression

Chris Hans
DOI: 10.11159/icsta22.007
Published in: Proceedings of the 4th International Conference on Statistics: Theory and Applications
Publication date: 2022-08-01 (Journal Article)
Citation count: 0

Abstract

The regularization of regression coefficients has become a central component of research in the statistical sciences due to its importance in applied data analysis in many other fields of science. From a Bayesian perspective, regularization is imposed naturally via prior distributions that probabilistically penalize large values of the coefficients. Research into prior distributions with connections to L1-norm penalization (e.g., “Bayesian lasso” and the “Bayesian elastic net”) has generated important insights about the nature of Bayesian penalized regression in practice. Though widely used, many such priors are restricted by the assumption that the regression coefficients are a priori independent. While independence may be reasonable in some data-analytic settings, having the ability to incorporate dependence in these prior distributions would allow for greater modeling flexibility. I describe a general class of “orthant normal” priors for regression coefficients that allows for prior dependence between regression coefficients. An interesting special case is an L1-regularized version of Zellner’s g prior. Though simulation-based posterior inference via Markov chain Monte Carlo methods is made difficult by an intractable function in the posterior density, I discuss computationally efficient methods for estimating this function that allow for full posterior inference about all model parameters.
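The connection between L1-norm penalization and prior distributions mentioned above (the "Bayesian lasso") can be illustrated with a short sketch: under a Gaussian likelihood with independent Laplace priors on the coefficients, the negative log posterior equals the lasso objective up to an additive constant, so the posterior mode is the lasso estimate. This is a minimal illustration of that standard equivalence, not the paper's orthant-normal construction; the simulated data, variable names, and the choice of rate parameter `lam` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.5])
sigma = 1.0  # assumed known here for simplicity
y = X @ beta_true + rng.normal(scale=sigma, size=n)

lam = 1.0  # Laplace rate parameter; plays the role of the lasso penalty

def neg_log_posterior(beta):
    # Gaussian likelihood plus independent Laplace(0, 1/lam) priors,
    # dropping terms that do not depend on beta
    loglik = -0.5 / sigma**2 * np.sum((y - X @ beta) ** 2)
    logprior = -lam * np.sum(np.abs(beta))
    return -(loglik + logprior)

def lasso_objective(beta):
    # Classical L1-penalized least-squares criterion
    return 0.5 / sigma**2 * np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

# The two criteria differ only by a beta-free constant (here exactly zero),
# so they share the same minimizer: the posterior mode is the lasso fit.
b1, b2 = rng.normal(size=p), rng.normal(size=p)
diff1 = neg_log_posterior(b1) - lasso_objective(b1)
diff2 = neg_log_posterior(b2) - lasso_objective(b2)
assert np.isclose(diff1, diff2)
```

Priors of this form are independent across coefficients, which is exactly the restriction the abstract highlights; the orthant normal class generalizes this setup to allow prior dependence between coefficients.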