Robust and Efficient Regularized Boosting Using Total Bregman Divergence.

Meizhu Liu, Baba C Vemuri
{"title":"基于全Bregman散度的鲁棒高效正则化推进。","authors":"Meizhu Liu, Baba C Vemuri","doi":"10.1109/CVPR.2011.5995686","DOIUrl":null,"url":null,"abstract":"<p><p>Boosting is a well known machine learning technique used to improve the performance of weak learners and has been successfully applied to computer vision, medical image analysis, computational biology and other fields. A critical step in boosting algorithms involves update of the data sample distribution, however, most existing boosting algorithms use updating mechanisms that lead to overfitting and instabilities during evolution of the distribution which in turn results in classification inaccuracies. Regularized boosting has been proposed in literature as a means to overcome these difficulties. In this paper, we propose a novel total Bregman divergence (tBD) regularized LPBoost, termed tBRLPBoost. tBD is a recently proposed divergence in literature, which is statistically robust and we prove that tBRLPBoost requires a constant number of iterations to learn a strong classifier and hence is computationally more efficient compared to other regularized Boosting algorithms. Also, unlike other boosting methods that are only effective on a handful of datasets, tBRLPBoost works well on a variety of datasets. We present results of testing our algorithm on many public domain databases and comparisons to several other state-of-the-art methods. Numerical results show that the proposed algorithm has much improved performance in efficiency and accuracy over other methods.</p>","PeriodicalId":74560,"journal":{"name":"Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition","volume":"2011 ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2011-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/CVPR.2011.5995686","citationCount":"9","resultStr":"{\"title\":\"Robust and Efficient Regularized Boosting Using Total Bregman Divergence.\",\"authors\":\"Meizhu Liu, Baba C Vemuri\",\"doi\":\"10.1109/CVPR.2011.5995686\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Boosting is a well known machine learning technique used to improve the performance of weak learners and has been successfully applied to computer vision, medical image analysis, computational biology and other fields. A critical step in boosting algorithms involves update of the data sample distribution, however, most existing boosting algorithms use updating mechanisms that lead to overfitting and instabilities during evolution of the distribution which in turn results in classification inaccuracies. Regularized boosting has been proposed in literature as a means to overcome these difficulties. In this paper, we propose a novel total Bregman divergence (tBD) regularized LPBoost, termed tBRLPBoost. tBD is a recently proposed divergence in literature, which is statistically robust and we prove that tBRLPBoost requires a constant number of iterations to learn a strong classifier and hence is computationally more efficient compared to other regularized Boosting algorithms. Also, unlike other boosting methods that are only effective on a handful of datasets, tBRLPBoost works well on a variety of datasets. We present results of testing our algorithm on many public domain databases and comparisons to several other state-of-the-art methods. 
Numerical results show that the proposed algorithm has much improved performance in efficiency and accuracy over other methods.</p>\",\"PeriodicalId\":74560,\"journal\":{\"name\":\"Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition\",\"volume\":\"2011 \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-12-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1109/CVPR.2011.5995686\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CVPR.2011.5995686\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.2011.5995686","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 9

Abstract


Boosting is a well-known machine learning technique used to improve the performance of weak learners, and it has been successfully applied to computer vision, medical image analysis, computational biology, and other fields. A critical step in boosting algorithms is the update of the data sample distribution; however, most existing boosting algorithms use updating mechanisms that lead to overfitting and to instabilities during the evolution of the distribution, which in turn result in classification inaccuracies. Regularized boosting has been proposed in the literature as a means to overcome these difficulties. In this paper, we propose a novel total Bregman divergence (tBD) regularized LPBoost, termed tBRLPBoost. tBD is a divergence recently proposed in the literature that is statistically robust, and we prove that tBRLPBoost requires only a constant number of iterations to learn a strong classifier, making it computationally more efficient than other regularized boosting algorithms. Moreover, unlike boosting methods that are effective only on a handful of datasets, tBRLPBoost works well on a wide variety of datasets. We present results of testing our algorithm on many public-domain databases, along with comparisons to several other state-of-the-art methods. Numerical results show that the proposed algorithm markedly outperforms other methods in both efficiency and accuracy.
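For readers unfamiliar with tBD, a brief sketch may help. In the authors' related work on total Bregman divergence, the divergence between x and y under a convex generator f is the ordinary Bregman divergence d_f(x, y) = f(x) - f(y) - <x - y, grad f(y)> scaled by 1/sqrt(1 + ||grad f(y)||^2); intuitively, it measures the orthogonal rather than vertical distance to the tangent at y, which is the source of the robustness cited above. The following minimal Python sketch illustrates that assumed definition; it is not the paper's implementation, and the function names are ours.

```python
import numpy as np

def bregman(x, y, f, grad_f):
    """Ordinary Bregman divergence d_f(x, y) = f(x) - f(y) - <x - y, grad_f(y)>."""
    return f(x) - f(y) - np.dot(x - y, grad_f(y))

def total_bregman(x, y, f, grad_f):
    """Total Bregman divergence: d_f(x, y) scaled by 1 / sqrt(1 + ||grad_f(y)||^2).

    The denominator damps contributions from points where the gradient is
    large, which is what gives tBD its statistical robustness to outliers.
    """
    g = grad_f(y)
    return bregman(x, y, f, grad_f) / np.sqrt(1.0 + np.dot(g, g))

# Example generator: f(x) = ||x||^2, with gradient grad_f(x) = 2x.
# For this choice the ordinary Bregman divergence reduces to ||x - y||^2.
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(bregman(x, y, f, grad_f))        # 13.0  (= ||x - y||^2)
print(total_bregman(x, y, f, grad_f))  # 13.0 / sqrt(41) ~= 2.03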
