AdaBoost.SDM: Similarity and dissimilarity-based manifold regularized adaptive boosting algorithm

IF 3.3 | CAS Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Azamat Mukhamediya, Amin Zollanvari
{"title":"AdaBoost.SDM: Similarity and dissimilarity-based manifold regularized adaptive boosting algorithm","authors":"Azamat Mukhamediya,&nbsp;Amin Zollanvari","doi":"10.1016/j.patrec.2025.05.016","DOIUrl":null,"url":null,"abstract":"<div><div>AdaBoost is a successful ensemble learning algorithm that generates a sequence of base learners, where each base learner is encouraged to focus more on those data points that are misclassified by the previous learner. That being said, AdaBoost, in its original form, lacks any mechanism to explicitly leverage the underlying geometric structure of data or manifold. Recent studies have shown that a training process that penalizes model outputs that do not align with the data manifold can lead to better generalization. In this paper, we aim to define a convex objective function for training AdaBoost that enforces a smooth variation of the model predictions over the data manifold. In this regard, we adopt a mixed-graph Laplacian that in contrast with the conventional Laplacian regularization can handle both label similarity and dissimilarity knowledge between data points. Compared with the original form of AdaBoost, the results demonstrate the effectiveness of the proposed similarity and dissimilarity-based manifold regularized AdaBoost (AdaBoost.SDM) in exploiting the data manifold and, at the same time, encoding the label similarity and dissimilarity to improve the classification performance. Our experimental results show that AdaBoost.SDM is highly competitive with state-of-the-art manifold regularized algorithms, including LapRLS and LapSVM.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"196 ","pages":"Pages 66-71"},"PeriodicalIF":3.3000,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525002090","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

AdaBoost is a successful ensemble learning algorithm that generates a sequence of base learners, each encouraged to focus more on the data points misclassified by the previous learner. However, AdaBoost in its original form lacks any mechanism to explicitly leverage the underlying geometric structure of the data, i.e., the data manifold. Recent studies have shown that a training process that penalizes model outputs that do not align with the data manifold can lead to better generalization. In this paper, we define a convex objective function for training AdaBoost that enforces a smooth variation of the model predictions over the data manifold. To this end, we adopt a mixed-graph Laplacian that, in contrast with conventional Laplacian regularization, can handle both label similarity and dissimilarity knowledge between data points. Compared with the original form of AdaBoost, the results demonstrate the effectiveness of the proposed similarity and dissimilarity-based manifold regularized AdaBoost (AdaBoost.SDM) in exploiting the data manifold while encoding label similarity and dissimilarity to improve classification performance. Our experimental results show that AdaBoost.SDM is highly competitive with state-of-the-art manifold regularized algorithms, including LapRLS and LapSVM.
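To make the regularization idea concrete, the minimal Python sketch below shows one way a mixed-graph Laplacian could be assembled from an unsupervised k-NN similarity graph plus label-based similarity and dissimilarity edges, and how the resulting smoothness penalty f^T L f on the ensemble's real-valued outputs would be evaluated. The function names, the L = L_sim - L_dis convention, and the unit edge weights are illustrative assumptions, not the formulation from the paper (see the DOI above for the exact objective).

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def mixed_graph_laplacian(X, y_partial, k=5):
    """Illustrative mixed-graph Laplacian: an unsupervised k-NN similarity
    graph augmented with label-driven similarity/dissimilarity edges.
    Convention assumed here: L = L_sim - L_dis (not necessarily the paper's)."""
    n = X.shape[0]
    # Unsupervised similarity edges from the k-nearest-neighbour graph.
    W_sim = kneighbors_graph(X, k, mode="connectivity", include_self=False).toarray()
    W_sim = np.maximum(W_sim, W_sim.T)  # symmetrize
    W_dis = np.zeros((n, n))
    # Supervised edges between labeled points (y in {-1, +1}; np.nan = unlabeled).
    labeled = np.flatnonzero(~np.isnan(y_partial))
    for i in labeled:
        for j in labeled:
            if i == j:
                continue
            if y_partial[i] == y_partial[j]:
                W_sim[i, j] = 1.0   # same label: similarity edge
            else:
                W_dis[i, j] = 1.0   # different labels: dissimilarity edge
    L_sim = np.diag(W_sim.sum(axis=1)) - W_sim
    L_dis = np.diag(W_dis.sum(axis=1)) - W_dis
    return L_sim - L_dis

def manifold_penalty(f, L):
    """Smoothness term f^T L f on the ensemble's real-valued scores f."""
    return float(f @ L @ f)

# Toy usage: two labeled points per class, the rest unlabeled.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.full(20, np.nan)
y[:2], y[2:4] = 1.0, -1.0
L = mixed_graph_laplacian(X, y, k=3)
scores = rng.normal(size=20)          # stand-in for ensemble outputs F(x_i)
print("penalty:", manifold_penalty(scores, L))
```

In AdaBoost.SDM this kind of penalty is added to a convex boosting objective so that predictions vary smoothly over the manifold while respecting known dissimilarities; the sketch above only computes the penalty itself.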
Source journal: Pattern Recognition Letters (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 12.40
Self-citation rate: 5.90%
Articles published: 287
Review time: 9.1 months
Aims and scope: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.