Incremental Learning for Transductive SVMs

Boliang Sun, Yan Li, Xingchen Hu, Guangquan Cheng, Chao Chen, Zhong Liu
{"title":"Incremental Learning for Transductive SVMs","authors":"Boliang Sun, Yan Li, Xingchen Hu, Guangquan Cheng, Chao Chen, Zhong Liu","doi":"10.1109/ISKE47853.2019.9170294","DOIUrl":null,"url":null,"abstract":"A new incremental transductive SVMs framework dependeding on duality is put forward for constrained optimization issues. Based on the weak duality theorem, the procedure of incremental learning is simplified for the task dual function increment. Based on this, two incremental learning methods are developed by updating limited dual parameters: (1) aggressive dual ascending; (2) local concave-convex procedure (LCCCP). Experiments demonstrated that our methods achieve comparable risk and accuracy to batch TSVMs, with less time consumption and memory requirment. Besides, our incremental learning methods can cope with concept drift and maintain smaller error rate than batch learning methods. The design and analysis of incremental semi-supervised learning methods is fully discussed in this research.","PeriodicalId":399084,"journal":{"name":"2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISKE47853.2019.9170294","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

A new incremental learning framework for transductive SVMs (TSVMs), based on duality, is proposed for constrained optimization problems. Using the weak duality theorem, the incremental learning procedure is reduced to the task of increasing the dual function. On this basis, two incremental learning methods are developed that update only a limited number of dual parameters: (1) aggressive dual ascent; (2) the local concave-convex procedure (LCCCP). Experiments demonstrate that our methods achieve risk and accuracy comparable to batch TSVMs, with lower time and memory consumption. Moreover, our incremental methods can cope with concept drift and maintain a smaller error rate than batch learning methods. The design and analysis of incremental semi-supervised learning methods are discussed in full.
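
The abstract does not spell out the update rules behind aggressive dual ascent or LCCCP, but the core idea it relies on, increasing the dual objective by adjusting only a small set of dual variables as new examples arrive, can be illustrated. Below is a minimal Python sketch of incremental dual coordinate ascent for a kernel SVM; the class name, the RBF kernel, and the window of five most recent dual variables are assumptions for illustration, not the paper's actual procedure (which additionally handles the unlabeled data of the transductive setting).

```python
import numpy as np

# Hypothetical sketch: incremental dual coordinate ascent for a kernel SVM.
# Not the paper's exact method; a TSVM additionally optimizes over the
# labels of unlabeled points, which is what a procedure like LCCCP would
# address.

def rbf_kernel(x, Y, gamma=0.5):
    """Kernel values k(x, y_j) for every row y_j of Y."""
    return np.exp(-gamma * np.sum((Y - x) ** 2, axis=1))

class IncrementalDualSVM:
    def __init__(self, C=1.0, gamma=0.5):
        self.C, self.gamma = C, gamma
        self.X, self.y, self.alpha = [], [], []

    def add_example(self, x, label, sweeps=3, window=5):
        """Insert one example, then ascend the dual over a few coordinates."""
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(label))
        self.alpha.append(0.0)
        X, y, a = np.array(self.X), np.array(self.y), np.array(self.alpha)
        n = len(y)
        # "limited dual parameters": touch only the newest few alphas
        for _ in range(sweeps):
            for i in range(max(0, n - window), n):
                k_i = rbf_kernel(X[i], X, self.gamma)
                grad = 1.0 - y[i] * np.dot(a * y, k_i)  # dD/d(alpha_i)
                # clipped Newton step on a concave quadratic: never lowers D
                a[i] = np.clip(a[i] + grad / k_i[i], 0.0, self.C)
        self.alpha = list(a)

    def decision(self, x):
        X, y, a = np.array(self.X), np.array(self.y), np.array(self.alpha)
        k = rbf_kernel(np.asarray(x, dtype=float), X, self.gamma)
        return float(np.dot(a * y, k))

# Usage: stream examples one at a time, then query the decision function.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    svm = IncrementalDualSVM()
    for _ in range(50):
        x = rng.normal(size=2)
        svm.add_example(x, 1.0 if x.sum() > 0 else -1.0)
    print(svm.decision([1.0, 1.0]))  # positive for the positive class
```

Each clipped coordinate step exactly maximizes the concave dual over one variable, so the dual value is non-decreasing; by the weak duality theorem this pushes up a lower bound on the primal optimum, which is the property the framework in the abstract builds on.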