Fast Localized Twin SVM

Yanan Wang, Ying-jie Tian
{"title":"Fast Localized Twin SVM","authors":"Yanan Wang, Ying-jie Tian","doi":"10.1109/ICNC.2012.6234527","DOIUrl":null,"url":null,"abstract":"Twin Support Vector Machine (Twin SVM), which is a new binary classifier as an extension of SVMs, was first proposed in 2007 by Jayadeva. Wide attention has been attracted by academic circles for its less computation cost and better generalization ability, and it became a new research priorities gradually. A simple geometric interpretation of Twin SVM is that each hyperplane is closest to the points of its own class and as far as possible from the points of the other class. This method defines two nonparallel hyper-planes by solving two related SVM-type problems. Localized Twin SVM is a classification approach via local information which is based on Twin SVM, and has been proved by experiments having a better performance than conventional Twin SVM. However, the computational cost of the method is so high that it has little practical applications. In this paper we propose a method called Fast Localized Twin SVM, a classifier built so as to be suitable for large data sets, in which the number of Twin SVMs is decreased. In Fast Localized Twin SVM, we first use the training set to compute a set of Localized Twin SVMs, then assign to each local model all the points lying in the central neighborhood of the k training points. The query point depending on its nearest neighbor in the training set can be predicted. From empirical experiments we can show that our approach not only guarantees high generalization ability but also improves the computational cost greatly, especially for large scale data sets.","PeriodicalId":404981,"journal":{"name":"2012 8th International Conference on Natural Computation","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 8th International Conference on Natural Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNC.2012.6234527","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Twin Support Vector Machine (Twin SVM), a binary classifier that extends standard SVMs, was first proposed by Jayadeva in 2007. It has attracted wide attention in the research community for its lower computational cost and good generalization ability, and has gradually become a new research priority. A simple geometric interpretation of Twin SVM is that each hyperplane should be as close as possible to the points of its own class and as far as possible from the points of the other class. The method defines two nonparallel hyperplanes by solving two related SVM-type problems. Localized Twin SVM is a classification approach built on Twin SVM that exploits local information, and experiments have shown it to outperform conventional Twin SVM. However, its computational cost is so high that it has seen little practical application. In this paper we propose Fast Localized Twin SVM, a classifier designed for large data sets in which the number of Twin SVMs to be trained is reduced. In Fast Localized Twin SVM, we first use the training set to compute a set of Localized Twin SVMs, then assign to each local model all the points lying in the central neighborhood of the k training points. A query point is then predicted according to its nearest neighbor in the training set. Empirical experiments show that our approach not only maintains high generalization ability but also greatly reduces the computational cost, especially for large-scale data sets.
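For context, the "two related SVM-type problems" mentioned above are, in the standard linear Twin SVM formulation of Jayadeva et al. (the textbook form, not quoted from this paper, so treat the notation as an assumption), the pair of quadratic programs

$$
\min_{w_1,\,b_1,\,\xi}\ \tfrac{1}{2}\lVert A w_1 + e_1 b_1\rVert^2 + c_1\, e_2^{\top}\xi
\quad \text{s.t.}\quad -(B w_1 + e_2 b_1) + \xi \ge e_2,\ \ \xi \ge 0,
$$

$$
\min_{w_2,\,b_2,\,\eta}\ \tfrac{1}{2}\lVert B w_2 + e_2 b_2\rVert^2 + c_2\, e_1^{\top}\eta
\quad \text{s.t.}\quad (A w_2 + e_1 b_2) + \eta \ge e_1,\ \ \eta \ge 0,
$$

where $A$ and $B$ stack the training points of the two classes, $e_1, e_2$ are vectors of ones, and $c_1, c_2 > 0$ are penalty parameters; a new point is assigned to the class whose hyperplane $x^{\top} w_i + b_i = 0$ lies closer.

The following is a minimal sketch of how the training and prediction scheme described in the abstract could be organized, under the assumption that local models are anchored at a subset of training points ("centers"), each is trained on the k nearest training points of its center, and a query is routed to the model attached to its nearest training neighbor. All names (`fit_fast_localized`, `predict_fast_localized`, `train_twin_svm`) are hypothetical, and the per-neighborhood Twin SVM solver itself is abstracted behind a callable; this is an illustration of the overall scheme, not the authors' implementation.

```python
import numpy as np

def pairwise_sq_dists(A, B):
    # squared Euclidean distances between the rows of A and the rows of B
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)

def fit_fast_localized(X, y, centers_idx, k, train_twin_svm):
    # Hypothetical sketch: one local model per center, trained on the k
    # nearest training points of that center; every training point is then
    # attached ("assigned") to the model of its nearest center.
    # `train_twin_svm(Xs, ys)` stands in for a Twin SVM solver and must
    # return a callable predict(x) -> label.
    d = pairwise_sq_dists(X[centers_idx], X)          # (n_centers, n_train)
    neigh = np.argsort(d, axis=1)[:, :k]              # k-neighborhood of each center
    models = [train_twin_svm(X[idx], y[idx]) for idx in neigh]
    owner = np.argmin(d, axis=0)                      # model index owning each training point
    return models, owner

def predict_fast_localized(x, X, models, owner):
    # Predict a query with the local model attached to its nearest
    # neighbor in the training set, as described in the abstract.
    j = int(np.argmin(((X - x) ** 2).sum(axis=-1)))
    return models[owner[j]](x)

if __name__ == "__main__":
    # toy usage with a 1-NN stub standing in for the Twin SVM solver
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(int)
    stub = lambda Xs, ys: (lambda q: ys[np.argmin(((Xs - q) ** 2).sum(axis=-1))])
    models, owner = fit_fast_localized(X, y, centers_idx=np.arange(0, 200, 10),
                                       k=30, train_twin_svm=stub)
    print(predict_fast_localized(np.array([0.5, -0.2]), X, models, owner))
```

Because the number of Twin SVMs trained equals the number of centers rather than the number of query points, the training cost drops roughly in proportion to how aggressively the centers subsample the training set, which is consistent with the "fast" claim for large data sets.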