Fast Localized Twin SVM
Yanan Wang, Ying-jie Tian
2012 8th International Conference on Natural Computation, 2012-05-29
DOI: 10.1109/ICNC.2012.6234527
Citations: 2
Abstract
Twin Support Vector Machine (Twin SVM), a binary classifier that extends standard SVMs, was first proposed by Jayadeva in 2007. It has attracted wide attention in academic circles for its lower computational cost and better generalization ability, and has gradually become a research priority. A simple geometric interpretation of Twin SVM is that each of its two hyperplanes is closest to the points of its own class and as far as possible from the points of the other class. The method defines these two nonparallel hyperplanes by solving two related SVM-type problems. Localized Twin SVM is a classification approach based on Twin SVM that exploits local information, and experiments have shown it to outperform conventional Twin SVM. However, its computational cost is so high that it has seen little practical application. In this paper we propose Fast Localized Twin SVM, a classifier designed for large data sets, in which the number of Twin SVMs to be trained is reduced. In Fast Localized Twin SVM, we first use the training set to compute a set of Localized Twin SVMs, then assign to each local model all the points lying in the central neighborhood of its k training points. A query point is then predicted by the local model associated with its nearest neighbor in the training set. Empirical experiments show that our approach not only maintains high generalization ability but also greatly reduces the computational cost, especially on large-scale data sets.
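The dispatch structure the abstract describes can be sketched as follows. This is a minimal NumPy illustration of the idea of training a reduced set of local models and routing each query through its nearest training point, not the paper's implementation: the evenly spaced choice of centers, the parameters `n_centers` and `k`, and the per-class-centroid local classifier (which merely stands in for the local Twin SVM, whose training would solve two SVM-type problems per neighborhood) are all assumptions made for illustration.

```python
import numpy as np

def fit_local_models(X, y, n_centers=4, k=10):
    """Sketch of the localized scheme: a reduced set of local models.

    Centers are taken as an evenly spaced subsample of the training set
    (an assumption; the paper's reduction scheme may differ). Each center
    gets a local model trained on its k nearest training points, and every
    training point is routed to its nearest center's model.
    """
    step = max(1, len(X) // n_centers)
    centers = X[np.arange(0, len(X), step)[:n_centers]]
    models = []
    for c in centers:
        # the k nearest training points form this center's neighborhood
        nbr = np.argsort(np.linalg.norm(X - c, axis=1))[:k]
        Xn, yn = X[nbr], y[nbr]
        # Placeholder local classifier: per-class centroids. A real
        # implementation would instead train a Twin SVM on (Xn, yn) here,
        # solving its two SVM-type problems.
        models.append({cls: Xn[yn == cls].mean(axis=0) for cls in np.unique(yn)})
    # route each training point to its nearest center's local model
    assign = np.argmin(
        np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
    return models, assign

def predict(x, X, models, assign):
    """Predict a query via the local model of its nearest training point."""
    nn = np.argmin(np.linalg.norm(X - x, axis=1))
    mu = models[assign[nn]]
    classes = list(mu)
    dists = [np.linalg.norm(x - mu[c]) for c in classes]
    return classes[int(np.argmin(dists))]
```

The key cost saving mirrors the abstract: only `n_centers` local models are trained instead of one per training point, and prediction needs only a nearest-neighbor lookup followed by one local evaluation.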