Fast Localized Twin SVM
Yanan Wang, Ying-jie Tian
DOI: 10.1109/ICNC.2012.6234527
2012 8th International Conference on Natural Computation, 2012-05-29
Citations: 2
Abstract
Twin Support Vector Machine (Twin SVM), a binary classifier that extends standard SVMs, was first proposed by Jayadeva in 2007. It has attracted wide attention in academic circles for its lower computational cost and better generalization ability, and has gradually become a new research priority. A simple geometric interpretation of Twin SVM is that each of its two hyperplanes lies as close as possible to the points of its own class and as far as possible from the points of the other class; the two nonparallel hyperplanes are obtained by solving two related SVM-type problems. Localized Twin SVM is a classification approach, based on Twin SVM, that exploits local information, and experiments have shown it to outperform conventional Twin SVM. However, its computational cost is so high that it has seen little practical application. In this paper we propose Fast Localized Twin SVM, a classifier designed for large data sets, in which the number of Twin SVMs to be trained is greatly reduced. In Fast Localized Twin SVM, we first use the training set to compute a set of Localized Twin SVMs, then assign to each local model all the points lying in the central neighborhood of its k training points. A query point is then predicted by the local model associated with its nearest neighbor in the training set. Empirical experiments show that our approach not only maintains high generalization ability but also greatly reduces computational cost, especially on large-scale data sets.
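The train-then-route workflow the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: a simple nearest-centroid rule stands in for each local Twin SVM (which would normally solve two SVM-type quadratic programs), and all function names (`fit_local_model`, `fit_fast_localized`, `predict`) and the choice of center points are assumptions made for the sketch. The point is the localization logic: fit one local model per chosen center on that center's k nearest training points, then route each query to the model whose center is nearest.

```python
# Hypothetical sketch of the Fast Localized Twin SVM workflow (stdlib only).
# A nearest-centroid rule stands in for each local Twin SVM solver.
import math

def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit_local_model(points, labels):
    """Stand-in for training one localized Twin SVM: per-class centroids."""
    centroids = {}
    for lab in set(labels):
        members = [p for p, l in zip(points, labels) if l == lab]
        centroids[lab] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids

def predict_local(model, x):
    """Predict with one local model: nearest class centroid."""
    return min(model, key=lambda lab: dist(model[lab], x))

def fit_fast_localized(X, y, centers_idx, k):
    """For each chosen center, train a local model on its k nearest neighbors."""
    models = []
    for ci in centers_idx:
        neigh = sorted(range(len(X)), key=lambda i: dist(X[i], X[ci]))[:k]
        models.append((X[ci], fit_local_model([X[i] for i in neigh],
                                              [y[i] for i in neigh])))
    return models

def predict(models, x):
    """Route the query point to the local model whose center is nearest."""
    _center, model = min(models, key=lambda cm: dist(cm[0], x))
    return predict_local(model, x)
```

Usage on a toy two-cluster set: `fit_fast_localized(X, y, centers_idx=[0, 3], k=3)` builds just two local models instead of one per training point, which is the source of the claimed speedup.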