Data Selection Using Decision Tree for SVM Classification

A. L. Chau, L. López-García, Jair Cervantes, Xiaoou Li, Wen Yu
{"title":"基于决策树的支持向量机分类数据选择","authors":"A. L. Chau, L. López-García, Jair Cervantes, Xiaoou Li, Wen Yu","doi":"10.1109/ICTAI.2012.105","DOIUrl":null,"url":null,"abstract":"Support Vector Machine (SVM) is an important classification method used in a many areas. The training of SVM is almost O(n^{2}) in time and space. Some methods to reduce the training complexity have been proposed in last years. Data selection methods for SVM select most important examples from training data sets to improve its training time. This paper introduces a novel data reduction method that works detecting clusters and then selects some examples from them. Different from other state of the art algorithms, the novel method uses a decision tree to form partitions that are treated as clusters, and then executes a guided random selection of examples. The clusters discovered by a decision tree can be linearly separable, taking advantage of the Eidelheit separation theorem, it is possible to reduce the size of training sets by carefully selecting examples from training sets. The novel method was compared with LibSVM using public available data sets, experiments demonstrate an important reduction of the size of training sets whereas showing only a slight decreasing in the accuracy of classifier.","PeriodicalId":155588,"journal":{"name":"2012 IEEE 24th International Conference on Tools with Artificial Intelligence","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Data Selection Using Decision Tree for SVM Classification\",\"authors\":\"A. L. Chau, L. López-García, Jair Cervantes, Xiaoou Li, Wen Yu\",\"doi\":\"10.1109/ICTAI.2012.105\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Support Vector Machine (SVM) is an important classification method used in a many areas. The training of SVM is almost O(n^{2}) in time and space. Some methods to reduce the training complexity have been proposed in last years. Data selection methods for SVM select most important examples from training data sets to improve its training time. This paper introduces a novel data reduction method that works detecting clusters and then selects some examples from them. Different from other state of the art algorithms, the novel method uses a decision tree to form partitions that are treated as clusters, and then executes a guided random selection of examples. The clusters discovered by a decision tree can be linearly separable, taking advantage of the Eidelheit separation theorem, it is possible to reduce the size of training sets by carefully selecting examples from training sets. 
The novel method was compared with LibSVM using public available data sets, experiments demonstrate an important reduction of the size of training sets whereas showing only a slight decreasing in the accuracy of classifier.\",\"PeriodicalId\":155588,\"journal\":{\"name\":\"2012 IEEE 24th International Conference on Tools with Artificial Intelligence\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-11-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE 24th International Conference on Tools with Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTAI.2012.105\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE 24th International Conference on Tools with Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI.2012.105","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

Support Vector Machine (SVM) is an important classification method used in many areas. Training an SVM takes almost O(n^2) time and space, and several methods to reduce this training complexity have been proposed in recent years. Data selection methods for SVM pick the most important examples from the training set in order to reduce training time. This paper introduces a novel data reduction method that first detects clusters and then selects examples from them. Unlike other state-of-the-art algorithms, the method uses a decision tree to form partitions that are treated as clusters, and then performs a guided random selection of examples. Because the clusters discovered by a decision tree can be linearly separable, the Eidelheit separation theorem makes it possible to shrink the training set by carefully selecting examples from it. The method was compared with LibSVM on publicly available data sets; the experiments show a substantial reduction in training set size with only a slight decrease in classifier accuracy.
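The core idea described in the abstract, partitioning the training data with a decision tree and then sampling examples from each leaf (cluster) before training the SVM, can be illustrated with a short sketch. The code below is a minimal approximation built on scikit-learn, not the authors' exact algorithm: the paper's guided selection (which exploits the Eidelheit separation theorem) is replaced here by plain uniform random sampling within each leaf, and the `sample_frac` and `min_samples_leaf` values are illustrative assumptions.

```python
# Minimal sketch of decision-tree-guided data selection for SVM training.
# NOTE: this is an approximation, not the paper's exact method; per-leaf
# uniform sampling and the sample_frac value are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier


def tree_guided_selection(X, y, sample_frac=0.2, random_state=0):
    """Partition (X, y) with a decision tree and randomly sample
    a fraction of the examples from every leaf (treated as a cluster)."""
    rng = np.random.default_rng(random_state)
    tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=random_state)
    tree.fit(X, y)
    leaves = tree.apply(X)            # leaf index of each training example
    keep = []
    for leaf in np.unique(leaves):
        idx = np.where(leaves == leaf)[0]
        n_keep = max(1, int(sample_frac * len(idx)))
        keep.extend(rng.choice(idx, size=n_keep, replace=False))
    keep = np.asarray(keep)
    return X[keep], y[keep]


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM (libsvm-based SVC) on the full training set vs. the reduced set.
full_svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
X_red, y_red = tree_guided_selection(X_tr, y_tr, sample_frac=0.2)
reduced_svm = SVC(kernel="rbf", gamma="scale").fit(X_red, y_red)

print(f"training set size: {len(X_tr)} -> {len(X_red)}")
print(f"accuracy (full):    {full_svm.score(X_te, y_te):.3f}")
print(f"accuracy (reduced): {reduced_svm.score(X_te, y_te):.3f}")
```

The printed sizes and scores expose the trade-off the abstract reports on any data set: how much the training set shrinks versus how much test accuracy changes after selection.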