A. L. Chau, L. López-García, Jair Cervantes, Xiaoou Li, Wen Yu
2012 IEEE 24th International Conference on Tools with Artificial Intelligence
DOI: 10.1109/ICTAI.2012.105
Published: 2012-11-07 · Cited by 10
Data Selection Using Decision Tree for SVM Classification
The Support Vector Machine (SVM) is an important classification method used in many areas. Training an SVM requires almost O(n^2) time and space. Several methods for reducing this training complexity have been proposed in recent years: data selection methods for SVM pick the most important examples from the training set in order to shorten training time. This paper introduces a novel data reduction method that detects clusters and then selects examples from them. Unlike other state-of-the-art algorithms, the novel method uses a decision tree to form partitions that are treated as clusters, and then performs a guided random selection of examples from each one. Because the clusters discovered by a decision tree can be linearly separable, the Eidelheit separation theorem makes it possible to shrink the training set by carefully selecting examples from it. The novel method was compared against LibSVM on publicly available data sets; experiments demonstrate a substantial reduction in training-set size with only a slight decrease in classifier accuracy.
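The pipeline the abstract describes (partition with a decision tree, treat leaves as clusters, sample from each leaf, train the SVM on the reduced set) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not specify the guided-selection heuristic, so uniform per-leaf sampling at a hypothetical 20% rate is used as a stand-in, with scikit-learn in place of LibSVM and synthetic data in place of the public data sets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic data standing in for a large public training set.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: fit a decision tree; each leaf partition is treated as a cluster.
tree = DecisionTreeClassifier(min_samples_leaf=50, random_state=0)
tree.fit(X_tr, y_tr)
leaf_ids = tree.apply(X_tr)  # leaf index for every training example

# Step 2: guided random selection -- sample a fraction of examples per leaf
# (20% is a hypothetical rate; the paper's heuristic may differ).
keep = []
for leaf in np.unique(leaf_ids):
    idx = np.flatnonzero(leaf_ids == leaf)
    n_keep = max(1, int(0.2 * idx.size))
    keep.extend(rng.choice(idx, size=n_keep, replace=False))
keep = np.asarray(keep)

# Step 3: train the SVM only on the reduced set and compare accuracy.
acc_full = SVC().fit(X_tr, y_tr).score(X_te, y_te)
acc_reduced = SVC().fit(X_tr[keep], y_tr[keep]).score(X_te, y_te)
print(f"kept {keep.size}/{X_tr.shape[0]} examples")
print(f"full accuracy={acc_full:.3f}, reduced accuracy={acc_reduced:.3f}")
```

The appeal of the leaf-wise scheme is that each leaf is a nearly class-pure region, so a small sample per leaf preserves the geometry of the class boundaries while the overall training set (and hence the roughly quadratic SVM training cost) shrinks considerably.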