Title: A Fast Hybrid Feature Selection Method
Authors: Mohammad Ahmadi Ganjei, R. Boostani
DOI: 10.1109/ICCKE48569.2019.8964884
Venue: 2019 9th International Conference on Computer and Knowledge Engineering (ICCKE), pages 6-11, October 2019
Citation count: 2

Abstract: To cope with high-dimensional datasets, feature selection schemes of three types have been proposed: wrapper, filter, and hybrid. Hybrid methods combine the filter and wrapper approaches, trading off computational complexity against efficiency. In this paper, we propose a new hybrid feature selection method in which the filter stage ranks features according to their relevance. Instead of running the wrapper over all features, we split the ranked features into blocks and show that block size has a considerable impact on performance. Sequential forward selection (SFS) is then applied to the ranked blocks to find the most relevant features. The proposed method rapidly eliminates a large number of irrelevant features in the ranking stage; in the wrapper phase, different block sizes are evaluated and a proper block size is chosen using SFS. As a result, the method has low time complexity while still achieving good results. Because hybrid methods consist of components that each admit different criteria, we also compare and analyze these criteria. To show the effectiveness of the proposed method, state-of-the-art hybrid feature selection methods such as re-Ranking, IGIS, and IGIS+ were implemented, and their classification accuracies on well-known benchmarks were computed with the K-nearest neighbor (KNN) and decision tree classifiers. Statistical tests on the compared results support the superiority of the proposed method over its counterparts.
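The filter-then-blockwise-wrapper pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the relevance criterion (mutual information here), the fraction of low-ranked features discarded after the filter stage, and the greedy accept-if-improved rule inside each block are all assumptions filled in with scikit-learn building blocks.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def hybrid_select(X, y, block_size=5, keep_frac=0.5, cv=3):
    """Hybrid feature selection sketch: filter ranking + blockwise SFS wrapper.

    Filter stage: rank features by a relevance score (mutual information is an
    assumption) and discard the lowest-ranked fraction. Wrapper stage: walk the
    surviving ranked features block by block, greedily adding a feature only if
    it improves cross-validated KNN accuracy (sequential forward selection).
    """
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]                      # best feature first
    order = order[: max(1, int(keep_frac * len(order)))]  # drop irrelevant tail

    clf = KNeighborsClassifier(n_neighbors=5)
    selected, best_acc = [], 0.0
    for start in range(0, len(order), block_size):
        block = order[start:start + block_size]
        for f in block:  # SFS within the current block
            candidate = selected + [int(f)]
            acc = cross_val_score(clf, X[:, candidate], y, cv=cv).mean()
            if acc > best_acc:
                selected, best_acc = candidate, acc
    return selected, best_acc

X, y = load_breast_cancer(return_X_y=True)
feats, acc = hybrid_select(X, y, block_size=5)
print(f"selected {len(feats)} of {X.shape[1]} features, CV accuracy {acc:.3f}")
```

Because candidates outside the kept top-ranked fraction are never scored by the wrapper, the number of cross-validation runs is bounded by the number of surviving features rather than by the full dimensionality, which is the source of the speedup the abstract claims; the paper additionally evaluates several block sizes, which this sketch takes as a fixed parameter.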