{"title":"基于秩聚集的特征选择方法比较","authors":"Wanwan Zheng, Mingzhe Jin","doi":"10.1109/ICTKE.2018.8612429","DOIUrl":null,"url":null,"abstract":"Feature selection (FS) is becoming critical in this data era. Selecting effective features from datasets is a particularly important part in text classification, data mining, pattern recognition and artificial intelligence. FS excludes irrelevant features from the classification task, reduces the dimensionality of a dataset, allows us to better understand data, improves the performance of machine learning techniques, and minimizes the computation requirement. Thus far, a large number of FS methods have been proposed, however the most effective one in practice remains unclear. Though it is conceivable that different categories of FS methods have different evaluation criteria for variables, there are few studies fixating on evaluating various categories of FS methods. This article gathers ten superior FS methods under four different categories, and fixates on evaluating and comparing them in general versatility (constant ability to select out the useful features) regarding authorship attribution problems. Besides, this article tries to identify which method is most effective. SVM (support vector machine) serves as the classifier. Different categories of features, different numbers of top variables in feature rankings, and different performance measures are employed to measure the effectiveness and general versatility of these methods together. Finally, rank aggregation method Schulze (SSD) is employed to make a ranking of the ten FS methods. 
The analysis results suggest that Mahalanobis distance is the best method on the whole.","PeriodicalId":342802,"journal":{"name":"2018 16th International Conference on ICT and Knowledge Engineering (ICT&KE)","volume":"126 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Comparing Feature Selection Methods by Using Rank Aggregation\",\"authors\":\"Wanwan Zheng, Mingzhe Jin\",\"doi\":\"10.1109/ICTKE.2018.8612429\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Feature selection (FS) is becoming critical in this data era. Selecting effective features from datasets is a particularly important part in text classification, data mining, pattern recognition and artificial intelligence. FS excludes irrelevant features from the classification task, reduces the dimensionality of a dataset, allows us to better understand data, improves the performance of machine learning techniques, and minimizes the computation requirement. Thus far, a large number of FS methods have been proposed, however the most effective one in practice remains unclear. Though it is conceivable that different categories of FS methods have different evaluation criteria for variables, there are few studies fixating on evaluating various categories of FS methods. This article gathers ten superior FS methods under four different categories, and fixates on evaluating and comparing them in general versatility (constant ability to select out the useful features) regarding authorship attribution problems. Besides, this article tries to identify which method is most effective. SVM (support vector machine) serves as the classifier. Different categories of features, different numbers of top variables in feature rankings, and different performance measures are employed to measure the effectiveness and general versatility of these methods together. 
Finally, rank aggregation method Schulze (SSD) is employed to make a ranking of the ten FS methods. The analysis results suggest that Mahalanobis distance is the best method on the whole.\",\"PeriodicalId\":342802,\"journal\":{\"name\":\"2018 16th International Conference on ICT and Knowledge Engineering (ICT&KE)\",\"volume\":\"126 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 16th International Conference on ICT and Knowledge Engineering (ICT&KE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTKE.2018.8612429\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 16th International Conference on ICT and Knowledge Engineering (ICT&KE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTKE.2018.8612429","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Comparing Feature Selection Methods by Using Rank Aggregation
Feature selection (FS) is becoming critical in this data era. Selecting effective features from datasets is a particularly important task in text classification, data mining, pattern recognition and artificial intelligence. FS excludes irrelevant features from the classification task, reduces the dimensionality of a dataset, allows us to better understand data, improves the performance of machine learning techniques, and minimizes the computational requirements. A large number of FS methods have been proposed thus far; however, which is most effective in practice remains unclear. Although it is conceivable that different categories of FS methods apply different evaluation criteria to variables, few studies have focused on evaluating FS methods across categories. This article gathers ten superior FS methods from four different categories and focuses on evaluating and comparing their general versatility (the consistent ability to select useful features) on authorship attribution problems. In addition, this article attempts to identify which method is most effective. A support vector machine (SVM) serves as the classifier. Different categories of features, different numbers of top variables in the feature rankings, and different performance measures are employed together to assess the effectiveness and general versatility of these methods. Finally, the Schulze rank aggregation method (SSD) is employed to produce an overall ranking of the ten FS methods. The analysis results suggest that Mahalanobis distance is the best method on the whole.
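The evaluation protocol described above (rank features, keep the top k, classify with an SVM) can be sketched generically. This is not the paper's implementation: the dataset below is a synthetic stand-in for an authorship-attribution feature matrix, and the univariate F-score is used as a placeholder ranking criterion rather than any of the ten methods the paper compares.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a documents-by-features matrix with class labels.
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=10, random_state=0)

# For each cutoff k, keep the k top-ranked features and measure
# cross-validated SVM accuracy, mirroring the paper's use of different
# numbers of top variables in the feature rankings.
scores = {}
for k in (5, 20, 50):
    X_k = SelectKBest(f_classif, k=k).fit_transform(X, y)
    scores[k] = cross_val_score(SVC(kernel="linear"), X_k, y, cv=5).mean()
```

Repeating this loop once per FS method and per feature category yields the per-setting performance figures that the rank aggregation step then combines.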
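The final aggregation step uses the Schulze method: each evaluation setting produces a ranking ("ballot") of the FS methods, and pairwise preference strengths are combined via strongest paths. A minimal sketch follows; it is an illustration of the Schulze method itself, not the authors' code, and the method names in the example are placeholders.

```python
def schulze_ranking(ballots, candidates):
    """Aggregate full rankings (ballots) of candidates with the Schulze method."""
    # d[(a, b)]: number of ballots that rank a above b.
    d = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for a in candidates:
            for b in candidates:
                if a != b and pos[a] < pos[b]:
                    d[(a, b)] += 1
    # p[(a, b)]: strength of the strongest path from a to b
    # (initialized from direct pairwise wins only).
    p = {(a, b): (d[(a, b)] if d[(a, b)] > d[(b, a)] else 0)
         for a in candidates for b in candidates if a != b}
    # Floyd-Warshall-style widening over intermediate candidates.
    for i in candidates:
        for j in candidates:
            if j == i:
                continue
            for k in candidates:
                if k in (i, j):
                    continue
                p[(j, k)] = max(p[(j, k)], min(p[(j, i)], p[(i, k)]))
    # Order candidates by how many rivals they beat via strongest paths.
    wins = {a: sum(1 for b in candidates
                   if a != b and p[(a, b)] > p[(b, a)])
            for a in candidates}
    return sorted(candidates, key=lambda c: -wins[c])

# Hypothetical ballots over three placeholder FS methods:
ballots = [["MD", "IG", "chi2"],
           ["MD", "chi2", "IG"],
           ["IG", "MD", "chi2"]]
overall = schulze_ranking(ballots, ["MD", "IG", "chi2"])  # → ["MD", "IG", "chi2"]
```

In the paper's setting, each ballot would be the ranking of the ten FS methods under one combination of feature category, top-k cutoff, and performance measure, and the aggregated order identifies the overall winner.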