Feature selection using Markov clustering and maximum spanning tree in high dimensional data

Neha Bisht, Annappa Basava
DOI: 10.1109/IC3.2016.7880208
Published in: 2016 Ninth International Conference on Contemporary Computing (IC3), August 2016
Citations: 0

Abstract

Feature selection is the most important preprocessing step in the classification of high-dimensional data. By selecting only the salient features of a data set for learning, it reduces the computational cost and prediction time of the classification algorithm. The main challenges in applying feature selection to high-dimensional data (HDD) are handling the relevance of features to the class, the redundancy among features, and the correlation between features. The proposed algorithm overcomes these issues in three main steps and adopts a filtering strategy for its effectiveness on data sets of large size and high dimensionality. First, to measure the relevance of features with respect to the class, a Fisher score is calculated for each feature independently. Next, only the relevant features are passed to a Markov clustering algorithm to check for redundancy among features. Finally, the correlation between features is calculated using a maximum spanning tree, and the most appropriate features are filtered out. The classification accuracy of the presented approach is validated using the C4.5, IB1, and Naive Bayes classifiers. The proposed algorithm gives higher classification accuracy than the same three classifiers achieve on data sets containing only the features extracted by the Fisher score method and on full-featured data sets containing all features.
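The first step, scoring each feature's relevance to the class, can be sketched as follows. This is a standard Fisher score (between-class scatter over within-class scatter, computed per feature); the function name and the small smoothing constant are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: between-class variance of class means
    divided by the pooled within-class variance (illustrative sketch)."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        n_c = Xc.shape[0]
        between += n_c * (Xc.mean(axis=0) - overall_mean) ** 2
        within += n_c * Xc.var(axis=0)
    # small constant guards against constant (zero-variance) features
    return between / (within + 1e-12)
```

Features whose score falls below a chosen threshold would be discarded as irrelevant before the clustering step.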
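The redundancy step groups mutually similar features with Markov clustering (MCL). A minimal dense-matrix MCL over a feature-similarity matrix might look like the sketch below; the expansion/inflation parameters and the support threshold are illustrative assumptions, since the abstract does not give the paper's settings. Keeping one representative feature per cluster then removes redundancy.

```python
import numpy as np

def mcl(S, expansion=2, inflation=2.0, iters=100, tol=1e-6):
    """Basic Markov clustering on a symmetric similarity matrix S.
    Alternates expansion (matrix power) with inflation (elementwise
    power + renormalization) until the flow matrix stabilizes."""
    M = S / S.sum(axis=0, keepdims=True)        # column-stochastic
    for _ in range(iters):
        M_new = np.linalg.matrix_power(M, expansion)   # expansion
        M_new = M_new ** inflation                      # inflation
        M_new /= M_new.sum(axis=0, keepdims=True)
        if np.abs(M_new - M).max() < tol:
            M = M_new
            break
        M = M_new
    # attractor rows with nonzero support define the clusters
    clusters = set()
    for i in range(M.shape[0]):
        members = np.where(M[i] > 0.01)[0]
        if members.size:
            clusters.add(frozenset(int(j) for j in members))
    return clusters
```

Here `S` could be, for instance, the absolute pairwise correlation between the features that survived the relevance filter.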
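For the final step, a maximum spanning tree over the feature-correlation graph can be built with Prim's algorithm by always taking the heaviest edge, as sketched below. How the tree is then pruned to filter the "most appropriate" features is not specified in the abstract; one plausible rule (an assumption, not the paper's) is to drop, for each high-weight tree edge, the endpoint with the lower Fisher score.

```python
import numpy as np

def maximum_spanning_tree(W):
    """Prim's algorithm on a dense, symmetric weight matrix W
    (e.g. absolute feature correlations); returns (u, v, weight)
    edges of the maximum spanning tree."""
    n = W.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    edges = []
    for _ in range(n - 1):
        best = (-np.inf, -1, -1)
        # pick the heaviest edge crossing the cut (tree vs. non-tree)
        for u in np.where(in_tree)[0]:
            for v in np.where(~in_tree)[0]:
                if W[u, v] > best[0]:
                    best = (W[u, v], u, v)
        w, u, v = best
        edges.append((int(u), int(v), float(w)))
        in_tree[v] = True
    return edges
```

Equivalently, a maximum spanning tree can be obtained from any minimum-spanning-tree routine by negating the weights.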