An Empirical Evaluation of Feature Selection Stability and Classification Accuracy

Impact Factor: 1.0 | JCR Quartile: Q3 (Multidisciplinary Sciences)
Mustafa Büyükkeçeci, M. C. Okur
{"title":"特征选择稳定性和分类准确性的经验评估","authors":"Mustafa Büyükkeçeci̇, M. C. Okur","doi":"10.35378/gujs.998964","DOIUrl":null,"url":null,"abstract":"The performance of inductive learners can be negatively affected by high-dimensional datasets. To address this issue, feature selection methods are used. Selecting relevant features and reducing data dimensions is essential for having accurate machine learning models. Stability is an important criterion in feature selection. Stable feature selection algorithms maintain their feature preferences even when small variations exist in the training set. Studies have emphasized the importance of stable feature selection, particularly in cases where the number of samples is small and the dimensionality is high. In this study, we evaluated the relationship between stability measures, as well as, feature selection stability and classification accuracy, using the Pearson Product-Moment Correlation Coefficient. We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset and classification performance evaluation, and eight real-world datasets taken from two different data repositories. We measured the stability of feature selection methods using a total of twelve stability metrics. Based on the results of correlation analyses, we have found that there is a lack of substantial evidence supporting a linear relationship between feature selection stability and classification accuracy. However, a strong positive correlation has been observed among several stability metrics.","PeriodicalId":12615,"journal":{"name":"gazi university journal of science","volume":"8 1","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Empirical Evaluation of Feature Selection Stability and Classification Accuracy\",\"authors\":\"Mustafa Büyükkeçeci̇, M. C. Okur\",\"doi\":\"10.35378/gujs.998964\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The performance of inductive learners can be negatively affected by high-dimensional datasets. To address this issue, feature selection methods are used. Selecting relevant features and reducing data dimensions is essential for having accurate machine learning models. Stability is an important criterion in feature selection. Stable feature selection algorithms maintain their feature preferences even when small variations exist in the training set. Studies have emphasized the importance of stable feature selection, particularly in cases where the number of samples is small and the dimensionality is high. In this study, we evaluated the relationship between stability measures, as well as, feature selection stability and classification accuracy, using the Pearson Product-Moment Correlation Coefficient. We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset and classification performance evaluation, and eight real-world datasets taken from two different data repositories. We measured the stability of feature selection methods using a total of twelve stability metrics. Based on the results of correlation analyses, we have found that there is a lack of substantial evidence supporting a linear relationship between feature selection stability and classification accuracy. 
However, a strong positive correlation has been observed among several stability metrics.\",\"PeriodicalId\":12615,\"journal\":{\"name\":\"gazi university journal of science\",\"volume\":\"8 1\",\"pages\":\"\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2023-08-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"gazi university journal of science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.35378/gujs.998964\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"gazi university journal of science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.35378/gujs.998964","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
引用次数: 0

Abstract

The performance of inductive learners can be negatively affected by high-dimensional datasets. To address this issue, feature selection methods are used. Selecting relevant features and reducing data dimensions is essential for building accurate machine learning models. Stability is an important criterion in feature selection: stable feature selection algorithms maintain their feature preferences even when small variations exist in the training set. Studies have emphasized the importance of stable feature selection, particularly in cases where the number of samples is small and the dimensionality is high. In this study, we evaluated the relationships among stability measures, as well as between feature selection stability and classification accuracy, using the Pearson product-moment correlation coefficient. We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset and classification performance evaluation, and eight real-world datasets taken from two different data repositories. We measured the stability of the feature selection methods using a total of twelve stability metrics. Based on the results of the correlation analyses, we found a lack of substantial evidence supporting a linear relationship between feature selection stability and classification accuracy. However, a strong positive correlation was observed among several stability metrics.
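
The abstract does not spell out the experimental protocol, but the core idea can be illustrated with a small sketch. In the code below (an illustrative example under stated assumptions, not the authors' method), stability is measured as the average pairwise Jaccard similarity of the feature subsets selected on bootstrap resamples of the training data, mean test accuracy is recorded per configuration, and the Pearson product-moment correlation between stability and accuracy is computed across configurations. The dataset (scikit-learn's breast cancer data), the filter method (ANOVA F-score via SelectKBest), the classifier (logistic regression), and the subset sizes are all hypothetical placeholders; the paper itself uses five filter and two wrapper methods, three classifiers, eight datasets, and twelve stability metrics.

```python
# Minimal sketch: subset-based stability (average pairwise Jaccard similarity)
# and its Pearson correlation with classification accuracy.
# All concrete choices below (dataset, filter, classifier, subset sizes) are
# illustrative assumptions, not those of the paper.
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.utils import resample


def jaccard_stability(subsets):
    """Average pairwise Jaccard similarity of the selected feature subsets."""
    return np.mean([len(a & b) / len(a | b) for a, b in combinations(subsets, 2)])


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stabilities, accuracies = [], []
for k in (5, 10, 15, 20):           # each subset size is one "configuration"
    subsets, accs = [], []
    for b in range(20):             # 20 bootstrap resamples of the training set
        Xb, yb = resample(X_tr, y_tr, random_state=b)
        sel = SelectKBest(f_classif, k=k).fit(Xb, yb)
        idx = np.flatnonzero(sel.get_support())   # indices of selected features
        subsets.append(set(idx))
        clf = LogisticRegression(max_iter=5000).fit(Xb[:, idx], yb)
        accs.append(clf.score(X_te[:, idx], y_te))
    stabilities.append(jaccard_stability(subsets))
    accuracies.append(np.mean(accs))

r, p = pearsonr(stabilities, accuracies)
print(f"Pearson r between stability and mean accuracy: {r:.3f} (p = {p:.3f})")
```

The Jaccard measure stands in for any of the twelve subset-similarity metrics mentioned in the abstract (e.g., Kuncheva-style indices could be substituted); the correlation step afterwards is unchanged.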
Source journal: Gazi University Journal of Science
CiteScore: 1.60
Self-citation rate: 11.10%
Articles published: 87
Journal description: The scope of the Gazi University Journal of Science comprises original research on all aspects of basic science, engineering and technology. Original research results, scientific reviews and short communication notes in various fields of science and technology are considered for publication. The publication language of the journal is English. Manuscripts previously published in another journal are not accepted. Manuscripts with a suitable balance of practice and theory are preferred. A review article is expected to give in-depth information and a satisfying evaluation of a specific scientific or technological subject, supported by an extensive list of sources. Short communication notes prepared by researchers who would like to share the first outcomes of their ongoing, original research work are welcome.