A novel multi-label feature selection method based on conditional entropy and its acceleration mechanism

IF 3.2 | CAS Region 3, Computer Science | Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Chengwei Liao, Bin Yang
DOI: 10.1016/j.ijar.2025.109469
Journal: International Journal of Approximate Reasoning, Volume 185, Article 109469
Published: 2025-05-13
Full text: https://www.sciencedirect.com/science/article/pii/S0888613X25001100
Citations: 0

Abstract

In multi-label learning, feature selection is a crucial step for enhancing model performance and reducing computational complexity. However, due to the interdependence among labels and the high dimensionality of feature sets, traditional single-label feature selection methods often underperform in multi-label scenarios. Moreover, many existing feature selection methods typically require a comprehensive evaluation of all features and samples in each iteration, resulting in high computational complexity. To address this issue, this paper proposes a feature selection algorithm based on fuzzy conditional entropy within the framework of fuzzy rough set theory. The method gradually identifies optimal features through iterative optimization and systematically filters out features and samples that do not contribute to the current feature subset. Specifically, the filtered features and samples are incorporated into redundant feature and sample sets, thereby dynamically excluding these redundant elements in subsequent iterations and avoiding unnecessary computations. Experiments conducted on 10 multi-label datasets demonstrate that the proposed algorithm outperforms eight other methods in terms of performance.
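The two mechanisms the abstract describes — greedy forward selection driven by a fuzzy-conditional-entropy score, and dynamic exclusion of non-contributing features and samples from later iterations — can be sketched as follows. This is a minimal hypothetical illustration, not the authors' implementation: the similarity relation, the entropy definition, the `eps` improvement threshold, and the 0.999 consistency cutoff are all placeholder assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_similarity(Xs):
    """Fuzzy similarity between samples over the given feature columns:
    1 minus the mean absolute difference (values assumed scaled to [0, 1])."""
    diff = np.abs(Xs[:, None, :] - Xs[None, :, :])  # (m, m, p)
    return 1.0 - diff.mean(axis=2)

def fce(X, Y, feats, samples):
    """A fuzzy-conditional-entropy-style score H(Y | feats) on `samples`;
    lower means the feature subset discerns the labels better."""
    R = fuzzy_similarity(X[np.ix_(samples, feats)])      # feature relation
    Ys = Y[samples]
    L = 1.0 - np.abs(Ys[:, None, :] - Ys[None, :, :]).mean(axis=2)  # label agreement
    card_R = R.sum(axis=1)                               # fuzzy neighborhood size
    card_RL = np.minimum(R, L).sum(axis=1)               # label-consistent part
    return float(-np.mean(np.log2(card_RL / card_R)))    # ratio in (0, 1]

def select(X, Y, k, eps=1e-3):
    n, d = X.shape
    selected, red_feats = [], set()   # red_feats: features pruned for good
    active = np.arange(n)             # samples still evaluated each iteration
    prev_h = np.inf
    while len(selected) < k and len(selected) + len(red_feats) < d:
        scores = {f: fce(X, Y, selected + [f], active)
                  for f in range(d) if f not in selected and f not in red_feats}
        best = min(scores, key=scores.get)
        if selected and prev_h - scores[best] <= eps:
            break                     # no candidate improves the subset
        # features that failed to reduce entropy join the redundant set
        for f, h in scores.items():
            if f != best and prev_h - h <= eps:
                red_feats.add(f)
        selected.append(best)
        prev_h = scores[best]
        # samples whose fuzzy neighborhood is already label-consistent are dropped
        R = fuzzy_similarity(X[np.ix_(active, selected)])
        Ys = Y[active]
        L = 1.0 - np.abs(Ys[:, None, :] - Ys[None, :, :]).mean(axis=2)
        consistent = (np.minimum(R, L).sum(axis=1) / R.sum(axis=1)) > 0.999
        if consistent.sum() < len(active):   # keep at least one active sample
            active = active[~consistent]
    return selected

# Synthetic demo: labels derived from features 0 and 1.
X = rng.random((40, 8))
Y = np.stack([X[:, 0] > 0.5, X[:, 1] > 0.5, X[:, 0] + X[:, 1] > 1.0], axis=1).astype(float)
sel = select(X, Y, k=3)
print(sel)
```

Both exclusion sets only ever grow, so each iteration scores fewer candidate features over fewer samples, which is the spirit of the paper's acceleration mechanism.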
Source journal
International Journal of Approximate Reasoning (Engineering/Technology — Computer Science: Artificial Intelligence)
CiteScore: 6.90
Self-citation rate: 12.80%
Articles published: 170
Review time: 67 days
Journal description: The International Journal of Approximate Reasoning is intended to serve as a forum for the treatment of imprecision and uncertainty in Artificial and Computational Intelligence, covering both the foundations of uncertainty theories, and the design of intelligent systems for scientific and engineering applications. It publishes high-quality research papers describing theoretical developments or innovative applications, as well as review articles on topics of general interest. Relevant topics include, but are not limited to, probabilistic reasoning and Bayesian networks, imprecise probabilities, random sets, belief functions (Dempster-Shafer theory), possibility theory, fuzzy sets, rough sets, decision theory, non-additive measures and integrals, qualitative reasoning about uncertainty, comparative probability orderings, game-theoretic probability, default reasoning, nonstandard logics, argumentation systems, inconsistency tolerant reasoning, elicitation techniques, philosophical foundations and psychological models of uncertain reasoning. Domains of application for uncertain reasoning systems include risk analysis and assessment, information retrieval and database design, information fusion, machine learning, data and web mining, computer vision, image and signal processing, intelligent data analysis, statistics, multi-agent systems, etc.