A novel multi-label feature selection method based on conditional entropy and its acceleration mechanism

Authors: Chengwei Liao, Bin Yang
DOI: 10.1016/j.ijar.2025.109469
Journal: International Journal of Approximate Reasoning, Volume 185, Article 109469
Published: 2025-05-13
Full text: https://www.sciencedirect.com/science/article/pii/S0888613X25001100

Abstract: In multi-label learning, feature selection is a crucial step toward enhancing model performance and reducing computational complexity. However, because of the interdependence among labels and the high dimensionality of feature sets, traditional single-label feature selection methods often underperform in multi-label scenarios. Moreover, many existing feature selection methods require a comprehensive evaluation of all features and samples in every iteration, which results in high computational cost. To address this, the paper proposes a feature selection algorithm based on fuzzy conditional entropy within the framework of fuzzy rough set theory. The method identifies informative features through iterative optimization and systematically filters out features and samples that do not contribute to the current feature subset. Specifically, the filtered features and samples are placed into redundant feature and sample sets, so that these redundant elements are dynamically excluded from subsequent iterations, avoiding unnecessary computation. Experiments on 10 multi-label datasets demonstrate that the proposed algorithm outperforms eight competing methods.
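The acceleration mechanism described in the abstract — maintaining a redundant set so that features already ruled out are skipped in later iterations — can be sketched as a greedy forward selection. The sketch below is a minimal illustration, not the authors' implementation: it uses classical (crisp) conditional entropy on discrete data as a stand-in for the paper's fuzzy conditional entropy, implements only the feature-side pruning (not the sample-side pruning), and all function names and the stopping rule are assumptions.

```python
import numpy as np

def conditional_entropy(label_col, feature_col):
    """H(label | feature) for discrete columns — a crisp stand-in
    for the paper's fuzzy conditional entropy."""
    n = len(feature_col)
    h = 0.0
    for v in np.unique(feature_col):
        mask = feature_col == v
        p_v = mask.sum() / n
        sub = label_col[mask]
        for c in np.unique(sub):
            p_c = (sub == c).sum() / mask.sum()
            h -= p_v * p_c * np.log2(p_c)
    return h

def accelerated_forward_selection(X, y, eps=1e-6):
    """Greedy forward selection with a redundant-feature set:
    features that fail to improve the criterion given the current
    subset are excluded from all subsequent iterations."""
    n_features = X.shape[1]
    selected, redundant = [], set()
    # entropy of y alone (conditioning on a constant column)
    current = conditional_entropy(y, np.zeros(len(y), dtype=int))
    improved = True
    while improved:
        improved = False
        best_f, best_h = None, current
        for f in range(n_features):
            if f in selected or f in redundant:
                continue  # acceleration: skip features already ruled out
            # encode the joint values of selected ∪ {f} as integer codes
            cols = X[:, selected + [f]]
            codes = np.unique(cols, axis=0, return_inverse=True)[1].ravel()
            h = conditional_entropy(y, codes)
            if h < best_h - eps:
                best_f, best_h = f, h
            elif h >= current - eps:
                redundant.add(f)  # no contribution given current subset
        if best_f is not None:
            selected.append(best_f)
            current = best_h
            improved = True
    return selected
```

On a toy dataset where the labels are determined by the first feature, the second feature is marked redundant in the first pass and never re-evaluated — which is the source of the computational savings the abstract describes.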
Journal overview:
The International Journal of Approximate Reasoning is intended to serve as a forum for the treatment of imprecision and uncertainty in Artificial and Computational Intelligence, covering both the foundations of uncertainty theories, and the design of intelligent systems for scientific and engineering applications. It publishes high-quality research papers describing theoretical developments or innovative applications, as well as review articles on topics of general interest.
Relevant topics include, but are not limited to, probabilistic reasoning and Bayesian networks, imprecise probabilities, random sets, belief functions (Dempster-Shafer theory), possibility theory, fuzzy sets, rough sets, decision theory, non-additive measures and integrals, qualitative reasoning about uncertainty, comparative probability orderings, game-theoretic probability, default reasoning, nonstandard logics, argumentation systems, inconsistency tolerant reasoning, elicitation techniques, philosophical foundations and psychological models of uncertain reasoning.
Domains of application for uncertain reasoning systems include risk analysis and assessment, information retrieval and database design, information fusion, machine learning, data and web mining, computer vision, image and signal processing, intelligent data analysis, statistics, multi-agent systems, etc.