Title: Improving the local diagnostic explanations of diabetes mellitus with the ensemble of label noise filters
Authors: Che Xu, Peng Zhu, Jiacun Wang, Giancarlo Fortino
Journal: Information Fusion (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 14.7)
Publication date: 2025-01-02 (Journal Article)
DOI: https://doi.org/10.1016/j.inffus.2025.102928
Citations: 0
Abstract
In the era of big data, accurately diagnosing diabetes mellitus (DM) often requires fusing diverse types of information, and machine learning has emerged as a prevalent approach to achieve this. Despite its potential, clinical acceptance remains limited, primarily due to the lack of explainability in diagnostic predictions. The emergence of explainable artificial intelligence (XAI) offers a promising solution, yet both explainable and non-explainable models rely heavily on noise-free datasets. Label noise filters (LNFs) have been designed to enhance dataset quality by identifying and removing mislabeled samples, which can improve the predictive performance of diagnostic models. However, the impact of label noise on diagnostic explanations remains unexplored. To address this issue, this paper proposes an ensemble framework for LNFs that fuses information from different LNFs through three phases. In the first phase, a diverse pool of LNFs is generated. In the second phase, the widely used LIME (Local Interpretable Model-Agnostic Explanations) technique is employed to provide local explainability for diagnostic predictions made by black-box models. In the third phase, four ensemble strategies are designed to generate the final local diagnostic explanations for DM patients. The theoretical advantage of the ensemble is also demonstrated. The proposed framework is comprehensively evaluated on four DM datasets to assess its ability to mitigate the adverse impact of label noise on diagnostic explanations, compared to 24 baseline LNFs. Experimental results demonstrate that individual LNFs fail to consistently ensure the quality of diagnostic explanations, whereas the LNF ensemble based on local explanations provides a feasible solution to this challenge.
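To make the ensemble idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: each individual filter flags a sample as mislabeled when a leave-one-out k-NN classifier disagrees with its recorded label, and the ensemble keeps a flag only when a majority of filters agree. The choice of k-NN filters with k = 1, 3, 5 and the majority-vote rule are assumptions for illustration; the paper's framework instead combines 24 baseline LNFs through four ensemble strategies driven by local explanations.

```python
# Illustrative majority-vote ensemble of label noise filters.
# NOT the paper's method: the k-NN filters and vote rule are assumptions.
from collections import Counter


def knn_filter(X, y, k):
    """Flag indices whose leave-one-out k-NN prediction differs from y."""
    flagged = set()
    for i in range(len(X)):
        # squared Euclidean distances to every other sample
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(X[i], X[j])), j)
            for j in range(len(X)) if j != i
        )
        neighbour_labels = [y[j] for _, j in dists[:k]]
        prediction = Counter(neighbour_labels).most_common(1)[0][0]
        if prediction != y[i]:
            flagged.add(i)
    return flagged


def ensemble_filter(X, y, ks=(1, 3, 5)):
    """Keep a noise flag only when a majority of the filters agree."""
    votes = Counter()
    for k in ks:
        for i in knn_filter(X, y, k):
            votes[i] += 1
    return {i for i, v in votes.items() if v > len(ks) / 2}


# Two well-separated clusters; index 4 is deliberately mislabeled as 1.
X = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.05, 0.05),
     (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
y = [0, 0, 0, 0, 1, 1, 1, 1, 1]

print(ensemble_filter(X, y))  # flags only the mislabeled index: {4}
```

Note the design choice the ensemble embodies: a single filter (here, k = 1) can over-flag clean samples that happen to sit next to a noisy one, while requiring agreement across filters suppresses such spurious flags — the same motivation the abstract gives for ensembling LNFs rather than trusting any individual filter.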
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.