Shagufta Iftikhar, Nadeem Anjum, Abdul Basit Siddiqui, Masood Ur Rehman, Naeem Ramzan
{"title":"基于XAI关键特征识别的可解释CNN脑肿瘤检测与分类。","authors":"Shagufta Iftikhar, Nadeem Anjum, Abdul Basit Siddiqui, Masood Ur Rehman, Naeem Ramzan","doi":"10.1186/s40708-025-00257-y","DOIUrl":null,"url":null,"abstract":"<p><p>Despite significant advancements in brain tumor classification, many existing models suffer from complex structures that make them difficult to interpret. This complexity can hinder the transparency of the decision-making process, causing models to rely on irrelevant features or normal soft tissues. Besides, these models often include additional layers and parameters, which further complicate the classification process. Our work addresses these limitations by introducing a novel methodology that combines Explainable AI (XAI) techniques with a Convolutional Neural Network (CNN) architecture. The major contribution of this paper is ensuring that the model focuses on the most relevant features for tumor detection and classification, while simultaneously reducing complexity, by minimizing the number of layers. This approach enhances the model's transparency and robustness, giving clear insights into its decision-making process through XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-Cam), Shapley Additive explanations (Shap), and Local Interpretable Model-agnostic Explanations (LIME). Additionally, the approach demonstrates better performance, achieving 99% accuracy on seen data and 95% on unseen data, highlighting its generalizability and reliability. This balance of simplicity, interpretability, and high accuracy represents a significant advancement in the classification of brain tumor.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"12 1","pages":"10"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12044100/pdf/","citationCount":"0","resultStr":"{\"title\":\"Explainable CNN for brain tumor detection and classification through XAI based key features identification.\",\"authors\":\"Shagufta Iftikhar, Nadeem Anjum, Abdul Basit Siddiqui, Masood Ur Rehman, Naeem Ramzan\",\"doi\":\"10.1186/s40708-025-00257-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Despite significant advancements in brain tumor classification, many existing models suffer from complex structures that make them difficult to interpret. This complexity can hinder the transparency of the decision-making process, causing models to rely on irrelevant features or normal soft tissues. Besides, these models often include additional layers and parameters, which further complicate the classification process. Our work addresses these limitations by introducing a novel methodology that combines Explainable AI (XAI) techniques with a Convolutional Neural Network (CNN) architecture. The major contribution of this paper is ensuring that the model focuses on the most relevant features for tumor detection and classification, while simultaneously reducing complexity, by minimizing the number of layers. This approach enhances the model's transparency and robustness, giving clear insights into its decision-making process through XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-Cam), Shapley Additive explanations (Shap), and Local Interpretable Model-agnostic Explanations (LIME). 
Additionally, the approach demonstrates better performance, achieving 99% accuracy on seen data and 95% on unseen data, highlighting its generalizability and reliability. This balance of simplicity, interpretability, and high accuracy represents a significant advancement in the classification of brain tumor.</p>\",\"PeriodicalId\":37465,\"journal\":{\"name\":\"Brain Informatics\",\"volume\":\"12 1\",\"pages\":\"10\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12044100/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Brain Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s40708-025-00257-y\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Brain Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s40708-025-00257-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
Explainable CNN for brain tumor detection and classification through XAI based key features identification.
Despite significant advancements in brain tumor classification, many existing models suffer from complex structures that make them difficult to interpret. This complexity can hinder the transparency of the decision-making process, causing models to rely on irrelevant features or normal soft tissues. Moreover, these models often include additional layers and parameters, which further complicate the classification process. Our work addresses these limitations by introducing a novel methodology that combines Explainable AI (XAI) techniques with a Convolutional Neural Network (CNN) architecture. The major contribution of this paper is ensuring that the model focuses on the features most relevant to tumor detection and classification while simultaneously reducing complexity by minimizing the number of layers. This approach enhances the model's transparency and robustness, giving clear insights into its decision-making process through XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME). Additionally, the approach achieves strong performance, with 99% accuracy on seen data and 95% on unseen data, highlighting its generalizability and reliability. This balance of simplicity, interpretability, and high accuracy represents a significant advancement in brain tumor classification.
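The abstract names Grad-CAM, SHAP, and LIME as the techniques used to expose which image regions drive the CNN's decision. The paper's own network and code are not reproduced here, so the following is a minimal Grad-CAM sketch in PyTorch under stated assumptions: SimpleTumorCNN, its layer sizes, the single-channel 224x224 MRI input, and the four-class label set are hypothetical stand-ins for illustration, not the authors' model.

```python
# Illustrative Grad-CAM sketch (PyTorch). SimpleTumorCNN is a hypothetical
# shallow CNN echoing the paper's goal of few layers; it is NOT the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleTumorCNN(nn.Module):
    """Deliberately shallow CNN for 1-channel 224x224 MRI slices (assumed input)."""
    def __init__(self, num_classes: int = 4):  # four-class setup is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # 224 -> 112 -> 56

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def grad_cam(model: SimpleTumorCNN, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a coarse saliency map over the last conv layer for `target_class`."""
    activations, gradients = [], []

    def fwd_hook(_m, _inp, out):
        activations.append(out)

    def bwd_hook(_m, _gin, gout):
        gradients.append(gout[0])

    last_conv = model.features[3]  # second Conv2d in the Sequential above
    h1 = last_conv.register_forward_hook(fwd_hook)
    h2 = last_conv.register_full_backward_hook(bwd_hook)

    model.zero_grad()
    logits = model(image)                # image: (1, 1, 224, 224)
    logits[0, target_class].backward()   # gradient of the target-class score
    h1.remove(); h2.remove()

    acts = activations[0].detach()                       # (1, C, 56, 56)
    grads = gradients[0].detach()                        # (1, C, 56, 56)
    weights = grads.mean(dim=(2, 3), keepdim=True)       # global-average-pool the gradients
    cam = F.relu((weights * acts).sum(dim=1)).squeeze(0) # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1]

if __name__ == "__main__":
    model = SimpleTumorCNN().eval()
    mri = torch.randn(1, 1, 224, 224)  # placeholder for a preprocessed MRI slice
    heatmap = grad_cam(model, mri, target_class=0)
    print(heatmap.shape)  # (56, 56); upsample with F.interpolate to overlay on the image
```

Grad-CAM is shown because it is the most CNN-specific of the three techniques; SHAP and LIME are model-agnostic and could be applied to the same classifier via the `shap` and `lime` packages.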
Journal introduction:
Brain Informatics is an international, peer-reviewed, interdisciplinary open-access journal published under the SpringerOpen brand. It provides a unique platform for researchers and practitioners to disseminate original research on computational and informatics technologies related to the brain. The journal addresses the computational, cognitive, physiological, biological, physical, ecological, and social perspectives of brain informatics. It also welcomes emerging information technologies and advanced neuroimaging technologies, such as big data analytics and interactive knowledge discovery, related to various large-scale brain studies and their applications. The journal publishes high-quality original research papers, brief reports, and critical reviews in all theoretical, technological, clinical, and interdisciplinary studies that make up the field of brain informatics and its applications in brain-machine intelligence, brain-inspired intelligent systems, mental health and brain disorders, etc. The scope of papers includes the following five tracks:
Track 1: Cognitive and Computational Foundations of Brain Science
Track 2: Human Information Processing Systems
Track 3: Brain Big Data Analytics, Curation and Management
Track 4: Informatics Paradigms for Brain and Mental Health Research
Track 5: Brain-Machine Intelligence and Brain-Inspired Computing