{"title":"Explainable AI for the diagnosis of neurodegenerative diseases: Unveiling methods, opportunities, and challenges","authors":"Alden Jenish S , Karthik R , Suganthi K","doi":"10.1016/j.cosrev.2025.100821","DOIUrl":null,"url":null,"abstract":"<div><div>Artificial Intelligence (AI) has exhibited significant potential in diagnosis and operational efficiency across medical domains. Nevertheless, the opacity of the AI-driven diagnostic models creates a major roadblock to clinical deployment. Explainable Artificial Intelligence (XAI) techniques have emerged to improve physician trust and transparency in AI-based predictions by addressing interpretability and explainability. This review aims to explore and analyze recent advancements in XAI techniques applied to the diagnosis of Neurodegenerative Diseases (NDs). Based on their approaches toward interpretability, the included studies were categorized into model-agnostic and model-specific techniques. These interpretability techniques provide deeper insights into the factors influencing clinical diagnoses. The review examines various interpretative methods that enhance the transparency of AI-driven models, ensuring alignment with clinical decision-making. This summary reflects all major findings and critical analysis of the responses to the research questions posed. The next stage of analysis describes how XAI enhances model reliability and eases the clinical decision-making process. This review presents a cross-disease comparative evaluation of XAI techniques applied to major NDs such as Alzheimer’s Disease (AD), Parkinson’s Disease (PD), and Multiple Sclerosis (MS), offering a unified perspective on interpretability across modalities and disorders. This study explores existing approaches, highlights their strengths and limitations, and discusses future research directions in this domain.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100821"},"PeriodicalIF":12.7000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Science Review","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1574013725000978","RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Artificial Intelligence (AI) has shown significant potential for improving diagnosis and operational efficiency across medical domains. Nevertheless, the opacity of AI-driven diagnostic models remains a major roadblock to clinical deployment. Explainable Artificial Intelligence (XAI) techniques have emerged to improve physician trust in, and the transparency of, AI-based predictions by addressing interpretability and explainability. This review explores and analyzes recent advancements in XAI techniques applied to the diagnosis of Neurodegenerative Diseases (NDs). Based on their approaches to interpretability, the included studies were categorized into model-agnostic and model-specific techniques. These interpretability techniques provide deeper insight into the factors influencing clinical diagnoses. The review examines interpretative methods that enhance the transparency of AI-driven models, ensuring alignment with clinical decision-making. The summary consolidates the major findings and critically analyzes the responses to the research questions posed. A subsequent stage of analysis describes how XAI enhances model reliability and eases clinical decision-making. The review presents a cross-disease comparative evaluation of XAI techniques applied to major NDs such as Alzheimer’s Disease (AD), Parkinson’s Disease (PD), and Multiple Sclerosis (MS), offering a unified perspective on interpretability across modalities and disorders. Finally, it highlights the strengths and limitations of existing approaches and discusses future research directions in this domain.
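To make the model-agnostic category concrete: a model-agnostic XAI method treats the classifier as a black box and probes it only through its predictions. The review does not prescribe a specific implementation, but one of the simplest examples of this family is permutation feature importance. The sketch below is purely illustrative (toy data, a stand-in "black-box" model, two made-up features); it is not taken from the reviewed paper.

```python
import random

random.seed(0)

# Toy dataset: feature 0 drives the label; feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

# Stand-in "black box": any callable mapping a feature row to a class.
# A model-agnostic explainer never looks inside this function.
def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Importance = accuracy drop after shuffling one feature column."""
    baseline = accuracy(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return baseline - accuracy(X_perm, y)

scores = [permutation_importance(X, y, f) for f in range(2)]
# Shuffling the informative feature hurts accuracy; shuffling noise does not.
print(scores)
```

Because the procedure only queries `model(row)`, it applies unchanged to any classifier, which is exactly what distinguishes model-agnostic from model-specific techniques (the latter exploit internals such as attention weights or gradients).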
About the journal:
Computer Science Review, a publication dedicated to research surveys and expository overviews of open problems in computer science, targets a broad audience within the field seeking comprehensive insights into the latest developments. The journal welcomes articles from various fields as long as their content impacts the advancement of computer science. In particular, articles that review the application of well-known Computer Science methods to other areas are in scope only if these articles advance the fundamental understanding of those methods.