Application of Explainable Artificial Intelligence (XAI) Techniques in Patients With Intracranial Hemorrhage: A Systematic Review

Ali Kohan, Amir Zahedi, Roohallah Alizadehsani, Ru‐San Tan, U. Rajendra Acharya

WIREs Data Mining and Knowledge Discovery, published 2025-06-28. DOI: 10.1002/widm.70031
Abstract
Intracranial hemorrhage (IH) is a critical condition requiring rapid and accurate diagnosis to ensure effective treatment and reduce mortality rates. Recently, artificial intelligence (AI) models have demonstrated significant potential in automating the detection and analysis of brain injuries in IH patients. However, the “black‐box” nature of many AI systems raises concerns about transparency, reliability, and clinical applicability. Explainable AI (XAI) addresses these challenges by making AI models more interpretable, allowing healthcare professionals to understand and trust the decision‐making processes. This review explores various XAI techniques, including SHapley Additive exPlanations (SHAP), Local Interpretable Model‐Agnostic Explanations (LIME), Randomized Input Sampling for Explanation (RISE), and Class Activation Mapping (CAM) and its variants, together with their specific applications in IH clinical tasks. We systematically examine studies that incorporate XAI in the care of IH patients, highlighting how these methods enhance model transparency and support clinical decision‐making. The Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) methodology was employed to select the papers. Studies are categorized into those using tabular data and those using image data. The literature indicates a rapidly growing number of XAI publications in this field. SHAP is the most commonly used XAI method for tabular data, while CAM‐based methods, such as Grad‐CAM, dominate in image‐based applications. Furthermore, we discuss current limitations of XAI methods and future research directions. This review aims to provide researchers and clinicians with valuable insights into the role of XAI in improving the reliability and practical integration of AI‐driven tools for IH patient care.

This article is categorized under:
Application Areas > Health Care
Fundamental Concepts of Data and Knowledge > Explainable AI
Technologies > Machine Learning
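To make the two technique families highlighted above concrete, the sketches below show how each is typically applied. Both are illustrative only: they use synthetic data and toy models, not the datasets or networks surveyed in this review. The first is a minimal SHAP example for tabular data, assuming the shap and scikit-learn Python packages; the clinical feature names are hypothetical placeholders.

```python
# Illustrative SHAP sketch on synthetic tabular data; the feature names are
# hypothetical and the model is a toy, not one of the reviewed systems.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["age", "glucose", "systolic_bp", "gcs_score"]  # hypothetical features
X = rng.normal(size=(200, 4))
# Synthetic target loosely driven by two of the features.
y = X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance: mean absolute Shapley value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The second is a minimal Grad-CAM sketch written directly in PyTorch: gradients of the predicted-class score are spatially pooled into per-channel weights, and the weighted, ReLU-ed feature maps are upsampled into a saliency map over the input. The TinyCNN model and the random "CT slice" input are placeholders.

```python
# Illustrative Grad-CAM sketch in PyTorch; TinyCNN and the random input are
# placeholders, not the CT models discussed in the review.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, 2)  # toy binary output: hemorrhage vs. none

    def forward(self, x):
        feats = self.features(x)         # (B, 16, H, W) feature maps
        pooled = feats.mean(dim=(2, 3))  # global average pooling
        return self.head(pooled), feats

model = TinyCNN().eval()
x = torch.randn(1, 1, 64, 64)            # stand-in for a single CT slice

logits, feats = model(x)
feats.retain_grad()                      # keep gradients on the feature maps
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()                # gradient of the predicted-class score

# Grad-CAM: per-channel weights are the spatially averaged gradients; the map
# is the ReLU of the weighted sum of feature maps, upsampled to input size.
weights = feats.grad.mean(dim=(2, 3), keepdim=True)       # (1, 16, 1, 1)
cam = F.relu((weights * feats).sum(dim=1, keepdim=True))  # (1, 1, H, W)
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # saliency map aligned with the input slice
```

In practice, Grad-CAM is applied to the last convolutional block of a trained detector so the resulting heatmap can be overlaid on the CT slice, indicating which regions drove the hemorrhage prediction.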