Unveiling Explainable AI in Healthcare: Current Trends, Challenges, and Future Directions
Abdul Aziz Noor, Awais Manzoor, Muhammad Deedahwar Mazhar Qureshi, M. Atif Qureshi, Wael Rashwan
WIREs Data Mining and Knowledge Discovery, published 12 May 2025. DOI: 10.1002/widm.70018 (https://doi.org/10.1002/widm.70018)
This overview investigates the evolution and current landscape of eXplainable Artificial Intelligence (XAI) in healthcare, highlighting its implications for researchers, technology developers, and policymakers. Following the PRISMA protocol, we analyzed 89 publications from January 2000 to June 2024 spanning 19 medical domains, with Neurology and Cancer emerging as the most studied areas. Various data types are reviewed, including tabular data, medical imaging, and clinical text, offering a comprehensive perspective on XAI applications. Key findings reveal significant gaps, such as the limited availability of public datasets, suboptimal data preprocessing techniques, insufficient feature selection and engineering, and the limited use of multiple XAI methods. We also emphasize the lack of standardized XAI evaluation metrics and the practical obstacles to integrating XAI systems into clinical workflows. We provide actionable recommendations, including the design of explainability-centric models, the application of multiple, diverse XAI methods, and the fostering of interdisciplinary collaboration. These strategies aim to guide researchers in building robust AI models, assist technology developers in creating intuitive and user-friendly AI tools, and inform policymakers in establishing effective regulations. Addressing these gaps will promote the development of transparent, reliable, and user-centred AI systems in healthcare, ultimately improving decision-making and patient outcomes.
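To make the recommendation of applying multiple XAI methods to tabular clinical data concrete, the following is a minimal, hypothetical sketch, not taken from the reviewed paper. It assumes Python with scikit-learn and the `shap` package available, and uses scikit-learn's bundled breast-cancer dataset purely for illustration; the dataset, model, and method choices are assumptions, not the survey's prescriptions.

```python
# Illustrative sketch only: contrasting two XAI methods on a tabular clinical dataset,
# echoing the review's recommendation to use multiple, complementary explanation methods.
import shap  # assumption: the shap package is installed
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Tabular clinical data: the Wisconsin breast-cancer dataset bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Method 1: global, model-agnostic permutation importance on held-out data.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_global = X.columns[perm.importances_mean.argsort()[::-1][:5]]
print("Permutation importance, top features:", list(top_global))

# Method 2: local, per-patient SHAP attributions for the same model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("Largest SHAP attributions for the first test patient:")
for name, value in sorted(zip(X.columns, shap_values[0]),
                          key=lambda kv: abs(kv[1]), reverse=True)[:5]:
    print(f"  {name}: {value:+.3f}")
```

Pairing a global method (permutation importance) with a local one (SHAP) is one way to read the call for diverse XAI methods; the survey itself does not prescribe specific libraries or algorithms.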