Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review

Impact Factor: 17.2 · JCR Q1 (Engineering, Biomedical) · CAS Zone 1 (Engineering & Technology)
Felipe Giuste, Wenqi Shi, Yuanda Zhu, Tarun Naren, Monica Isgut, Ying Sha, Li Tong, Mitali Gupte, May D. Wang
DOI: 10.1109/RBME.2022.3185953
Journal: IEEE Reviews in Biomedical Engineering, vol. 16, pp. 5–21
Published: 2022-06-23 (Journal Article)
Open-access PDF: https://ieeexplore.ieee.org/iel7/4664312/10007429/09804787.pdf
IEEE Xplore: https://ieeexplore.ieee.org/document/9804787/
Citations: 30

Abstract

Despite the myriad peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact, especially in diagnosis and precision disease staging. One major cause of this low impact is the lack of model transparency, which significantly limits AI adoption in real clinical practice. To solve this problem, AI models need to be explained to users. Thus, we have conducted a comprehensive study of Explainable Artificial Intelligence (XAI) following the PRISMA methodology. Our findings suggest that XAI can improve model performance, build user trust, and assist users in decision-making. In this systematic review, we introduce common XAI techniques and their utility, with specific examples of their application. We discuss the evaluation of XAI results, an important step for maximizing the value of AI-based clinical decision support systems. Additionally, we present traditional, modern, and advanced XAI models to demonstrate the evolution of these techniques. Finally, we provide a best-practice guideline that developers can refer to during model experimentation, along with potential solutions, illustrated with specific examples, for common challenges in AI model experimentation. This comprehensive review, hopefully, can promote AI adoption in biomedicine and healthcare.
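To make the "common XAI techniques" mentioned above concrete, below is a minimal sketch of one widely used post-hoc attribution method, occlusion-based feature attribution: each input feature is replaced by a baseline value and the change in the model's output is taken as that feature's contribution. The toy severity-score model and feature names (`age`, `crp`, `spo2`) are hypothetical illustrations, not from the review itself.

```python
def predict_risk(features):
    """Toy 'black-box' severity score: a weighted sum over clinical features.
    Stands in for any opaque model whose output we want to explain."""
    w = {"age": 0.03, "crp": 0.05, "spo2": -0.04}
    return sum(w[k] * v for k, v in features.items())

def occlusion_attributions(model, features, baseline=0.0):
    """Attribute the prediction to each feature by occluding it
    (replacing it with a baseline value) and measuring how much
    the model's output changes."""
    full_output = model(features)
    attributions = {}
    for name in features:
        occluded = dict(features)
        occluded[name] = baseline
        attributions[name] = full_output - model(occluded)
    return attributions

patient = {"age": 70, "crp": 40.0, "spo2": 88}
attr = occlusion_attributions(predict_risk, patient)
# Features with larger |attribution| contributed more to the risk score;
# the sign shows whether the feature pushed the score up or down.
```

For a linear model like this toy example, each attribution equals the feature's weighted contribution exactly; for real nonlinear models (e.g., CNNs on chest CT), the same occlusion idea is applied to image patches to produce saliency-style heatmaps.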
Source Journal

IEEE Reviews in Biomedical Engineering (Engineering – Biomedical Engineering)
CiteScore: 31.70
Self-citation rate: 0.60%
Articles published per year: 93
Journal scope: IEEE Reviews in Biomedical Engineering (RBME) serves as a platform to review the state of the art and trends in the interdisciplinary field of biomedical engineering, which encompasses engineering, the life sciences, and medicine. The journal aims to consolidate research and reviews for members of all IEEE societies interested in biomedical engineering. Recognizing the demand for comprehensive reviews among authors of various IEEE journals, RBME addresses this need by receiving, reviewing, and publishing scholarly works under one umbrella. It covers a broad spectrum, from historical to modern developments in biomedical engineering and the integration of technologies from various IEEE societies into the life sciences and medicine.