Explainable deep learning in healthcare: A methodological survey from an attribution view.

IF 4.6 | Zone 3 (Medicine) | Q2 MEDICINE, RESEARCH & EXPERIMENTAL
WIREs Mechanisms of Disease | Pub Date: 2022-05-01 | Epub Date: 2022-01-17 | DOI: 10.1002/wsbm.1548
Di Jin, Elena Sergeeva, Wei-Hung Weng, Geeticka Chauhan, Peter Szolovits
{"title":"Explainable deep learning in healthcare: A methodological survey from an attribution view.","authors":"Di Jin,&nbsp;Elena Sergeeva,&nbsp;Wei-Hung Weng,&nbsp;Geeticka Chauhan,&nbsp;Peter Szolovits","doi":"10.1002/wsbm.1548","DOIUrl":null,"url":null,"abstract":"<p><p>The increasing availability of large collections of electronic health record (EHR) data and unprecedented technical advances in deep learning (DL) have sparked a surge of research interest in developing DL based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognition of the value of deep learning in healthcare, impediments to further adoption in real healthcare settings remain due to the black-box nature of DL. Therefore, there is an emerging need for interpretable DL, which allows end users to evaluate the model decision making to know whether to accept or reject predictions and recommendations before an action is taken. In this review, we focus on the interpretability of the DL models in healthcare. We start by introducing the methods for interpretability in depth and comprehensively as a methodological reference for future researchers or clinical practitioners in this field. Besides the methods' details, we also include a discussion of advantages and disadvantages of these methods and which scenarios each of them is suitable for, so that interested readers can know how to compare and choose among them for use. Moreover, we discuss how these methods, originally developed for solving general-domain problems, have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies. Overall, we hope this survey can help researchers and practitioners in both artificial intelligence and clinical fields understand what methods we have for enhancing the interpretability of their DL models and choose the optimal one accordingly. This article is categorized under: Cancer > Computational Models.</p>","PeriodicalId":29896,"journal":{"name":"WIREs Mechanisms of Disease","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"WIREs Mechanisms of Disease","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/wsbm.1548","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/1/17 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
Citations: 17

Abstract

The increasing availability of large collections of electronic health record (EHR) data and unprecedented technical advances in deep learning (DL) have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognized value of deep learning in healthcare, its black-box nature remains an impediment to further adoption in real healthcare settings. Therefore, there is an emerging need for interpretable DL, which allows end users to evaluate the model's decision making and decide whether to accept or reject its predictions and recommendations before acting on them. In this review, we focus on the interpretability of DL models in healthcare. We start by introducing the interpretability methods in depth and comprehensively, as a methodological reference for future researchers and clinical practitioners in this field. Beyond the methods' details, we also discuss their advantages and disadvantages and the scenarios each is suited for, so that interested readers know how to compare and choose among them. Moreover, we discuss how these methods, originally developed for general-domain problems, have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies. Overall, we hope this survey helps researchers and practitioners in both artificial intelligence and clinical fields understand which methods are available for enhancing the interpretability of their DL models and choose the optimal one accordingly. This article is categorized under: Cancer > Computational Models.
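
As context for the "attribution view" in the title: attribution methods assign each input feature a score reflecting its contribution to a particular model prediction. The sketch below is a minimal, illustrative example of one such general-purpose technique (plain input-gradient saliency) applied to a toy classifier; the model architecture, feature count, and feature names are hypothetical assumptions for illustration and are not taken from the paper.

# Minimal sketch of input-gradient attribution (saliency); illustrative only.
# The toy model and synthetic feature vector are hypothetical placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # toy classifier
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # one synthetic "patient" feature vector
logits = model(x)
target = logits.argmax(dim=1).item()       # class whose prediction we want to explain

# Attribution score = |d(target logit) / d(input feature)|;
# a larger magnitude suggests the feature influenced this prediction more.
logits[0, target].backward()
saliency = x.grad.abs().squeeze(0)

for i, score in enumerate(saliency.tolist()):
    print(f"feature_{i}: {score:.4f}")

More refined attribution methods covered by surveys of this kind (e.g., Integrated Gradients, SHAP, LIME) expose the same kind of output, a per-feature relevance score for a given prediction, which is what lets clinicians inspect why a model made a particular recommendation.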

Source journal: WIREs Mechanisms of Disease (MEDICINE, RESEARCH & EXPERIMENTAL)
CiteScore: 11.40 | Self-citation rate: 0.00% | Annual article count: 45