Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists

IF 17.2 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, BIOMEDICAL
José P. Amorim;Pedro H. Abreu;Alberto Fernández;Mauricio Reyes;João Santos;Miguel H. Abreu
{"title":"Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists","authors":"José P. Amorim;Pedro H. Abreu;Alberto Fernández;Mauricio Reyes;João Santos;Miguel H. Abreu","doi":"10.1109/RBME.2021.3131358","DOIUrl":null,"url":null,"abstract":"Healthcare agents, in particular in the oncology field, are currently collecting vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical purposes. Despite all the efforts in introducing artificial intelligence methods in the workflow of clinicians, its lack of interpretability - understand how the methods make decisions - still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies to explain them. Theoretical concepts were illustrated based on oncological examples and a literature review of research works was performed from PubMed between January 2014 to September 2020, using “deep learning techniques,” “interpretability” and “oncology” as keywords. Overall, more than 60% are related to breast, skin or brain cancers and the majority focused on explaining the importance of tumor characteristics (e.g. dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancers scenarios, endowing deep learning techniques with interpretability, while maintaining their performance, continues to be one of the greatest challenges of artificial intelligence.","PeriodicalId":39235,"journal":{"name":"IEEE Reviews in Biomedical Engineering","volume":"16 ","pages":"192-207"},"PeriodicalIF":17.2000,"publicationDate":"2021-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Reviews in Biomedical Engineering","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/9629296/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
引用次数: 8

Abstract

Healthcare agents, in particular in the oncology field, are currently collecting vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical purposes. Despite all the efforts to introduce artificial intelligence methods into the workflow of clinicians, their lack of interpretability (understanding how the methods make decisions) still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies to explain them. Theoretical concepts were illustrated based on oncological examples, and a literature review of research works was performed on PubMed from January 2014 to September 2020, using “deep learning techniques,” “interpretability” and “oncology” as keywords. Overall, more than 60% of the reviewed works relate to breast, skin or brain cancers, and the majority focus on explaining the importance of tumor characteristics (e.g., dimension, shape) in the predictions. The most commonly used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability, while maintaining their performance, continues to be one of the greatest challenges of artificial intelligence.
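Among the explanation strategies the abstract alludes to, attribution methods that highlight which input regions drive a prediction are a common choice for CNN-based imaging models. As a rough illustration only (not a method taken from the article), below is a minimal sketch of vanilla gradient saliency in PyTorch; the tiny CNN and the random input image are hypothetical stand-ins for a trained oncology model and a real scan.

```python
import torch
import torch.nn as nn

# Small stand-in CNN; a real system would load a trained oncology model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g., benign vs. malignant logits (illustrative)
)
model.eval()

# Hypothetical grayscale scan patch (random noise here, for illustration).
image = torch.randn(1, 1, 64, 64, requires_grad=True)

logits = model(image)
predicted = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the input pixels; the
# gradient magnitude marks pixels that most influence the prediction.
logits[0, predicted].backward()
saliency = image.grad.abs().squeeze()

print(saliency.shape)  # torch.Size([64, 64]): heatmap aligned with the input
```

In practice such a saliency map is overlaid on the scan, so a clinician can check whether the model attends to the tumor region rather than to imaging artifacts, which is the kind of tumor-characteristic explanation most of the reviewed works focus on.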
Source Journal
IEEE Reviews in Biomedical Engineering (Engineering - Biomedical Engineering)
CiteScore: 31.70
Self-citation rate: 0.60%
Annual publications: 93
Journal introduction: IEEE Reviews in Biomedical Engineering (RBME) serves as a platform to review the state-of-the-art and trends in the interdisciplinary field of biomedical engineering, which encompasses engineering, life sciences, and medicine. The journal aims to consolidate research and reviews for members of all IEEE societies interested in biomedical engineering. Recognizing the demand for comprehensive reviews among authors of various IEEE journals, RBME addresses this need by receiving, reviewing, and publishing scholarly works under one umbrella. It covers a broad spectrum, from historical to modern developments in biomedical engineering and the integration of technologies from various IEEE societies into the life sciences and medicine.