Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists

José P. Amorim; Pedro H. Abreu; Alberto Fernández; Mauricio Reyes; João Santos; Miguel H. Abreu

IEEE Reviews in Biomedical Engineering, vol. 16, pp. 192-207. DOI: 10.1109/RBME.2021.3131358. Published 2021-11-30 (Journal Article). JCR Q1, Engineering, Biomedical; impact factor 17.2. Citations: 8.

Abstract: Healthcare agents, particularly in the oncology field, currently collect vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical use. Despite all the efforts to introduce artificial intelligence methods into clinicians' workflows, their lack of interpretability (understanding how the methods make decisions) still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists that explains how these methods make decisions and illustrates the strategies used to explain them. Theoretical concepts were illustrated with oncological examples, and a literature review of research works published between January 2014 and September 2020 was performed on PubMed, using "deep learning techniques," "interpretability" and "oncology" as keywords. Overall, more than 60% of the works relate to breast, skin or brain cancers, and the majority focus on explaining the importance of tumor characteristics (e.g., dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability while maintaining their performance remains one of the greatest challenges of artificial intelligence.
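To make the explanation strategies surveyed in the abstract concrete, the sketch below computes a gradient-based saliency score, one common way to measure how much each input feature (e.g., tumor dimension or shape) influences a model's prediction. The logistic "model", its weights, and the feature names are invented for illustration only; the reviewed works apply analogous gradient-based attributions to trained multilayer perceptrons and convolutional neural networks.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Toy logistic "model": p(malignant | x) = sigmoid(w . x + b).
# Weights and feature names are illustrative, not from the article.
w = np.array([1.8, 0.9, -0.4])  # weights for [dimension, shape_irregularity, age]
b = -1.0


def predict(x):
    return sigmoid(w @ x + b)


def saliency(x, eps=1e-5):
    """Central-finite-difference gradient of the prediction w.r.t. each feature.

    A larger absolute value means the feature influences the prediction more,
    which is the core idea behind gradient-based saliency explanations.
    """
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (predict(xp) - predict(xm)) / (2 * eps)
    return grad


x = np.array([2.0, 1.5, 0.3])  # one hypothetical patient
s = saliency(x)
print(dict(zip(["dimension", "shape_irregularity", "age"], s)))
```

For this toy model the finite-difference saliency matches the analytic gradient p(1-p)·w, so the largest-weight feature ("dimension") dominates the explanation; deep-learning frameworks compute the same quantity by automatic differentiation rather than finite differences.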
Journal introduction:
IEEE Reviews in Biomedical Engineering (RBME) serves as a platform to review the state of the art and trends in the interdisciplinary field of biomedical engineering, which spans engineering, the life sciences, and medicine. The journal aims to consolidate research reviews for members of all IEEE societies interested in biomedical engineering. Recognizing the demand among authors of various IEEE journals for comprehensive reviews, RBME receives, reviews, and publishes such scholarly works under one umbrella. Its coverage ranges from historical to modern developments in biomedical engineering and the integration of technologies from various IEEE societies into the life sciences and medicine.