Are Post-Hoc Explanation Methods for Prostate Lesion Detection Effective for Radiology End Use?
Mehmet Akif Gulum, Christopher M. Trombley, M. Ozen, M. Kantardzic
2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), December 2022.
DOI: 10.1109/ICMLA55696.2022.00191
Abstract
Deep learning has demonstrated impressive performance on medical tasks such as cancer classification and lesion detection. Despite this performance, it is a black-box approach and therefore difficult to interpret. Interpretability is especially important in high-risk fields such as medicine. Various methods have recently been proposed to interpret deep learning algorithms, but few studies have evaluated these explanation methods in clinical settings such as radiology. To that end, we conduct a pilot study that evaluates the effectiveness of explanation methods for radiology end use. We assess whether explanation methods improve diagnostic performance and which methods radiologists prefer. We also glean insight into which characteristics radiologists consider explainable. We found that explanation methods increase diagnostic performance, although the effect depends on the individual method. We also find that the radiology cohort considers insight, visualization, and accuracy to be the most sought-after explainable characteristics. The insights garnered in this study have the potential to guide future development and evaluation of explanation methods for clinical use.