A Localisation Study of Deep Learning Models for Chest X-ray Image Classification

James Gascoigne-Burns, Stamos Katsigiannis
{"title":"胸部x线图像分类深度学习模型的局部化研究","authors":"James Gascoigne-Burns, Stamos Katsigiannis","doi":"10.1109/BHI56158.2022.9926904","DOIUrl":null,"url":null,"abstract":"Deep learning models have demonstrated superhuman performance in a multitude of image classification tasks, including the classification of chest X-ray images. Despite this, medical professionals are reluctant to embrace these models in clinical settings due to a lack of interpretability, citing being able to visualise the image areas contributing most to a model's predictions as one of the best ways to establish trust. To aid the discussion of their suitability for real-world use, in this work, we attempt to address this issue by conducting a localisation study of two state-of-the-art deep learning models for chest X-ray image classification, ResNet-38-large-meta and CheXNet, on a set of 984 radiologist annotated X-ray images from the publicly available ChestX-ray14 dataset. We do this by applying and comparing several state-of-the-art visualisation methods, combined with a novel dynamic thresholding approach for generating bounding boxes, which we show to outperform the static thresholding method used by similar localisation studies in the literature. Results also seem to indicate that localisation quality is more sensitive to the choice of thresholding scheme than the visualisation method used, and that a high discriminative ability as measured by classification performance is not necessarily sufficient for models to produce useful and accurate localisations.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Localisation Study of Deep Learning Models for Chest X-ray Image Classification\",\"authors\":\"James Gascoigne-Burns, Stamos Katsigiannis\",\"doi\":\"10.1109/BHI56158.2022.9926904\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning models have demonstrated superhuman performance in a multitude of image classification tasks, including the classification of chest X-ray images. Despite this, medical professionals are reluctant to embrace these models in clinical settings due to a lack of interpretability, citing being able to visualise the image areas contributing most to a model's predictions as one of the best ways to establish trust. To aid the discussion of their suitability for real-world use, in this work, we attempt to address this issue by conducting a localisation study of two state-of-the-art deep learning models for chest X-ray image classification, ResNet-38-large-meta and CheXNet, on a set of 984 radiologist annotated X-ray images from the publicly available ChestX-ray14 dataset. We do this by applying and comparing several state-of-the-art visualisation methods, combined with a novel dynamic thresholding approach for generating bounding boxes, which we show to outperform the static thresholding method used by similar localisation studies in the literature. 
Results also seem to indicate that localisation quality is more sensitive to the choice of thresholding scheme than the visualisation method used, and that a high discriminative ability as measured by classification performance is not necessarily sufficient for models to produce useful and accurate localisations.\",\"PeriodicalId\":347210,\"journal\":{\"name\":\"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BHI56158.2022.9926904\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BHI56158.2022.9926904","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep learning models have demonstrated superhuman performance in a multitude of image classification tasks, including the classification of chest X-ray images. Despite this, medical professionals are reluctant to embrace these models in clinical settings due to a lack of interpretability, citing being able to visualise the image areas contributing most to a model's predictions as one of the best ways to establish trust. To aid the discussion of their suitability for real-world use, in this work, we attempt to address this issue by conducting a localisation study of two state-of-the-art deep learning models for chest X-ray image classification, ResNet-38-large-meta and CheXNet, on a set of 984 radiologist annotated X-ray images from the publicly available ChestX-ray14 dataset. We do this by applying and comparing several state-of-the-art visualisation methods, combined with a novel dynamic thresholding approach for generating bounding boxes, which we show to outperform the static thresholding method used by similar localisation studies in the literature. Results also seem to indicate that localisation quality is more sensitive to the choice of thresholding scheme than the visualisation method used, and that a high discriminative ability as measured by classification performance is not necessarily sufficient for models to produce useful and accurate localisations.
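The abstract contrasts a static thresholding scheme with a dynamic one for turning visualisation heatmaps (e.g. class activation maps) into bounding boxes. As an illustration only, the sketch below shows one plausible way such thresholding could work: the function name `boxes_from_heatmap`, the per-image quantile rule used for the "dynamic" branch, and all parameter values are assumptions made for demonstration and are not taken from the paper.

```python
import numpy as np
from scipy import ndimage


def boxes_from_heatmap(heatmap, mode="dynamic", static_thresh=0.5, quantile=0.95):
    """Threshold a saliency/CAM heatmap and return bounding boxes.

    'static'  : keep pixels above a fixed fraction of the normalised range.
    'dynamic' : keep pixels above a per-image intensity quantile (an
                illustrative stand-in, not the authors' published scheme).
    """
    # Normalise the heatmap to [0, 1] so thresholds are comparable across images.
    h = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    if mode == "static":
        mask = h >= static_thresh
    else:
        mask = h >= np.quantile(h, quantile)
    # Label connected regions of the binary mask and draw one box per region.
    labelled, _ = ndimage.label(mask)
    boxes = []
    for ys, xs in ndimage.find_objects(labelled):
        boxes.append((xs.start, ys.start, xs.stop - xs.start, ys.stop - ys.start))
    return boxes  # (x, y, width, height) in heatmap pixel coordinates


# Example: a CAM upsampled to the X-ray resolution would normally be passed in.
cam = np.random.rand(224, 224)
print(boxes_from_heatmap(cam, mode="dynamic", quantile=0.95))
```

A per-image quantile adapts the cut-off to each heatmap's intensity distribution, which is one way a "dynamic" scheme could differ from a single fixed threshold applied to every image.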