Counterfactual Explainable Gastrointestinal and Colonoscopy Image Segmentation

Divij G. Singh, Ayush Somani, A. Horsch, Dilip K. Prasad
{"title":"Counterfactual Explainable Gastrointestinal and Colonoscopy Image Segmentation","authors":"Divij G. Singh, Ayush Somani, A. Horsch, Dilip K. Prasad","doi":"10.1109/ISBI52829.2022.9761664","DOIUrl":null,"url":null,"abstract":"Segmenting medical images accurately and reliably is crucial for disease diagnosis and treatment. Due to the wide assortment of objects’ sizes, shapes, and scanning modalities, it has become more challenging. Many convolutional neural networks (CNN) have recently been designed for segmentation tasks and achieved great success. This paper presents an optimized deep learning solution using DeepLabv3+ with ResNet-101 as its backbone. The proposed approach allows capturing variabilities of diverse objects. It provides improved and reliable quantitative and qualitative results in comparison to other state-of-the-art (SOTA) methods on two publicly available gastrointestinal and colonoscopy datasets. Few studies show the inadequacy of stable performance in varying object segmentation tasks, notwithstanding the sizes of objects. Our method has stable performance in the segmentation of large and small medical objects. The explainability of our robust model with benchmarking on SOTA approaches for both datasets will be fruitful for further research on biomedical image segmentation.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"210 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBI52829.2022.9761664","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Accurate and reliable segmentation of medical images is crucial for disease diagnosis and treatment. The task is made challenging by the wide variation in object sizes, shapes, and scanning modalities. Many convolutional neural networks (CNNs) have recently been designed for segmentation tasks and achieved great success. This paper presents an optimized deep learning solution using DeepLabv3+ with a ResNet-101 backbone. The proposed approach captures the variability of diverse objects and yields improved, reliable quantitative and qualitative results compared to other state-of-the-art (SOTA) methods on two publicly available gastrointestinal and colonoscopy datasets. Few existing methods maintain stable performance across segmentation tasks with varying object sizes; our method segments both large and small medical objects consistently. The explainability of our robust model, benchmarked against SOTA approaches on both datasets, should support further research on biomedical image segmentation.
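The abstract describes the architecture only at a high level, so the following is a minimal sketch, not the authors' implementation, of how a DeepLabv3+ model with a ResNet-101 backbone could be set up for binary gastrointestinal/polyp segmentation in PyTorch. It uses the third-party segmentation_models_pytorch library; the Dice loss, learning rate, ImageNet pretraining, and 256x256 input size are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's released code):
# DeepLabv3+ with a ResNet-101 encoder for binary segmentation.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(
    encoder_name="resnet101",    # ResNet-101 backbone, as named in the abstract
    encoder_weights="imagenet",  # ImageNet pretraining (assumption)
    in_channels=3,               # RGB endoscopy frames
    classes=1,                   # binary mask: lesion/polyp vs. background
)

# Dice loss is a common choice for medical segmentation; the paper's
# actual loss is not stated in the abstract, so this is an assumption.
loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch.
model.train()
images = torch.randn(4, 3, 256, 256)                    # placeholder images
masks = torch.randint(0, 2, (4, 1, 256, 256)).float()   # placeholder masks

optimizer.zero_grad()
logits = model(images)          # shape: (4, 1, 256, 256)
loss = loss_fn(logits, masks)   # DiceLoss applies sigmoid to logits internally
loss.backward()
optimizer.step()
```

In practice the dummy tensors would be replaced by a DataLoader over the two public datasets the paper evaluates on, with masks binarized to {0, 1}.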