A Fully Connected Reproducible SE-UResNet for Multiorgan Chest Radiographs Segmentation

Debojyoti Pal, Tanushree Meena, S. Roy
{"title":"用于多器官胸片分割的全连接可复制SE-UResNet","authors":"Debojyoti Pal, Tanushree Meena, S. Roy","doi":"10.1109/IRI58017.2023.00052","DOIUrl":null,"url":null,"abstract":"Deep learning (DL) models are a popular choice for resolving intricate issues in medical imaging, such as the classification of diseases, detection of anomalies, and segmentation of tissues in real-world scenarios. To be useful in these contexts, the models must be able to provide accurate results for new, previously untrained data. Existing methods not only fail to consider the intrinsic features of small target lesions but are also not evaluated on separate datasets. To solve these problems we propose a novel architecture, SE-UResNet, capable of segmenting multiple organs having different size and shapes from Chest X-Ray (CXR) images. The proposed architecture introduces a residual module in between the encoding and decoding modules of an attention U-Net architecture for better feature representation of high-level features. The architecture also replaces the attention gates in the decoder module of attention U-Net with Squeeze and Excite (S&E) modules. SE-UResNet is experimented on benchmark CXR datasets such as NIH CXR for lungs, heart, trachea and collarbone segmentation as well as VinDr-RibCXR for ribs segmentation tasks with respect to other state-of-the-art segmentation models. The proposed model achieves an average DSC of 95.9%, 76.8%, 78.7%, 78.8%, and 86.0% for lungs, trachea, heart, collarbone and ribs segmentation for the aforementioned datasets. Furthermore, the proposed model has only been tested on two benchmark CXR datasets: Shenzen and JSRT to establish the reproducibility and robustness of the model. The performance of SE-UResNet on several benchmark CXR datasets demonstrates the model’s ability to generalize, making it a reliable baseline for medical image segmentation. Furthermore, it can also be used for assessing the reproducibility of DL models based on their performance on different datasets.","PeriodicalId":290818,"journal":{"name":"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Fully Connected Reproducible SE-UResNet for Multiorgan Chest Radiographs Segmentation\",\"authors\":\"Debojyoti Pal, Tanushree Meena, S. Roy\",\"doi\":\"10.1109/IRI58017.2023.00052\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning (DL) models are a popular choice for resolving intricate issues in medical imaging, such as the classification of diseases, detection of anomalies, and segmentation of tissues in real-world scenarios. To be useful in these contexts, the models must be able to provide accurate results for new, previously untrained data. Existing methods not only fail to consider the intrinsic features of small target lesions but are also not evaluated on separate datasets. To solve these problems we propose a novel architecture, SE-UResNet, capable of segmenting multiple organs having different size and shapes from Chest X-Ray (CXR) images. The proposed architecture introduces a residual module in between the encoding and decoding modules of an attention U-Net architecture for better feature representation of high-level features. 
The architecture also replaces the attention gates in the decoder module of attention U-Net with Squeeze and Excite (S&E) modules. SE-UResNet is experimented on benchmark CXR datasets such as NIH CXR for lungs, heart, trachea and collarbone segmentation as well as VinDr-RibCXR for ribs segmentation tasks with respect to other state-of-the-art segmentation models. The proposed model achieves an average DSC of 95.9%, 76.8%, 78.7%, 78.8%, and 86.0% for lungs, trachea, heart, collarbone and ribs segmentation for the aforementioned datasets. Furthermore, the proposed model has only been tested on two benchmark CXR datasets: Shenzen and JSRT to establish the reproducibility and robustness of the model. The performance of SE-UResNet on several benchmark CXR datasets demonstrates the model’s ability to generalize, making it a reliable baseline for medical image segmentation. Furthermore, it can also be used for assessing the reproducibility of DL models based on their performance on different datasets.\",\"PeriodicalId\":290818,\"journal\":{\"name\":\"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IRI58017.2023.00052\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IRI58017.2023.00052","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning (DL) models are a popular choice for resolving intricate issues in medical imaging, such as the classification of diseases, detection of anomalies, and segmentation of tissues in real-world scenarios. To be useful in these contexts, the models must be able to provide accurate results on new, previously unseen data. Existing methods not only fail to consider the intrinsic features of small target lesions but are also not evaluated on separate datasets. To solve these problems, we propose a novel architecture, SE-UResNet, capable of segmenting multiple organs of different sizes and shapes from chest X-ray (CXR) images. The proposed architecture introduces a residual module between the encoding and decoding modules of an attention U-Net for better representation of high-level features. It also replaces the attention gates in the decoder module of attention U-Net with Squeeze and Excite (S&E) modules. SE-UResNet is evaluated against other state-of-the-art segmentation models on benchmark CXR datasets: NIH CXR for lung, heart, trachea, and collarbone segmentation, and VinDr-RibCXR for rib segmentation. The proposed model achieves average DSC values of 95.9%, 76.8%, 78.7%, 78.8%, and 86.0% for lung, trachea, heart, collarbone, and rib segmentation, respectively, on these datasets. The model is additionally tested on two benchmark CXR datasets, Shenzhen and JSRT, to establish its reproducibility and robustness. The performance of SE-UResNet on several benchmark CXR datasets demonstrates the model's ability to generalize, making it a reliable baseline for medical image segmentation. It can also be used to assess the reproducibility of DL models based on their performance across different datasets.
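
No source code accompanies this abstract, but the two building blocks it names follow well-known designs: a residual convolutional block placed between the encoder and decoder, and a Squeeze-and-Excite channel-attention block used in place of the decoder's attention gates. The PyTorch sketch below illustrates these standard blocks under that assumption; the class names, channel counts, and reduction ratio are illustrative choices, not taken from the paper.

```python
# Minimal PyTorch sketches of the building blocks named in the abstract.
# These follow the standard Squeeze-and-Excitation and residual-block
# designs; they are NOT the authors' code, and all hyperparameters
# (reduction ratio, channel counts) are illustrative assumptions.
import torch
import torch.nn as nn


class SqueezeExcite(nn.Module):
    """Channel recalibration: global average pool ("squeeze") followed by a
    two-layer bottleneck MLP ("excite") that rescales each channel."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # per-channel rescaling of the feature map


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity (or 1x1-projected) skip path,
    as commonly inserted between a U-Net encoder and decoder."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + self.skip(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)           # dummy CXR feature map
    print(SqueezeExcite(64)(x).shape)         # torch.Size([1, 64, 32, 32])
    print(ResidualBlock(64, 128)(x).shape)    # torch.Size([1, 128, 32, 32])
```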
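
The reported DSC values are Dice similarity coefficients, DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal NumPy sketch of this standard metric follows; it is the textbook formulation, not code from the paper, and the smoothing term `eps` is an illustrative choice to keep empty masks well defined.

```python
# Standard Dice Similarity Coefficient (DSC) on binary masks:
# DSC = 2|A ∩ B| / (|A| + |B|). Textbook formulation, not code from
# the paper; `eps` is an illustrative smoothing term.
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Return the Dice score between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))


if __name__ == "__main__":
    a = np.zeros((4, 4), dtype=np.uint8)
    b = np.zeros((4, 4), dtype=np.uint8)
    a[1:3, 1:3] = 1   # 4 foreground pixels
    b[1:3, 1:4] = 1   # 6 foreground pixels, 4 overlapping
    print(round(dice_coefficient(a, b), 3))  # 2*4 / (4 + 6) = 0.8
```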