Upernet-Based Deep Learning Method For The Segmentation Of Gastrointestinal Tract Images

Yang Qiu
DOI: 10.1145/3599589.3599595
Published in: Proceedings of the 2023 8th International Conference on Multimedia and Image Processing, 2023-04-21
Citations: 0

Abstract

When giving radiation therapy to patients with gastrointestinal cancers, radiation oncologists must manually outline the locations of the stomach and intestines in order to adjust the direction of the X-ray beams. This process increases the dose delivered to the tumor while sparing the stomach and intestines, but it is time-consuming and labor-intensive. The development of automated segmentation methods for gastrointestinal tract images would therefore enable faster and more effective treatment for patients. To that end, we propose a UPerNet-based deep learning approach to segment the stomach, small bowel, and large bowel in gastrointestinal tract images. The dataset in this work is from the UW-Madison GI Tract Image Segmentation Kaggle competition, and the input images are obtained by applying a 2.5D preprocessing method to it. We choose EfficientNet-B4 and Swin Transformer (base) separately as backbones of the UPerNet architecture, and then average the predictions of the two models in an ensemble to boost performance. After applying K-fold cross-validation, our method achieves a competition score of 0.86827 on the private test set, placing our team 135th among 1548 teams and earning a bronze medal in the competition. This work could accelerate the development of auxiliary systems for the segmentation of gastrointestinal tract images and could contribute to research on generalized segmentation methods for medical images.
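The abstract's two model-agnostic ingredients can be sketched briefly. A common form of 2.5D preprocessing stacks each scan slice with its neighbors as extra channels, giving a 2D network some through-plane context, and an average ensemble simply averages the per-pixel probabilities of the two models before thresholding. The paper does not give its exact slice offsets or threshold, so the `stride=2` spacing, the `0.5` threshold, and the function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def make_25d_input(slices, idx, stride=2):
    """One plausible 2.5D preprocessing step (stride is an assumption):
    stack the slice at idx with its neighbors at +/- stride slices as a
    3-channel image, clamping at the volume boundaries."""
    n = len(slices)
    lo = max(idx - stride, 0)
    hi = min(idx + stride, n - 1)
    return np.stack([slices[lo], slices[idx], slices[hi]], axis=-1)

def average_ensemble(prob_a, prob_b, threshold=0.5):
    """Average the per-pixel foreground probabilities of two models
    and threshold them into a binary segmentation mask."""
    return ((prob_a + prob_b) / 2.0 > threshold).astype(np.uint8)

# Usage sketch on a dummy 10-slice volume of 266x266 scans:
volume = [np.random.rand(266, 266).astype(np.float32) for _ in range(10)]
x = make_25d_input(volume, idx=5)          # shape (266, 266, 3)
p_effnet = np.random.rand(266, 266)        # stand-in for EfficientNet-B4 UPerNet output
p_swin = np.random.rand(266, 266)          # stand-in for Swin-base UPerNet output
mask = average_ensemble(p_effnet, p_swin)  # binary mask, dtype uint8
```

Averaging probabilities (rather than hard masks) lets the CNN and Transformer backbones, which tend to make different kinds of errors, soften each other's overconfident mistakes before the final threshold is applied.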