{"title":"Upernet-Based Deep Learning Method For The Segmentation Of Gastrointestinal Tract Images","authors":"Yang Qiu","doi":"10.1145/3599589.3599595","DOIUrl":null,"url":null,"abstract":"When giving radiation therapy to patients with gastrointestinal cancers, radiation oncologists must manually outline the locations of the stomach and intestines in order to adjust the direction of the X-ray beam. This process can increase the dose delivered to the tumor while avoiding the stomach and intestines, but is time-consuming and labor-intensive. Therefore, the development of automated segmentation methods for gastrointestinal tract images will enable faster and more effective treatment for patients. For that purpose, we propose an UPerNet-based deep learning approach in this paper, to segment the stomach, small bowel, and large bowel in gastrointestinal tract images with excellent performance. The dataset in this work is from the UW-Madison GI Tract Image Segmentation Kaggle competition. The input images are obtained by applying a 2.5D preprocessing method on this dataset. We choose the EfficientNet-B4 and Swin Transformer (base) as the backbones of the UPerNet architecture separately. An average ensemble of these two models is subsequently implemented to boost the model performance. After applying the K-Fold cross validation, our method reaches a competition score 0.86827 on the private test set. With this performance, our team locates at the 135th place among 1548 teams and gets a bronze medal in the Kaggle competition. 
This work would accelerate the development of auxiliary systems for the segmentation of gastrointestinal tract images, and could potentially contribute to the research of generalized segmentation methods for medical images.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3599589.3599595","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
When giving radiation therapy to patients with gastrointestinal cancers, radiation oncologists must manually outline the locations of the stomach and intestines in order to adjust the direction of the X-ray beam. This process increases the dose delivered to the tumor while sparing the stomach and intestines, but it is time-consuming and labor-intensive. Automated segmentation methods for gastrointestinal tract images would therefore enable faster and more effective treatment for patients. To that end, we propose a UPerNet-based deep learning approach to segment the stomach, small bowel, and large bowel in gastrointestinal tract images. The dataset in this work comes from the UW-Madison GI Tract Image Segmentation Kaggle competition, and the input images are obtained by applying a 2.5D preprocessing method to this dataset. We use EfficientNet-B4 and Swin Transformer (base) as two separate backbones for the UPerNet architecture, and subsequently implement an average ensemble of the two models to boost performance. With K-fold cross-validation, our method achieves a competition score of 0.86827 on the private test set, placing our team 135th among 1548 teams and earning a bronze medal. This work should accelerate the development of auxiliary systems for the segmentation of gastrointestinal tract images, and could contribute to research on generalized segmentation methods for medical images.
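The abstract mentions a 2.5D preprocessing method but does not spell out the details. A common way to build 2.5D inputs from a scan volume, which this sketch assumes (the function name, stride, and boundary handling are illustrative, not taken from the paper), is to stack each slice with its two neighbors as the three channels of a single image:

```python
import numpy as np

def make_25d_input(volume: np.ndarray, idx: int, stride: int = 2) -> np.ndarray:
    """Build a 2.5D input for slice `idx` of a (num_slices, H, W) volume.

    The slice is stacked with its neighbors at idx - stride and
    idx + stride along a new channel axis, giving an (H, W, 3) image.
    Indices are clamped at the volume boundaries.
    """
    n = volume.shape[0]
    ids = [max(0, idx - stride), idx, min(n - 1, idx + stride)]
    return np.stack([volume[i] for i in ids], axis=-1)
```

This keeps the 2D segmentation pipeline unchanged while giving the network some through-plane context; the `stride` controls how far apart the context slices are.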
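The average ensemble of the EfficientNet-B4 and Swin Transformer models can be sketched as follows. This is a minimal illustration under the assumption that each model outputs per-pixel foreground probabilities and that the averaged map is thresholded at 0.5; the function name and threshold are not specified in the paper:

```python
import numpy as np

def average_ensemble(probs_a: np.ndarray,
                     probs_b: np.ndarray,
                     threshold: float = 0.5) -> np.ndarray:
    """Average per-pixel probabilities from two models, then binarize.

    `probs_a` and `probs_b` are probability maps of identical shape
    (e.g. post-sigmoid outputs); the result is a 0/1 mask.
    """
    mean = (np.asarray(probs_a) + np.asarray(probs_b)) / 2.0
    return (mean >= threshold).astype(np.uint8)
```

Averaging before thresholding lets the two backbones compensate for each other's errors, which typically gives a small but consistent boost over either model alone.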