Wei Jianjian, Gang Li, Kan He, Pengbo Li, Ling Zhang, Ronghua Wang
MCSC-UTNet: Honeycomb lung segmentation algorithm based on Separable Vision Transformer and context feature fusion
Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning
Published: 2023-03-17
DOI: 10.1145/3590003.3590093
Abstract
X-ray tomography images of the honeycomb lung suffer from high noise and low contrast, and current medical segmentation algorithms generalize poorly, so segmentation results are often unsatisfactory. To address these problems, we propose MCSC-UTNet, an automatic segmentation algorithm for honeycomb lung lesions based on the Separable Vision Transformer (SepViT) with contextual feature fusion. First, a Multi-scale Channel Shuffle Convolution (MCSC) module is constructed to enhance the interaction between image channels and extract local lesion features at multiple scales. Then, a SepViT module is introduced at the bottleneck layer of the network to strengthen the representation of the lesion's global information. Finally, a context-aware fusion module relearns the encoder features and strengthens the contextual relevance between the encoder and decoder. In comparison experiments with eight prevalent segmentation models on a honeycomb lung dataset, the method achieves a Jaccard coefficient of 90.85%, mIoU of 95.32%, and DSC of 95.07%, with the Jaccard coefficient improving by 3.56% over the baseline. Compared with medical segmentation models such as TransUNet, Sharp U-Net, and SETR, the proposed method delivers better segmentation performance.
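The abstract gives no implementation details, but the channel-shuffle operation named in the MCSC module is a standard technique (popularized by ShuffleNet): channels are split into groups, and the group/channel axes are transposed so that later grouped convolutions mix information across groups. Below is a minimal sketch in plain Python of that reshape-transpose-flatten index permutation; the function name and group count are illustrative assumptions, not taken from the paper.

```python
def channel_shuffle(channels, groups):
    """Reorder a flat list of per-channel values as in ShuffleNet-style
    channel shuffle: view as (groups, channels_per_group), transpose,
    and flatten, so information is interleaved across groups.
    NOTE: illustrative sketch only -- the paper's MCSC module applies
    this idea to convolutional feature maps, not to Python lists."""
    n = len(channels)
    assert n % groups == 0, "channel count must divide evenly into groups"
    per_group = n // groups
    # Output position i takes the channel at (i % groups) * per_group + i // groups,
    # which is exactly the transpose of the (groups, per_group) layout.
    return [channels[(i % groups) * per_group + i // groups] for i in range(n)]

# Example: 6 channels in 2 groups [0,1,2 | 3,4,5] interleave to [0, 3, 1, 4, 2, 5]
print(channel_shuffle(list(range(6)), 2))
```

In a real network this permutation is applied to the channel axis of a feature tensor between grouped convolutions, which is what lets a multi-scale grouped design like MCSC exchange information between its branches.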