MCSC-UTNet: Honeycomb lung segmentation algorithm based on Separable Vision Transformer and context feature fusion

Wei Jianjian, Gang Li, Kan He, Pengbo Li, Ling Zhang, Ronghua Wang
{"title":"MCSC-UTNet:基于可分离视觉转换器和上下文特征融合的蜂窝肺分割算法","authors":"Wei Jianjian, Gang Li, Kan He, Pengbo Li, Ling Zhang, Ronghua Wang","doi":"10.1145/3590003.3590093","DOIUrl":null,"url":null,"abstract":"Abstract: Due to the problems of more noise and lower contrast in X-ray tomography images of the honeycomb lung, and the poor generalization of current medical segmentation algorithms, the segmentation results are unsatisfactory. We propose an automatic segmentation algorithm MCSC-UTNet based on SepViT with contextual feature fusion for honeycomb lung lesions to address these problems. Firstly, a Multi-scale Channel Shuffle Convolution (MCSC) module is constructed to enhance the interaction between different image channels and extract the local lesion feature at different scales. Then, a Separable Vision Transformer (SepViT) module is introduced at the bottleneck layer of the network to enhance the representation of the global information of the lesion. Finally, we add a context-aware fusion module to relearn the encoder feature and strengthen the contextual relevance of the encoder and decoder. In comparison experiments with eight prevalent segmentation models on the honeycomb lung dataset, the segmentation metrics of this method, Jaccard coefficient, mIoU, and DSC are 90.85%, 95.32%, and 95.07%, with Jaccard coefficient improving by 3.56% compared with that before. 
Compared with medical segmentation models such as TransUNet, Sharp U-Net, and SETR, this paper's method has improved results and segmentation performance.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MCSC-UTNet: Honeycomb lung segmentation algorithm based on Separable Vision Transformer and context feature fusion\",\"authors\":\"Wei Jianjian, Gang Li, Kan He, Pengbo Li, Ling Zhang, Ronghua Wang\",\"doi\":\"10.1145/3590003.3590093\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract: Due to the problems of more noise and lower contrast in X-ray tomography images of the honeycomb lung, and the poor generalization of current medical segmentation algorithms, the segmentation results are unsatisfactory. We propose an automatic segmentation algorithm MCSC-UTNet based on SepViT with contextual feature fusion for honeycomb lung lesions to address these problems. Firstly, a Multi-scale Channel Shuffle Convolution (MCSC) module is constructed to enhance the interaction between different image channels and extract the local lesion feature at different scales. Then, a Separable Vision Transformer (SepViT) module is introduced at the bottleneck layer of the network to enhance the representation of the global information of the lesion. Finally, we add a context-aware fusion module to relearn the encoder feature and strengthen the contextual relevance of the encoder and decoder. In comparison experiments with eight prevalent segmentation models on the honeycomb lung dataset, the segmentation metrics of this method, Jaccard coefficient, mIoU, and DSC are 90.85%, 95.32%, and 95.07%, with Jaccard coefficient improving by 3.56% compared with that before. 
Compared with medical segmentation models such as TransUNet, Sharp U-Net, and SETR, this paper's method has improved results and segmentation performance.\",\"PeriodicalId\":340225,\"journal\":{\"name\":\"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3590003.3590093\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3590003.3590093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Abstract: X-ray tomography images of the honeycomb lung suffer from heavy noise and low contrast, and current medical segmentation algorithms generalize poorly, so segmentation results are unsatisfactory. To address these problems, we propose MCSC-UTNet, an automatic segmentation algorithm for honeycomb lung lesions based on SepViT with contextual feature fusion. First, a Multi-scale Channel Shuffle Convolution (MCSC) module is constructed to enhance the interaction between different image channels and extract local lesion features at different scales. Then, a Separable Vision Transformer (SepViT) module is introduced at the bottleneck layer of the network to strengthen the representation of the lesion's global information. Finally, a context-aware fusion module relearns the encoder features and reinforces the contextual relevance between encoder and decoder. In comparison experiments with eight prevalent segmentation models on the honeycomb lung dataset, the method achieves a Jaccard coefficient of 90.85%, an mIoU of 95.32%, and a DSC of 95.07%, with the Jaccard coefficient improving by 3.56% over the baseline. Compared with medical segmentation models such as TransUNet, Sharp U-Net, and SETR, the proposed method delivers improved segmentation results and performance.
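The MCSC module's cross-channel interaction builds on the channel-shuffle operation popularized by ShuffleNet. The paper itself does not provide code, so the following NumPy sketch is illustrative only: it shows the core shuffle step (group the channels, transpose the group axis, flatten back) that lets subsequent grouped convolutions mix information across channel groups. The function name and the toy tensor are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Interleave channels across groups (channel shuffle, as in ShuffleNet).

    x: feature map of shape (C, H, W); C must be divisible by `groups`.
    """
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the group count"
    # Reshape to (groups, C // groups, H, W), swap the first two axes,
    # then flatten back so channels from different groups are interleaved.
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# Toy example: 4 channels (values 0..3), 2 groups.
# Groups before shuffle: [0, 1] and [2, 3]; after shuffle the
# channel order is [0, 2, 1, 3], i.e. the groups are interleaved.
x = np.arange(4, dtype=float).reshape(4, 1, 1) * np.ones((4, 2, 2))
y = channel_shuffle(x, groups=2)
print([int(y[i, 0, 0]) for i in range(4)])  # → [0, 2, 1, 3]
```

In the MCSC module as described, such a shuffle would sit between multi-scale grouped convolutions so that each scale's output channels contribute to every group at the next stage; the exact placement here is an inference from the abstract.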