VA-TransUNet: A U-shaped Medical Image Segmentation Network with Visual Attention
Ting Jiang, Tao Xu, Xiaoning Li
Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition
Published: 2022-11-17 · DOI: 10.1145/3581807.3581826
Abstract: Medical image segmentation is clinically important because it enables accurate lesion detection, helping physicians diagnose disease and plan treatment. The Vision Transformer (ViT) has achieved remarkable results in computer vision and has been applied to image segmentation tasks, but its potential for medical image segmentation remains largely unexplored, given the special characteristics of medical images. Moreover, ViT, which is built on multi-head self-attention (MSA), flattens the image into a one-dimensional sequence, destroying its two-dimensional structure. We therefore propose VA-TransUNet, which combines the advantages of Transformers and Convolutional Neural Networks (CNNs) to capture both global and local contextual information while also attending to features along the channel dimension. A Transformer based on visual attention serves as the encoder, a CNN serves as the decoder, and the image is fed directly into the Transformer. The key component of visual attention is large kernel attention (LKA), which decomposes a large-kernel convolution into a depth-wise convolution, a depth-wise dilated convolution, and a point-wise (1×1) convolution. Experiments on the Synapse multi-organ abdominal segmentation (Synapse) and Automated Cardiac Diagnosis Challenge (ACDC) datasets demonstrate that the proposed VA-TransUNet outperforms current state-of-the-art networks. The code and trained models will be made publicly available at https://github.com/BeautySilly/VA-TransUNet.
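The LKA decomposition mentioned in the abstract trades one dense large-kernel convolution for three much cheaper ones. As a rough illustration (a sketch, not taken from the paper's code; the channel count and the specific configuration — a 21×21 kernel approximated by a 5×5 depth-wise convolution, a 7×7 depth-wise convolution with dilation 3, and a 1×1 point-wise convolution — follow the commonly cited LKA setup and are assumptions here), the parameter savings can be computed directly:

```python
# Rough parameter-count comparison for large kernel attention (LKA).
# Assumed configuration (not from the paper itself): a 21x21 convolution
# approximated by a 5x5 depth-wise conv, a 7x7 depth-wise conv with
# dilation 3, and a 1x1 point-wise conv. Biases are ignored.

def dense_conv_params(channels: int, kernel: int) -> int:
    """Parameters of a standard KxK convolution with C input and C output channels."""
    return channels * channels * kernel * kernel

def lka_params(channels: int, dw_kernel: int = 5, dil_kernel: int = 7) -> int:
    """Parameters of the LKA decomposition: depth-wise + dilated depth-wise + 1x1."""
    depth_wise = channels * dw_kernel * dw_kernel    # 5x5, one filter per channel
    dilated = channels * dil_kernel * dil_kernel     # 7x7 dilated, one per channel
    point_wise = channels * channels                 # 1x1 conv mixes channels
    return depth_wise + dilated + point_wise

if __name__ == "__main__":
    C = 64  # hypothetical channel width for illustration
    dense = dense_conv_params(C, 21)   # 1,806,336
    lka = lka_params(C)                # 8,832
    print(f"dense 21x21: {dense:,} params vs LKA: {lka:,} params")
```

Because the two depth-wise stages apply one filter per channel rather than per channel pair, the only term quadratic in the channel count is the cheap 1×1 convolution, which is what makes large effective kernels affordable.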