{"title":"Vision transformer enhanced with convolutional attention and graph convolution for semantic segmentation","authors":"Yongzhi Liu, Tongxin Yan","doi":"10.1016/j.imavis.2025.105633","DOIUrl":null,"url":null,"abstract":"<div><div>Semantic segmentation is a dense prediction task that assigns semantic labels to every pixel in an image. Effectively modeling global contextual information is a primary challenge in this task. Recently, some methods using Vision Transformer (ViT) encoders based on self-attention mechanisms have shown significant performance improvements. However, encoding spatial information purely through self-attention mechanisms tends to provide a more holistic representation and performs inadequately in handling object details. To address this, we propose a stripe depth-wise convolutional attention (SDCA) module. This module aggregates local convolution features at multiple scales as its attention map. Utilizing attention map generated by convolution at different scales effectively compensates for the limitations of self-attention mechanisms in handling object details. Additionally, to ensure the generation of more coherent predictions, we introduce a spatial feature graph convolution (SFGC) module to explicitly model the spatial relationships between patches. We apply these two modules in parallel to the output features of the Transformer block and add their output features to the original features for subsequent layer learning. Our method achieved mIoU scores of 50.5%, 59.1% and 55.0% on the COCO-Stuff-10K, PASCAL-Context and ADE20K datasets, respectively, surpassing some of the recent state-of-the-art methods.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"161 ","pages":"Article 105633"},"PeriodicalIF":4.2000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625002215","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Semantic segmentation is a dense prediction task that assigns a semantic label to every pixel in an image. Effectively modeling global contextual information is a primary challenge in this task. Recently, methods using Vision Transformer (ViT) encoders based on self-attention mechanisms have shown significant performance improvements. However, encoding spatial information purely through self-attention tends to produce a holistic representation and handles object details inadequately. To address this, we propose a stripe depth-wise convolutional attention (SDCA) module, which aggregates local convolution features at multiple scales as its attention map. Using attention maps generated by convolutions at different scales effectively compensates for the limitations of self-attention mechanisms in handling object details. Additionally, to produce more coherent predictions, we introduce a spatial feature graph convolution (SFGC) module that explicitly models the spatial relationships between patches. We apply the two modules in parallel to the output features of the Transformer block and add their outputs to the original features for subsequent layer learning. Our method achieved mIoU scores of 50.5%, 59.1% and 55.0% on the COCO-Stuff-10K, PASCAL-Context and ADE20K datasets, respectively, surpassing several recent state-of-the-art methods.
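The abstract describes two modules applied in parallel to Transformer-block features with a residual addition. Below is a minimal PyTorch sketch of how such a design could be wired. The kernel sizes, the 4-neighbour grid adjacency used for the graph, and the class/layer structure shown here are illustrative assumptions based only on the abstract, not the authors' actual implementation.

```python
# Illustrative sketch of SDCA + SFGC applied in parallel to Transformer output.
# All hyper-parameters and internal layer choices are assumptions.
import torch
import torch.nn as nn


class SDCA(nn.Module):
    """Stripe depth-wise convolutional attention (assumed form).

    Aggregates depth-wise stripe convolutions at several scales and uses the
    result as a multiplicative attention map over the input features."""

    def __init__(self, dim, scales=(7, 11, 21)):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        # For each scale k: a 1xk followed by a kx1 depth-wise stripe conv.
        self.stripes = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(dim, dim, (1, k), padding=(0, k // 2), groups=dim),
                nn.Conv2d(dim, dim, (k, 1), padding=(k // 2, 0), groups=dim),
            )
            for k in scales
        )
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                          # x: (B, C, H, W)
        attn = self.local(x)
        attn = attn + sum(branch(attn) for branch in self.stripes)
        attn = self.proj(attn)                     # conv-generated attention map
        return attn * x                            # modulate the input features


class SFGC(nn.Module):
    """Spatial feature graph convolution (assumed form).

    Treats each patch as a graph node and propagates features over a fixed
    4-connected spatial adjacency to encourage coherent predictions."""

    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim)

    @staticmethod
    def _grid_adjacency(h, w, device):
        # Row-normalised adjacency (with self-loops) of an h x w 4-connected grid.
        idx = torch.arange(h * w, device=device).view(h, w)
        horiz = torch.stack([idx[:, :-1].reshape(-1), idx[:, 1:].reshape(-1)])
        vert = torch.stack([idx[:-1, :].reshape(-1), idx[1:, :].reshape(-1)])
        e = torch.cat([horiz, vert], dim=1)
        a = torch.zeros(h * w, h * w, device=device)
        a[e[0], e[1]] = 1.0
        a = a + a.t() + torch.eye(h * w, device=device)
        return a / a.sum(-1, keepdim=True)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        nodes = x.flatten(2).transpose(1, 2)       # (B, HW, C) patch nodes
        a = self._grid_adjacency(h, w, x.device)
        out = self.weight(a @ nodes)               # one graph-convolution step
        return out.transpose(1, 2).reshape(b, c, h, w)


class ParallelEnhancement(nn.Module):
    """Applies SDCA and SFGC in parallel to Transformer-block output features
    and adds their results back to the original features (residual form)."""

    def __init__(self, dim):
        super().__init__()
        self.sdca = SDCA(dim)
        self.sfgc = SFGC(dim)

    def forward(self, x):                          # x: (B, C, H, W) patch features
        return x + self.sdca(x) + self.sfgc(x)


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 32)            # hypothetical Transformer output
    print(ParallelEnhancement(256)(feats).shape)   # torch.Size([2, 256, 32, 32])
```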
Journal Introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.