Zhihao Ren, Shengning Lu, Xinhua Wang, Yaoming Liu, Yong Liang
MSCA: A few-shot segmentation framework driven by multi-scale cross-attention and information extraction

Journal: Computer Vision and Image Understanding, Volume 259, Article 104419
DOI: 10.1016/j.cviu.2025.104419
Publication date: 2025-06-10
URL: https://www.sciencedirect.com/science/article/pii/S1077314225001420
Citations: 0
Abstract
Few-Shot Semantic Segmentation (FSS) aims to achieve precise pixel-level segmentation of target objects in query images using only a small number of annotated support images. The main challenge lies in effectively capturing and transferring critical information from support samples while establishing fine-grained semantic associations between query and support images to improve segmentation accuracy. However, existing methods struggle with spatial alignment issues caused by intra-class variations and inter-class visual similarities, and they fail to fully integrate high-level and low-level decoder features. To address these limitations, we propose a novel framework based on cross-scale interactive attention mechanisms. This framework employs a hybrid mask-guided multi-scale feature fusion strategy, constructing a cross-scale attention network that spans from local details to global context. It dynamically enhances target region representation and alleviates spatial misalignment issues. Furthermore, we design a hierarchical multi-axis decoding architecture that progressively integrates multi-resolution feature pathways, enabling the model to focus on semantic associations within foreground regions. Experimental results show that our Multi-Scale Cross-Attention (MSCA) model performs exceptionally well on the PASCAL-5i and COCO-20i benchmark datasets, achieving highly competitive results. Notably, the model contains only 1.86 million learnable parameters, demonstrating its efficiency and practical applicability.
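The core mechanism the abstract describes is cross-attention from query-image features to mask-selected support features. The paper's actual architecture is not given here, so the following is only a minimal numpy sketch of that general idea: every query pixel attends over the foreground pixels of the support image, producing an attended feature map at each scale. All shapes, function names, and the two-scale fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_attention(q_feat, s_feat, s_mask):
    """Scaled dot-product cross-attention: query features attend to
    mask-selected support features (illustrative sketch, not the paper's code).

    q_feat: (Nq, C) flattened query-image features
    s_feat: (Ns, C) flattened support-image features
    s_mask: (Ns,)   binary foreground mask for the support image
    returns (Nq, C) attended features for the query pixels
    """
    C = q_feat.shape[1]
    # Similarity between every query pixel and every support pixel.
    scores = q_feat @ s_feat.T / np.sqrt(C)                    # (Nq, Ns)
    # Suppress background support pixels so attention stays on the target.
    scores = np.where(s_mask[None, :] > 0, scores, -1e9)
    # Softmax over the support dimension.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ s_feat                                    # (Nq, C)

def multi_scale_cross_attention(q_feats, s_feats, s_masks):
    """Apply cross-attention independently at each feature scale; a real
    model would then fuse these outputs across scales in its decoder."""
    return [cross_attention(q, s, m)
            for q, s, m in zip(q_feats, s_feats, s_masks)]
```

Masking the background before the softmax is what lets the attended representation track only the target object, which is one plausible reading of the "mask-guided" fusion the abstract mentions.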
Journal introduction:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems