Instance-aware context with mutually guided vision-language attention for referring image segmentation

Qiule Sun, Jianxin Zhang, Bingbing Zhang, Peihua Li

Applied Intelligence, vol. 55, no. 13, published 2025-08-30. DOI: 10.1007/s10489-025-06851-1 (https://link.springer.com/article/10.1007/s10489-025-06851-1)
Abstract
Referring image segmentation, which integrates visual and linguistic modalities, is a forefront challenge in cross-modal visual research. Traditional approaches generally fuse linguistic features with visual data to generate multi-modal representations for mask decoding. However, these methods often mistakenly segment visually prominent entities rather than the specific region indicated by the referring expression, as the visual context tends to overshadow the multi-modal features. To address this, we introduce IMNet, a novel referring image segmentation framework that harnesses the Contrastive Language-Image Pre-training (CLIP) model and incorporates a mutually guided vision-language attention mechanism to identify the referred mask more accurately. Specifically, our mutually guided vision-language attention mechanism consists of language-guided attention and vision-guided attention, which together model bi-directional relationships between visual and linguistic features. Additionally, to segment instances accurately from referring expressions, we develop an instance-aware context module within the decoder that focuses on learning instance-specific features. This module connects instance prototypes with their corresponding features and uses linearly weighted prototypes for the final prediction. We evaluate the proposed method on three publicly available datasets: RefCOCO, RefCOCO+, and G-Ref. Comparisons with previous methods demonstrate that our approach achieves competitive performance.
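The abstract describes two components concretely enough to sketch: the bi-directional cross-attention and the prototype-weighted decoder. The PyTorch sketch below is an illustration under stated assumptions, not the paper's implementation: the class names (MutualVLAttention, InstanceAwareContext), the use of nn.MultiheadAttention as the attention primitive, the residual connections, the number of prototypes, and the pooled sentence embedding `sent` are all assumptions not taken from the paper.

```python
import torch
import torch.nn as nn

class MutualVLAttention(nn.Module):
    """Hypothetical bi-directional cross-attention: language-guided
    attention updates visual tokens, vision-guided attention updates
    word tokens."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.lang_guided = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vis_guided = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis, lang):
        # vis: (B, HW, C) flattened visual tokens; lang: (B, L, C) word tokens
        vis2, _ = self.lang_guided(query=vis, key=lang, value=lang)  # language guides vision
        lang2, _ = self.vis_guided(query=lang, key=vis, value=vis)   # vision guides language
        return vis + vis2, lang + lang2  # residual connections (assumption)

class InstanceAwareContext(nn.Module):
    """Hypothetical prototype decoder: learn K instance prototypes and
    predict the mask from a linear weighting of their response maps."""
    def __init__(self, dim, num_prototypes=8):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.weight = nn.Linear(dim, num_prototypes)  # per-expression prototype weights

    def forward(self, vis, sent):
        # vis: (B, HW, C) visual tokens; sent: (B, C) pooled sentence embedding
        corr = torch.einsum('bnc,kc->bnk', vis, self.prototypes)  # prototype response maps
        alpha = self.weight(sent).softmax(dim=-1)                 # (B, K) linear weights
        mask_logits = torch.einsum('bnk,bk->bn', corr, alpha)     # weighted combination
        return mask_logits  # reshape to (B, H, W) for the final mask
```

In this reading, each prototype yields a candidate instance response map over the visual tokens, and the referring expression decides, through the learned weights, which prototypes contribute to the final mask, which is one plausible way to connect "instance prototypes" to "linearly weighted prototypes for final prediction."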
About the Journal
With a focus on research in artificial intelligence and neural networks, this journal addresses real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and that require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments that address real, complex problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.