Instance-aware context with mutually guided vision-language attention for referring image segmentation

IF 3.5 · CAS Tier 2 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Qiule Sun, Jianxin Zhang, Bingbing Zhang, Peihua Li
DOI: 10.1007/s10489-025-06851-1
Journal: Applied Intelligence, vol. 55, no. 13
Published: 2025-08-30
Full text: https://link.springer.com/article/10.1007/s10489-025-06851-1
Citations: 0

Abstract

Referring image segmentation, which integrates both visual and linguistic modalities, represents a forefront challenge in cross-modal visual research. Traditional approaches generally fuse linguistic features with visual data to generate multi-modal representations for mask decoding. However, these methods often mistakenly segment visually prominent entities rather than the specific region indicated by the referring expression, as the visual context tends to overshadow the multi-modal features. To address this, we introduce IMNet, a novel referring image segmentation framework that harnesses the Contrastive Language-Image Pre-training (CLIP) model and incorporates a mutually guided vision-language attention mechanism to enhance accuracy in identifying the referring mask. Specifically, our mutually guided vision-language attention mechanism consists of language-guided attention and vision-guided attention, which model bi-directional relationships between visual and linguistic features. Additionally, to accurately segment instances based on referring expressions, we develop an instance-aware context module within the decoder that focuses on learning instance-specific features. This module connects instance prototypes with corresponding features, using linearly weighted prototypes for final prediction. We evaluate the proposed method on three publicly available datasets, i.e., RefCOCO, RefCOCO+, and G-Ref. Comparisons with previous methods demonstrate that our approach achieves competitive performance.
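The two mechanisms the abstract describes — bi-directional (language-guided and vision-guided) cross-attention, and an instance-aware module that scores pixels via linearly weighted prototypes — can be illustrated with a minimal NumPy sketch. All shapes, variable names, and the single-head formulation below are hypothetical simplifications for illustration, not the paper's actual architecture (which builds on CLIP features and learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key, value):
    # scaled dot-product attention: each query row attends over key/value rows
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ value

# toy inputs (illustrative sizes): 16 visual tokens from a flattened
# feature map, 5 word tokens from the referring expression, dim 8
rng = np.random.default_rng(0)
d = 8
vis = rng.standard_normal((16, d))
lang = rng.standard_normal((5, d))

# language-guided attention: visual tokens query the linguistic features
vis_enh = vis + cross_attention(vis, lang, lang)
# vision-guided attention: word tokens query the visual features
lang_enh = lang + cross_attention(lang, vis, vis)

# instance-aware context (sketch): K instance prototypes gather
# instance-specific features from the enhanced visual tokens, then each
# pixel's score comes from a linear weighting of those prototypes
K = 3
protos = rng.standard_normal((K, d))
proto_feats = cross_attention(protos, vis_enh, vis_enh)  # (K, d)
weights = softmax(vis_enh @ proto_feats.T, axis=-1)      # (16, K) per-pixel mix
mask_logits = (weights @ proto_feats * vis_enh).sum(-1)  # (16,) mask scores
```

The sketch shows why the mechanism is called "mutually guided": each modality serves once as query and once as key/value, so the two feature streams condition each other before mask decoding rather than vision dominating a one-way fusion.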

Source journal: Applied Intelligence (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 6.60
Self-citation rate: 20.80%
Articles per year: 1361
Average review time: 5.9 months
Journal description: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.