TRNet: Two-Tier Recursion Network for Co-Salient Object Detection

Impact Factor: 11.1 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (Engineering, Electrical & Electronic)
Runmin Cong;Ning Yang;Hongyu Liu;Dingwen Zhang;Qingming Huang;Sam Kwong;Wei Zhang
{"title":"TRNet: Two-Tier Recursion Network for Co-Salient Object Detection","authors":"Runmin Cong;Ning Yang;Hongyu Liu;Dingwen Zhang;Qingming Huang;Sam Kwong;Wei Zhang","doi":"10.1109/TCSVT.2025.3534908","DOIUrl":null,"url":null,"abstract":"Co-salient object detection (CoSOD) is to find the salient and recurring objects from a series of relevant images, where modeling inter-image relationships plays a crucial role. Different from the commonly used direct learning structure that inputs all the intra-image features into some well-designed modules to represent the inter-image relationship, we resort to adopting a recursive structure for inter-image modeling, and propose a two-tier recursion network (TRNet) to achieve CoSOD in this paper. The two-tier recursive structure of the proposed TRNet is embodied in two stages of inter-image extraction and distribution. On the one hand, considering the task adaptability and inter-image correlation, we design an inter-image exploration with recursive reinforcement module to learn the local and global inter-image correspondences, guaranteeing the validity and discriminativeness of the information in the step-by-step propagation. On the other hand, we design a dynamic recursion distribution module to fully exploit the role of inter-image correspondences in a recursive structure, adaptively assigning common attributes to each individual image through an improved semi-dynamic convolution. Experimental results on five prevailing CoSOD benchmarks demonstrate that our TRNet outperforms other competitors in terms of various evaluation metrics. The code and results of our method are available at <uri>https://github.com/rmcong/TRNet_TCSVT2025</uri>.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 6","pages":"5844-5857"},"PeriodicalIF":11.1000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10855555/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Co-salient object detection (CoSOD) aims to find the salient and recurring objects in a group of relevant images, where modeling inter-image relationships plays a crucial role. Unlike the commonly used direct learning structure, which feeds all intra-image features into well-designed modules to represent the inter-image relationship, we adopt a recursive structure for inter-image modeling and propose a two-tier recursion network (TRNet) for CoSOD. The two-tier recursive structure of TRNet is embodied in two stages: inter-image extraction and inter-image distribution. On the one hand, considering task adaptability and inter-image correlation, we design an inter-image exploration with recursive reinforcement module that learns local and global inter-image correspondences, guaranteeing the validity and discriminability of the information during step-by-step propagation. On the other hand, we design a dynamic recursion distribution module that fully exploits the inter-image correspondences within the recursive structure, adaptively assigning common attributes to each individual image through an improved semi-dynamic convolution. Experimental results on five prevailing CoSOD benchmarks demonstrate that TRNet outperforms competing methods across various evaluation metrics. The code and results are available at https://github.com/rmcong/TRNet_TCSVT2025.
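To make the two tiers concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes: an extraction tier that recursively folds per-image features into a shared group state, and a distribution tier that conditions per-image convolution kernels on that state. Every name, shape, and update rule here (the convolutional recursion, the depthwise dynamic kernels) is our own illustrative assumption, not TRNet's actual implementation; consult the linked repository for the authors' code.

# A minimal sketch of a two-tier recursive CoSOD pattern.
# All module names, shapes, and update rules are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecursiveExtractor(nn.Module):
    # Tier 1 (assumed form): accumulate inter-image correspondence by
    # folding one image's features into the group state per recursion step.
    def __init__(self, channels):
        super().__init__()
        self.update = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):                    # feats: (N, C, H, W) for N images
        state = feats[0:1]                       # initialize state from the first image
        for i in range(1, feats.size(0)):        # step-by-step propagation
            merged = torch.cat([state, feats[i:i + 1]], dim=1)
            state = F.relu(self.update(merged))
        return state                             # (1, C, H, W) group-level state

class DynamicDistributor(nn.Module):
    # Tier 2 (assumed form): generate depthwise kernels from the group state
    # and apply them to every image, handing common attributes back to
    # each individual image.
    def __init__(self, channels, k=3):
        super().__init__()
        self.channels, self.k = channels, k
        self.kernel_head = nn.Linear(channels, channels * k * k)

    def forward(self, feats, state):             # feats: (N, C, H, W)
        g = state.mean(dim=(2, 3)).squeeze(0)    # (C,) pooled group descriptor
        w = self.kernel_head(g).view(self.channels, 1, self.k, self.k)
        return F.conv2d(feats, w, padding=self.k // 2, groups=self.channels)

# Toy usage: a group of 5 related images with 64-channel backbone features.
feats = torch.randn(5, 64, 32, 32)
state = RecursiveExtractor(64)(feats)
common = DynamicDistributor(64)(feats, state)
print(common.shape)                              # torch.Size([5, 64, 32, 32])

This sketch only shows the flow of information between the two tiers; in the paper, both stages are recursive and the distribution uses the authors' improved semi-dynamic convolution rather than the plain dynamic depthwise convolution assumed here.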
Source Journal
CiteScore: 13.80
Self-citation rate: 27.40%
Annual articles: 660
Review time: 5 months
About the journal: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.