{"title":"TRNet: Two-Tier Recursion Network for Co-Salient Object Detection","authors":"Runmin Cong;Ning Yang;Hongyu Liu;Dingwen Zhang;Qingming Huang;Sam Kwong;Wei Zhang","doi":"10.1109/TCSVT.2025.3534908","DOIUrl":null,"url":null,"abstract":"Co-salient object detection (CoSOD) is to find the salient and recurring objects from a series of relevant images, where modeling inter-image relationships plays a crucial role. Different from the commonly used direct learning structure that inputs all the intra-image features into some well-designed modules to represent the inter-image relationship, we resort to adopting a recursive structure for inter-image modeling, and propose a two-tier recursion network (TRNet) to achieve CoSOD in this paper. The two-tier recursive structure of the proposed TRNet is embodied in two stages of inter-image extraction and distribution. On the one hand, considering the task adaptability and inter-image correlation, we design an inter-image exploration with recursive reinforcement module to learn the local and global inter-image correspondences, guaranteeing the validity and discriminativeness of the information in the step-by-step propagation. On the other hand, we design a dynamic recursion distribution module to fully exploit the role of inter-image correspondences in a recursive structure, adaptively assigning common attributes to each individual image through an improved semi-dynamic convolution. Experimental results on five prevailing CoSOD benchmarks demonstrate that our TRNet outperforms other competitors in terms of various evaluation metrics. The code and results of our method are available at <uri>https://github.com/rmcong/TRNet_TCSVT2025</uri>.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 6","pages":"5844-5857"},"PeriodicalIF":11.1000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10855555/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Co-salient object detection (CoSOD) aims to find the salient objects that recur across a group of relevant images, a task in which modeling inter-image relationships plays a crucial role. Unlike the commonly used direct-learning structure, which feeds all intra-image features into purpose-built modules to represent the inter-image relationship, we adopt a recursive structure for inter-image modeling and propose a two-tier recursion network (TRNet) for CoSOD in this paper. The two-tier recursive structure of TRNet is embodied in two stages: inter-image extraction and distribution. On the one hand, considering task adaptability and inter-image correlation, we design an inter-image exploration with recursive reinforcement module to learn local and global inter-image correspondences, guaranteeing the validity and discriminativeness of the information propagated step by step. On the other hand, we design a dynamic recursion distribution module to fully exploit the inter-image correspondences within the recursive structure, adaptively assigning common attributes to each individual image through an improved semi-dynamic convolution. Experimental results on five prevailing CoSOD benchmarks demonstrate that TRNet outperforms its competitors across a variety of evaluation metrics. The code and results of our method are available at https://github.com/rmcong/TRNet_TCSVT2025.
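The abstract names two mechanisms without detailing them: recursively folding inter-image cues into a group-level representation, and redistributing that representation to each image through a dynamically generated (semi-dynamic) convolution. The PyTorch sketch below only illustrates that general pattern under assumed shapes and module names (GroupRecursion, DynamicDistribution, and everything inside them are hypothetical placeholders); the authors' actual design may differ substantially, so consult the linked repository for the real implementation.

```python
# A minimal, hypothetical sketch of the two recursion tiers the abstract
# describes; names, shapes, and update rules are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupRecursion(nn.Module):
    """Tier 1 (sketch): fold per-image features into a shared group state
    one image at a time, so inter-image cues accumulate recursively."""

    def __init__(self, channels: int):
        super().__init__()
        self.update = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) features of N related images
        state = feats.mean(dim=0, keepdim=True)        # coarse initial group state
        for i in range(feats.size(0)):                 # step-by-step refinement
            pair = torch.cat([state, feats[i : i + 1]], dim=1)
            state = F.relu(self.update(pair)) + state  # residual recursive update
        return state                                   # (1, C, H, W)


class DynamicDistribution(nn.Module):
    """Tier 2 (sketch): derive per-channel depthwise kernels from the group
    state and convolve every image with them, loosely in the spirit of a
    dynamic convolution (the paper's "semi-dynamic" variant likely differs)."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.channels, self.k = channels, k
        self.kernel_head = nn.Linear(channels, channels * k * k)

    def forward(self, feats: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        vec = state.mean(dim=(2, 3))                   # (1, C) global descriptor
        w = self.kernel_head(vec).view(self.channels, 1, self.k, self.k)
        # depthwise conv distributes the shared cues back to each image
        return F.conv2d(feats, w, padding=self.k // 2, groups=self.channels)


if __name__ == "__main__":
    feats = torch.randn(5, 64, 32, 32)                 # 5 related images
    state = GroupRecursion(64)(feats)
    out = DynamicDistribution(64)(feats, state)
    print(out.shape)                                   # torch.Size([5, 64, 32, 32])
```

The recursive update in the sketch mirrors the abstract's "step-by-step propagation" only in spirit: each pass mixes one image's features into the accumulated group state rather than processing all images jointly in a single module.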
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.