Modality-Induced Transfer-Fusion Network for RGB-D and RGB-T Salient Object Detection

Gang Chen; Feng Shao; Xiongli Chai; Hangwei Chen; Qiuping Jiang; Xiangchao Meng; Yo-Sung Ho

IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 4, pp. 1787-1801, published 2022-10-19. DOI: 10.1109/TCSVT.2022.3215979. https://ieeexplore.ieee.org/document/9925217/
The ability to capture the complementary information in multi-modality data is critical to the development of multi-modality salient object detection (SOD). Most existing studies attempt to integrate multi-modality information through various fusion strategies. However, most of these methods ignore the inherent differences among modalities, resulting in poor performance in challenging scenarios. In this paper, we propose a novel Modality-Induced Transfer-Fusion Network (MITF-Net) for RGB-D and RGB-T SOD that fully explores the complementarity in multi-modality data. Specifically, we first deploy a modality transfer fusion (MTF) module to bridge the semantic gap between single- and multi-modality data, and then mine the cross-modality complementarity based on point-to-point structural similarity information. Next, we design a cycle-separated attention (CSA) module to recurrently optimize cross-layer information, measuring the effectiveness of cross-layer features through point-wise convolution-based multi-scale channel attention. Furthermore, we refine boundaries in the decoding stage to obtain high-quality saliency maps with sharp edges. Extensive experiments on 13 RGB-D and RGB-T SOD datasets show that the proposed MITF-Net achieves competitive performance.
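Since only the abstract is reproduced here, the sketch below is merely one plausible reading of "point-wise convolution-based multi-scale channel attention" in PyTorch, not the authors' published CSA module: channel attention is computed at two scales (a globally pooled branch and a per-position branch), each built from 1x1 point-wise convolutions, and the combined map gates the cross-layer features. The class name, two-branch layout, and reduction parameter are all assumptions for illustration.

```python
# A minimal sketch, assuming a PyTorch implementation; module and parameter
# names are hypothetical, inferred only from the abstract's description.
import torch
import torch.nn as nn


class MultiScaleChannelAttention(nn.Module):
    """Illustrative multi-scale channel attention built from point-wise convs."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 1)
        # Global branch: squeeze spatial dims to 1x1, then two point-wise convs
        # produce image-level channel statistics.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )
        # Local branch: the same point-wise convs applied at every spatial
        # location, preserving position-level channel statistics.
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the (B,C,1,1) global map and the (B,C,H,W) local map via
        # broadcasting, squash to [0,1], and gate the input features.
        attn = torch.sigmoid(self.global_branch(x) + self.local_branch(x))
        return x * attn


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)          # a batch of cross-layer features
    gated = MultiScaleChannelAttention(64)(feats)
    print(gated.shape)                           # torch.Size([2, 64, 32, 32])
```

Summing a global (pooled) and a local (per-pixel) branch lets the sigmoid gate weigh each channel using both image-level and position-level evidence, which is one way a network could "measure the effectiveness of cross-layer features" as the abstract describes.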
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.