{"title":"I3Net: RGB-T显著目标检测的密集信息交互网络","authors":"Jia Hou , Hongfa Wen , Shuai Wang , Chenggang Yan","doi":"10.1016/j.imavis.2025.105525","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-modality salient object detection (SOD) is receiving more and more attention in recent years. Infrared thermal images can provide useful information in extreme situations, such as low illumination and cluttered background. Accompany with extra information, we need a more delicate design to properly integrate multi-modal and multi-scale clues. In this paper, we propose an intensively information interaction network (I<sup>3</sup>Net) to perform Red-Green-Blue and Thermal (RGB-T) SOD, which optimizes the performance through modality interaction, level interaction, and scale interaction. Firstly, feature channels from different sources are dynamically selected according to the modality interaction with dynamic merging module. Then, adjacent level interaction is conducted under the guidance of coordinate channel and spatial attention with spatial feature aggregation module. Finally, we deploy pyramid attention module to obtain a more comprehensive scale interaction. Extensive experiments on four RGB-T datasets, VT821, VT1000, VT5000 and VI-RGBT3500, show that the proposed I<sup>3</sup>Net achieves a competitive and excellent performance against 13 state-of-the-art methods in multiple evaluation metrics, with a 1.70%, 1.41%, and 1.54% improvement in terms of weighted F-measure, mean E-measure, and S-measure.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"158 ","pages":"Article 105525"},"PeriodicalIF":4.2000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"I3Net: Intensive information interaction network for RGB-T salient object detection\",\"authors\":\"Jia Hou , Hongfa Wen , Shuai Wang , Chenggang Yan\",\"doi\":\"10.1016/j.imavis.2025.105525\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-modality salient object detection (SOD) is receiving more and more attention in recent years. Infrared thermal images can provide useful information in extreme situations, such as low illumination and cluttered background. Accompany with extra information, we need a more delicate design to properly integrate multi-modal and multi-scale clues. In this paper, we propose an intensively information interaction network (I<sup>3</sup>Net) to perform Red-Green-Blue and Thermal (RGB-T) SOD, which optimizes the performance through modality interaction, level interaction, and scale interaction. Firstly, feature channels from different sources are dynamically selected according to the modality interaction with dynamic merging module. Then, adjacent level interaction is conducted under the guidance of coordinate channel and spatial attention with spatial feature aggregation module. Finally, we deploy pyramid attention module to obtain a more comprehensive scale interaction. 
Extensive experiments on four RGB-T datasets, VT821, VT1000, VT5000 and VI-RGBT3500, show that the proposed I<sup>3</sup>Net achieves a competitive and excellent performance against 13 state-of-the-art methods in multiple evaluation metrics, with a 1.70%, 1.41%, and 1.54% improvement in terms of weighted F-measure, mean E-measure, and S-measure.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"158 \",\"pages\":\"Article 105525\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885625001131\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625001131","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
I3Net: Intensive information interaction network for RGB-T salient object detection
Multi-modality salient object detection (SOD) has received increasing attention in recent years. Infrared thermal images can provide useful information in extreme situations such as low illumination and cluttered backgrounds. With this extra information, however, a more careful design is needed to properly integrate multi-modal and multi-scale cues. In this paper, we propose an intensive information interaction network (I3Net) for Red-Green-Blue and Thermal (RGB-T) SOD, which optimizes performance through modality interaction, level interaction, and scale interaction. First, feature channels from different sources are dynamically selected through modality interaction in a dynamic merging module. Then, adjacent-level interaction is conducted under the guidance of coordinate channel and spatial attention in a spatial feature aggregation module. Finally, a pyramid attention module is deployed to obtain more comprehensive scale interaction. Extensive experiments on four RGB-T datasets (VT821, VT1000, VT5000, and VI-RGBT3500) show that the proposed I3Net achieves competitive performance against 13 state-of-the-art methods on multiple evaluation metrics, with improvements of 1.70%, 1.41%, and 1.54% in weighted F-measure, mean E-measure, and S-measure, respectively.
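The abstract does not detail the internals of the dynamic merging module, but the general idea of dynamically selecting feature channels through modality interaction can be illustrated with a short sketch. Below is a minimal PyTorch illustration under my own assumptions; the module name, gating design, and shapes are hypothetical and are not taken from the paper. A per-channel gate, predicted from pooled descriptors of both modalities, blends RGB and thermal features.

# A minimal sketch of channel-wise dynamic merging between RGB and thermal
# features, in the spirit of the modality interaction described in the
# abstract. All design details here are assumptions, not the paper's
# actual implementation.
import torch
import torch.nn as nn

class DynamicMerging(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Squeeze both modalities into a joint descriptor, then predict a
        # per-channel gate deciding how much each modality contributes.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        # rgb, thermal: (B, C, H, W) feature maps from the two backbones.
        b, c, _, _ = rgb.shape
        desc = torch.cat([rgb.mean(dim=(2, 3)), thermal.mean(dim=(2, 3))], dim=1)
        w = self.gate(desc).view(b, c, 1, 1)  # per-channel weight in [0, 1]
        return w * rgb + (1.0 - w) * thermal  # convex channel-wise blend

fused = DynamicMerging(64)(torch.randn(2, 64, 56, 56), torch.randn(2, 64, 56, 56))
print(fused.shape)  # torch.Size([2, 64, 56, 56])

A convex blend keeps the fused feature on the same scale as its inputs, which is one plausible way to let the network suppress an unreliable modality (e.g., RGB under low illumination) channel by channel.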
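Scale interaction via a pyramid attention module could plausibly combine branches with different receptive fields under a learned spatial attention. The following is again only a hedged sketch: the branch count, dilation rates, and fusion rule are assumptions for illustration, not the paper's actual module.

# A minimal sketch of pyramid-style scale interaction: parallel dilated
# convolutions are fused by a softmax-normalized spatial attention over
# scales. Branch design and fusion are assumptions, not the paper's module.
import torch
import torch.nn as nn

class PyramidAttention(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=dilation keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        # A 1x1 conv predicts one spatial attention map per branch.
        self.attn = nn.Conv2d(channels, len(dilations), kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # weights: (B, S, H, W), summing to 1 over the S scales at each pixel.
        weights = torch.softmax(self.attn(x), dim=1)
        out = sum(
            weights[:, i:i + 1] * branch(x)
            for i, branch in enumerate(self.branches)
        )
        return out + x  # residual path preserves the original-scale feature

out = PyramidAttention(64)(torch.randn(2, 64, 56, 56))
print(out.shape)  # torch.Size([2, 64, 56, 56])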
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.