{"title":"BTMTrack:通过双模板桥接和时间模态候选消除实现稳健的RGB-T跟踪","authors":"Zhongxuan Zhang, Bi Zeng, Xinyu Ni, Yimin Du","doi":"10.1016/j.imavis.2025.105676","DOIUrl":null,"url":null,"abstract":"<div><div>RGB-T tracking leverages the complementary strengths of RGB and thermal infrared (TIR) modalities to handle challenging scenarios, such as low illumination and adverse weather conditions. However, existing methods often struggle to effectively integrate temporal information and perform efficient cross-modal interactions, limiting their adaptability to dynamic targets. In this paper, we propose BTMTrack, a novel RGB-T tracking framework. At its core lies a dual-template backbone and a Temporal-Modal Candidate Elimination (TMCE) strategy. The dual-template backbone enables the effective integration of temporal information. At the same time, the TMCE strategy guides the model to focus on target-relevant tokens by evaluating temporal and modal correlations through attention correlation maps across different modalities. This not only reduces computational overhead but also mitigates the influence of irrelevant background noise. Building on this foundation, we introduce the Temporal Dual-Template Bridging (TDTB) module, which utilizes a cross-modal attention mechanism to process dynamically filtered tokens, thereby enhancing precise cross-modal fusion. This approach further strengthens the interaction between templates and the search region. Extensive experiments conducted on three benchmark datasets demonstrate the effectiveness of BTMTrack. Our method achieves state-of-the-art performance, with a 72.3% precision rate on the LasHeR test set and competitive results on the RGBT210 and RGBT234 datasets.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"162 ","pages":"Article 105676"},"PeriodicalIF":4.2000,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"BTMTrack: Robust RGB-T tracking via dual-template bridging and temporal-modal candidate elimination\",\"authors\":\"Zhongxuan Zhang, Bi Zeng, Xinyu Ni, Yimin Du\",\"doi\":\"10.1016/j.imavis.2025.105676\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>RGB-T tracking leverages the complementary strengths of RGB and thermal infrared (TIR) modalities to handle challenging scenarios, such as low illumination and adverse weather conditions. However, existing methods often struggle to effectively integrate temporal information and perform efficient cross-modal interactions, limiting their adaptability to dynamic targets. In this paper, we propose BTMTrack, a novel RGB-T tracking framework. At its core lies a dual-template backbone and a Temporal-Modal Candidate Elimination (TMCE) strategy. The dual-template backbone enables the effective integration of temporal information. At the same time, the TMCE strategy guides the model to focus on target-relevant tokens by evaluating temporal and modal correlations through attention correlation maps across different modalities. This not only reduces computational overhead but also mitigates the influence of irrelevant background noise. Building on this foundation, we introduce the Temporal Dual-Template Bridging (TDTB) module, which utilizes a cross-modal attention mechanism to process dynamically filtered tokens, thereby enhancing precise cross-modal fusion. This approach further strengthens the interaction between templates and the search region. 
Extensive experiments conducted on three benchmark datasets demonstrate the effectiveness of BTMTrack. Our method achieves state-of-the-art performance, with a 72.3% precision rate on the LasHeR test set and competitive results on the RGBT210 and RGBT234 datasets.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"162 \",\"pages\":\"Article 105676\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-07-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885625002641\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625002641","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
BTMTrack: Robust RGB-T tracking via dual-template bridging and temporal-modal candidate elimination
RGB-T tracking leverages the complementary strengths of RGB and thermal infrared (TIR) modalities to handle challenging scenarios, such as low illumination and adverse weather conditions. However, existing methods often struggle to effectively integrate temporal information and perform efficient cross-modal interactions, limiting their adaptability to dynamic targets. In this paper, we propose BTMTrack, a novel RGB-T tracking framework. At its core lies a dual-template backbone and a Temporal-Modal Candidate Elimination (TMCE) strategy. The dual-template backbone enables the effective integration of temporal information. At the same time, the TMCE strategy guides the model to focus on target-relevant tokens by evaluating temporal and modal correlations through attention correlation maps across different modalities. This not only reduces computational overhead but also mitigates the influence of irrelevant background noise. Building on this foundation, we introduce the Temporal Dual-Template Bridging (TDTB) module, which utilizes a cross-modal attention mechanism to process dynamically filtered tokens, thereby enhancing precise cross-modal fusion. This approach further strengthens the interaction between templates and the search region. Extensive experiments conducted on three benchmark datasets demonstrate the effectiveness of BTMTrack. Our method achieves state-of-the-art performance, with a 72.3% precision rate on the LasHeR test set and competitive results on the RGBT210 and RGBT234 datasets.
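The abstract gives only a high-level description of the method, but the two core ideas can be illustrated with a short sketch. The snippet below is a hypothetical PyTorch illustration, not the authors' code: it ranks search-region tokens by their attention to the template tokens in both modalities and keeps the top fraction (TMCE-style candidate elimination), then bridges the retained RGB and TIR tokens with standard cross-modal attention (TDTB-style). All names, tensor shapes, the keep ratio, and the residual fusion are assumptions made for illustration.

```python
# Hypothetical sketch of the abstract's two ideas; not the authors' implementation.
# Shapes, names, keep_ratio, and the residual fusion are assumed for illustration.
import torch
import torch.nn as nn

def tmce_keep_indices(attn_rgb, attn_tir, n_template, keep_ratio=0.7):
    """Rank search-region tokens by their attention to the (dual) template
    tokens, averaged over heads and over both modalities, and keep the top
    fraction. attn_*: (B, heads, N, N) attention maps, where the first
    n_template tokens are template tokens and the rest are search tokens."""
    rel_rgb = attn_rgb[:, :, n_template:, :n_template].mean(dim=(1, 3))  # (B, N_search)
    rel_tir = attn_tir[:, :, n_template:, :n_template].mean(dim=(1, 3))
    score = 0.5 * (rel_rgb + rel_tir)             # joint temporal-modal relevance
    k = max(1, int(score.shape[1] * keep_ratio))  # number of tokens to retain
    return score.topk(k, dim=1).indices           # (B, k) indices of kept tokens

class CrossModalBridge(nn.Module):
    """Minimal cross-modal attention over the retained tokens: each modality
    queries the other, and the result is fused by residual addition (an
    assumed fusion; the paper's TDTB module may differ)."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_tokens, tir_tokens):
        rgb_out, _ = self.attn(rgb_tokens, tir_tokens, tir_tokens)
        tir_out, _ = self.attn(tir_tokens, rgb_tokens, rgb_tokens)
        return rgb_tokens + rgb_out, tir_tokens + tir_out

if __name__ == "__main__":
    B, H, n_template, n_search, dim = 2, 8, 128, 256, 64
    N = n_template + n_search
    attn_rgb, attn_tir = torch.rand(B, H, N, N), torch.rand(B, H, N, N)
    keep = tmce_keep_indices(attn_rgb, attn_tir, n_template)  # (B, k)
    tokens_rgb = torch.randn(B, n_search, dim)
    tokens_tir = torch.randn(B, n_search, dim)
    idx = keep.unsqueeze(-1).expand(-1, -1, dim)
    kept_rgb = torch.gather(tokens_rgb, 1, idx)  # retain target-relevant tokens
    kept_tir = torch.gather(tokens_tir, 1, idx)
    fused_rgb, fused_tir = CrossModalBridge(dim)(kept_rgb, kept_tir)
    print(fused_rgb.shape, fused_tir.shape)  # torch.Size([2, 179, 64]) each
```

In the actual framework, the elimination would presumably be applied inside the backbone blocks, so that later layers and the TDTB module only process the retained, target-relevant tokens; the sketch conveys only the data flow.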
Journal Introduction:
Image and Vision Computing has as its primary aim the provision of an effective medium of interchange for the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to foster a deeper understanding of the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.