Efficient RGBT Tracking via Multi-Path Mamba Fusion Network

Fanghua Hong; Wanyu Wang; Andong Lu; Lei Liu; Qunjing Wang

DOI: 10.1109/LSP.2025.3563123
Journal: IEEE Signal Processing Letters, vol. 32, pp. 1790-1794
Publication date: 2025-04-21
Impact factor: 3.2 (JCR Q2, Engineering, Electrical & Electronic)
URL: https://ieeexplore.ieee.org/document/10971948/
RGBT tracking aims to fully exploit the complementary advantages of the visible and infrared modalities to achieve robust tracking, so the design of the multimodal fusion network is crucial. However, existing methods typically adopt CNNs or Transformer networks to construct the fusion network, which makes it difficult to balance performance and efficiency. To overcome this issue, we introduce an innovative visual state space (VSS) model, represented by Mamba, for RGBT tracking. In particular, we design a novel multi-path Mamba fusion network that achieves robust multimodal fusion capability while maintaining linear overhead. First, we design a multi-path Mamba layer to fully fuse the two modalities from both global and local perspectives. Second, to alleviate the inadequate VSS modeling along the channel dimension, we introduce a simple yet effective channel swapping layer. Extensive experiments conducted on four public RGBT tracking datasets demonstrate that our method surpasses existing state-of-the-art trackers. Notably, our fusion method achieves higher tracking performance than the well-known Transformer-based fusion approach (TBSI), while reducing the parameter count and computational cost by 92.8% and 80.5%, respectively.
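The abstract does not spell out how the channel swapping layer is built, but the general idea can be illustrated with a minimal, hedged sketch. The PyTorch module below exchanges a fixed fraction of channels between the RGB and thermal feature maps so that each modality branch subsequently sees cross-modal channel context; the module name, the 50% split ratio, and the (B, C, H, W) tensor layout are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ChannelSwap(nn.Module):
    """Illustrative channel-swapping layer (assumption, not the paper's exact design):
    exchange the first `swap_ratio` fraction of channels between the RGB and
    thermal (TIR) feature maps to inject cross-modal information along the
    channel dimension."""

    def __init__(self, swap_ratio: float = 0.5):
        super().__init__()
        self.swap_ratio = swap_ratio

    def forward(self, rgb: torch.Tensor, tir: torch.Tensor):
        # rgb, tir: (B, C, H, W) feature maps from the two modality branches
        c = rgb.shape[1]
        k = int(c * self.swap_ratio)  # number of channels to exchange
        rgb_out = torch.cat([tir[:, :k], rgb[:, k:]], dim=1)
        tir_out = torch.cat([rgb[:, :k], tir[:, k:]], dim=1)
        return rgb_out, tir_out


if __name__ == "__main__":
    swap = ChannelSwap(swap_ratio=0.5)
    rgb_feat = torch.randn(2, 64, 16, 16)
    tir_feat = torch.randn(2, 64, 16, 16)
    rgb_mixed, tir_mixed = swap(rgb_feat, tir_feat)
    print(rgb_mixed.shape, tir_mixed.shape)  # both (2, 64, 16, 16)
```

Because the swap is a parameter-free tensor rearrangement, it adds no learnable weights and negligible compute, which is consistent with the efficiency emphasis of the abstract; the actual placement of such a layer relative to the multi-path Mamba layers would follow the paper.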
Journal introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.