Title: RGB-T tracking with frequency hybrid awareness
Authors: Lei Lei, Xianxian Li
Journal: Image and Vision Computing, Volume 152, Article 105330 (Q2, Computer Science, Artificial Intelligence; Impact Factor 4.2)
DOI: 10.1016/j.imavis.2024.105330
Publication date: 2024-11-06
URL: https://www.sciencedirect.com/science/article/pii/S0262885624004359

Abstract:
Recently, impressive progress has been made with transformer-based RGB-T trackers, owing to the transformer's effectiveness in capturing low-frequency information (i.e., high-level semantic information). However, several studies have revealed that the transformer is limited in capturing high-frequency information (i.e., low-level texture and edge details), which restricts a tracker's ability to precisely match target details within the search area. To address this issue, we propose a Frequency Hybrid Awareness modeling RGB-T Tracker, abbreviated as FHAT. Specifically, FHAT integrates convolution and max pooling, both effective at capturing high-frequency information, into the transformer architecture. In this way, it strengthens high-frequency features and improves the model's perception of fine detail. Additionally, to enhance the complementary effect between the two modalities, the tracker uses only low-frequency information from both modalities for modality interaction, which avoids interaction errors caused by inconsistent local details across modalities. The high-frequency features and the interacted low-frequency features are then fused, allowing the model to adaptively enhance the frequency characteristics expressed by each modality. Through extensive experiments on two mainstream RGB-T tracking benchmarks, our method demonstrates competitive performance.
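The frequency-hybrid idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual FHAT architecture: here token features are approximated by a 1-D list of floats, the high-frequency branch by local max pooling minus a local mean (a stand-in for the conv + max-pooling branch), and the low-frequency branch by local averaging (a stand-in for the transformer's low-pass attention). All function names are hypothetical.

```python
def hf_branch(tokens, k=3):
    """High-frequency proxy: local max minus local mean over a window of
    size k, emphasising edge/texture-like variation in the sequence."""
    n = len(tokens)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k // 2), min(n, i + k // 2 + 1)
        window = tokens[lo:hi]
        out.append(max(window) - sum(window) / len(window))
    return out

def lf_branch(tokens, k=3):
    """Low-frequency proxy: local averaging, yielding a smooth,
    semantic-like signal."""
    n = len(tokens)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k // 2), min(n, i + k // 2 + 1)
        window = tokens[lo:hi]
        out.append(sum(window) / len(window))
    return out

def frequency_hybrid(rgb, thermal):
    """Interact the modalities only on their low-frequency parts (which are
    consistent across modalities), then re-inject each modality's
    high-frequency detail into the fused result."""
    lf = [(a + b) / 2 for a, b in zip(lf_branch(rgb), lf_branch(thermal))]
    return [l + hr + ht
            for l, hr, ht in zip(lf, hf_branch(rgb), hf_branch(thermal))]
```

The key design point mirrored here is that cross-modal interaction happens on the smoothed (low-frequency) signal, so inconsistent local details between RGB and thermal do not corrupt the interaction, while each modality's high-frequency detail is preserved and added back afterwards.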
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.