Fei Pan , Lianyu Zhao , Chenglin Wang , Chunlei Du , Xiaolei Zhao
Title: MSTGT: Multi-scale spatio-temporal guidance for visual tracking
DOI: 10.1016/j.neucom.2025.131583
Journal: Neurocomputing, Vol. 657, Article 131583 (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 6.5)
Publication date: 2025-09-26
Article link: https://www.sciencedirect.com/science/article/pii/S0925231225022556
Code: https://github.com/capf-2011/MSTGT
Citations: 0
Abstract
Tracking targets in complex scenarios with limited data samples is a significant research challenge. Nevertheless, most trackers concentrate on intricate model architectures or template-updating strategies, overlooking deeper exploitation of training samples and efficient use of spatio-temporal target information. To alleviate this problem, we propose MSTGT, a novel visual tracking framework tailored for complex scenarios that integrates mixed data sampling with multi-scale spatio-temporal guidance. Specifically, we employ a video sequence sampling and feature mixing strategy to simulate complex scenarios, enriching the representation of video sequences. Concurrently, our multi-scale visual cue encoder harnesses multi-scale target information to strengthen feature representation and cue construction. Furthermore, our multi-scale spatio-temporal guidance encoder integrates spatial and temporal dimensions with multi-scale information, precisely guiding the prediction of target trajectories. This not only improves the handling of complex motion patterns but also avoids the need for elaborate online updating strategies. MSTGT achieves state-of-the-art performance on six benchmarks while running at real-time speed. Code is available at https://github.com/capf-2011/MSTGT.
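The abstract does not detail how the "video sequence sampling and feature mixing strategy" is implemented; the sketch below is a hypothetical illustration only, assuming a mixup-style convex blend of per-frame feature vectors sampled from a sequence. The function name `sample_and_mix` and all parameters are invented for illustration and do not come from the paper or its repository.

```python
import numpy as np

def sample_and_mix(video_feats, num_frames=4, alpha=0.5, rng=None):
    """Sample frames from a sequence and blend their features pairwise.

    video_feats: array of shape (T, D) holding T per-frame feature vectors.
    Returns an array of shape (num_frames, D) of mixed feature vectors.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = video_feats.shape[0]
    # Draw two independent sets of frame indices from the sequence.
    idx_a = rng.choice(T, size=num_frames, replace=True)
    idx_b = rng.choice(T, size=num_frames, replace=True)
    # Convex combination of the two sampled feature sets — a mixup-style
    # blend intended to simulate appearance variation across frames.
    lam = rng.beta(alpha, alpha, size=(num_frames, 1))
    return lam * video_feats[idx_a] + (1.0 - lam) * video_feats[idx_b]
```

Because each output is a convex combination of two sampled feature vectors, every mixed value stays within the range of the original features, while the blend exposes the tracker to intermediate appearances it never observed directly.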
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering neurocomputing theory, practice, and applications.