HEART: Historically Information Embedding and Subspace Re-Weighting Transformer-Based Tracking
Tianpeng Liu; Jing Li; Amin Beheshti; Jia Wu; Jun Chang; Beihang Song; Lezhi Lian
IEEE Transactions on Big Data, vol. 11, no. 2, pp. 566-577. Published 2024-07-05. DOI: 10.1109/TBDATA.2024.3423672. Available at: https://ieeexplore.ieee.org/document/10587120/
Citations: 0
Abstract
Transformer-based trackers offer significant potential for integrating semantic interdependence between template and search features in tracking tasks. Transformers possess inherent capabilities for processing long sequences and extracting correlations within them. Several researchers have explored the feasibility of incorporating Transformers to model continuously changing search areas in tracking tasks. However, these approaches substantially increase the computational cost of an already resource-intensive Transformer. Additionally, existing Transformer-based trackers rely solely on mechanically employing multi-head attention to obtain representations in different subspaces, without any inherent bias. To address these challenges, we propose HEART (Historical Information Embedding And Subspace Re-weighting Tracker). Our method embeds historical information into the queries in a lightweight and Markovian manner to extract discriminative attention maps for robust tracking. Furthermore, we develop a multi-head attention distribution mechanism to retrieve the most promising subspace weights for tracking tasks. HEART has demonstrated its effectiveness on five datasets: OTB-100, LaSOT, UAV123, TrackingNet, and GOT-10k.
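To make the two ideas in the abstract concrete, the sketch below is a minimal, illustrative interpretation only; it is not the authors' released code, and the module name, dimensions, momentum factor, and the exact way history is folded into the queries are all assumptions. It shows (1) a first-order ("Markovian") blend of the current query with a running query memory, and (2) per-head softmax weights that re-scale each attention subspace before the output projection.

```python
# Illustrative sketch -- NOT the HEART implementation. All names and
# hyperparameters (dim, num_heads, momentum) are assumed for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReweightedAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8, momentum: float = 0.9):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.momentum = momentum                                  # assumed blending factor
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.head_logits = nn.Parameter(torch.zeros(num_heads))   # learnable per-head weights
        self.register_buffer("hist_q", torch.zeros(1, 1, dim))    # running query memory

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        # (1) fold historical information into the query (first-order dependence only)
        q = self.momentum * q + (1.0 - self.momentum) * self.hist_q
        self.hist_q = q.mean(dim=(0, 1), keepdim=True).detach()

        # split into heads: (B, heads, N, head_dim)
        def split(t: torch.Tensor) -> torch.Tensor:
            return t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        out = attn.softmax(dim=-1) @ v                             # (B, heads, N, head_dim)

        # (2) re-weight each head's subspace before merging the heads
        w = F.softmax(self.head_logits, dim=0).view(1, self.num_heads, 1, 1)
        out = (out * w).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


if __name__ == "__main__":
    tokens = torch.randn(2, 64, 256)                               # fused template + search tokens
    print(ReweightedAttention()(tokens).shape)                     # torch.Size([2, 64, 256])
```

In this reading, the history embedding costs only one extra add per query token rather than additional attention layers, which is consistent with the abstract's claim of a lightweight mechanism; the per-head weights give the network an explicit bias over attention subspaces instead of treating all heads identically.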
Journal description:
The IEEE Transactions on Big Data publishes peer-reviewed articles focusing on big data. These articles present innovative research ideas and application results across disciplines, including novel theories, algorithms, and applications. Research areas cover a wide range, such as big data analytics, visualization, curation, management, semantics, infrastructure, standards, performance analysis, intelligence extraction, scientific discovery, security, privacy, and legal issues specific to big data. The journal also prioritizes applications of big data in fields generating massive datasets.