SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking
Siyuan Li, Lei Ke, Yung-Hsu Yang, Luigi Piccinelli, Mattia Segù, Martin Danelljan, Luc Van Gool
arXiv:2409.11235 · arXiv - CS - Computer Vision and Pattern Recognition · 2024-09-17
Abstract
Open-vocabulary Multiple Object Tracking (MOT) aims to generalize trackers to novel categories that are not in the training set. Currently, the best-performing methods rely mainly on pure appearance matching. Because motion patterns in large-vocabulary scenarios are complex and classification of novel objects is unstable, existing methods either ignore motion and semantic cues or apply them heuristically in the final matching step. In this paper, we present SLAck, a unified framework that jointly considers semantic, location, and appearance priors in the early steps of association and learns how to integrate all valuable information through a lightweight spatial and temporal object graph. Our method eliminates complex post-processing heuristics for fusing different cues and significantly boosts association performance for large-scale open-vocabulary tracking. Without bells and whistles, we outperform previous state-of-the-art methods on novel-class tracking on the open-vocabulary MOT and TAO TETA benchmarks. Our code is available at https://github.com/siyuanliii/SLAck.
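To make the abstract's core idea concrete, below is a minimal, hypothetical PyTorch sketch of early fusion of semantic, location, and appearance cues followed by spatial (within-frame) and temporal (cross-frame) message passing to produce pairwise association scores. All module names, dimensions, and tensor layouts here are assumptions for illustration only; this is not the authors' implementation (see the linked repository for that).

# Hypothetical sketch: early fusion of semantic, location, and appearance cues
# with spatial/temporal object interaction for association. Names and shapes
# are illustrative assumptions, not SLAck's actual code.
import torch
import torch.nn as nn


class FusedAssociationHead(nn.Module):
    def __init__(self, app_dim=256, sem_dim=512, d_model=256, n_heads=8):
        super().__init__()
        # Project each cue into a shared space, then fuse by summation.
        self.app_proj = nn.Linear(app_dim, d_model)   # appearance (e.g. RoI embedding)
        self.sem_proj = nn.Linear(sem_dim, d_model)   # semantics (e.g. class/text embedding)
        self.loc_proj = nn.Linear(4, d_model)         # location: normalized (cx, cy, w, h)
        # Message passing among objects of the same frame ("spatial" interaction).
        self.spatial = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Cross-attention between objects of the two frames ("temporal" interaction).
        self.temporal = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def encode(self, app, sem, box):
        # Early fusion of the three cues, then within-frame interaction.
        x = self.app_proj(app) + self.sem_proj(sem) + self.loc_proj(box)
        return self.spatial(x.unsqueeze(0)).squeeze(0)  # (N, d_model)

    def forward(self, det, trk):
        # det / trk: dicts with 'app', 'sem', 'box' tensors for one frame each.
        d = self.encode(det["app"], det["sem"], det["box"])
        t = self.encode(trk["app"], trk["sem"], trk["box"])
        # Cross-frame attention lets detections aggregate track context and vice versa.
        d_ctx, _ = self.temporal(d.unsqueeze(0), t.unsqueeze(0), t.unsqueeze(0))
        t_ctx, _ = self.temporal(t.unsqueeze(0), d.unsqueeze(0), d.unsqueeze(0))
        # Pairwise association logits; training would supervise these with identity labels.
        return d_ctx.squeeze(0) @ t_ctx.squeeze(0).T  # (N_det, N_trk)


if __name__ == "__main__":
    head = FusedAssociationHead()
    det = {"app": torch.randn(5, 256), "sem": torch.randn(5, 512), "box": torch.rand(5, 4)}
    trk = {"app": torch.randn(3, 256), "sem": torch.randn(3, 512), "box": torch.rand(3, 4)}
    print(head(det, trk).shape)  # torch.Size([5, 3])

Because the cues are fused before interaction and scoring, no hand-tuned post-processing rule is needed to weigh motion, semantics, or appearance at matching time; the network learns the weighting end to end, which is the point the abstract makes.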