Synergistic-aware cascaded association and trajectory refinement for multi-object tracking

IF 4.2 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Hui Li, Su Qin, Saiyu Li, Ying Gao, Yanli Wu
{"title":"Synergistic-aware cascaded association and trajectory refinement for multi-object tracking","authors":"Hui Li,&nbsp;Su Qin,&nbsp;Saiyu Li,&nbsp;Ying Gao,&nbsp;Yanli Wu","doi":"10.1016/j.imavis.2025.105695","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-object tracking (MOT) is a pivotal research area in computer vision. Effectively tracking objects in scenarios with frequent occlusions and crowded scenes has become a key challenge in MOT tasks. Existing tracking-by-detection (TbD) methods often rely on simple two-frame association techniques. However, in situations involving scale transformation or requiring long-term association, frequent occlusion between objects can lead to ID switches, especially in scenes with dense or highly intersecting objects. Therefore, we propose a synergistic-aware cascaded association and trajectory refinement method (SCTrack) for multi-object tracking. In the data association stage, we propose a synergistic-aware cascaded association method to construct a multi-perception affinity matrix for object association, and introduce the multi-frame collaborative distance calculation to enhance the robustness. To address the problem of trajectory fragmentation, we propose a dynamic confidence-driven trajectory refinement post-processing method. This method integrates confidence and feature information to calculate trajectory association, repair fragmented trajectories, and improve the overall robustness of the tracking algorithm. Extensive experiments on the MOT17, MOT20, and DanceTrack datasets validate SCTrack’s competitive performance.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"162 ","pages":"Article 105695"},"PeriodicalIF":4.2000,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625002835","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Multi-object tracking (MOT) is a pivotal research area in computer vision. Effectively tracking objects in scenarios with frequent occlusions and crowded scenes has become a key challenge in MOT tasks. Existing tracking-by-detection (TbD) methods often rely on simple two-frame association techniques. However, in situations involving scale transformation or requiring long-term association, frequent occlusion between objects can lead to ID switches, especially in scenes with dense or highly intersecting objects. Therefore, we propose a synergistic-aware cascaded association and trajectory refinement method (SCTrack) for multi-object tracking. In the data association stage, we propose a synergistic-aware cascaded association method to construct a multi-perception affinity matrix for object association, and introduce the multi-frame collaborative distance calculation to enhance the robustness. To address the problem of trajectory fragmentation, we propose a dynamic confidence-driven trajectory refinement post-processing method. This method integrates confidence and feature information to calculate trajectory association, repair fragmented trajectories, and improve the overall robustness of the tracking algorithm. Extensive experiments on the MOT17, MOT20, and DanceTrack datasets validate SCTrack’s competitive performance.
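The abstract describes the method only at a high level, so the following is a minimal, hypothetical sketch of a generic tracking-by-detection step in the same spirit: it fuses an IoU motion cue with a cosine appearance cue into a single affinity matrix, solves the assignment with the Hungarian algorithm, and then stitches fragmented tracklets using confidence and feature similarity. The helper names (`fused_cost`, `stitch_tracklets`), the weighting scheme, thresholds, and tracklet fields are illustrative assumptions, not SCTrack's actual formulation.

```python
# Illustrative sketch only: NOT the authors' SCTrack implementation.
# Fuses IoU and appearance affinities for association, then applies a toy
# confidence/feature-based tracklet-stitching pass as post-processing.
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou_matrix(tracks_xyxy, dets_xyxy):
    """Pairwise IoU between track boxes and detection boxes (N x M)."""
    ious = np.zeros((len(tracks_xyxy), len(dets_xyxy)), dtype=np.float32)
    for i, t in enumerate(tracks_xyxy):
        for j, d in enumerate(dets_xyxy):
            x1, y1 = max(t[0], d[0]), max(t[1], d[1])
            x2, y2 = min(t[2], d[2]), min(t[3], d[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_t = (t[2] - t[0]) * (t[3] - t[1])
            area_d = (d[2] - d[0]) * (d[3] - d[1])
            ious[i, j] = inter / (area_t + area_d - inter + 1e-9)
    return ious


def fused_cost(tracks_xyxy, track_feats, dets_xyxy, det_feats, w_app=0.5):
    """Combine IoU and cosine appearance similarity into one cost matrix."""
    iou = iou_matrix(tracks_xyxy, dets_xyxy)
    tf = track_feats / (np.linalg.norm(track_feats, axis=1, keepdims=True) + 1e-9)
    df = det_feats / (np.linalg.norm(det_feats, axis=1, keepdims=True) + 1e-9)
    app = np.clip(tf @ df.T, 0.0, 1.0)          # cosine similarity, clipped to [0, 1]
    affinity = (1.0 - w_app) * iou + w_app * app
    return 1.0 - affinity                        # lower cost = better match


def associate(cost, max_cost=0.8):
    """Hungarian assignment, discarding pairs whose cost exceeds max_cost."""
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]


def stitch_tracklets(tracklets, sim_thresh=0.7, max_gap=30):
    """Toy post-processing: absorb a lower-confidence fragment into an earlier
    tracklet when the time gap is small and their (unit-norm) mean appearance
    features are similar. Loosely inspired by confidence-driven refinement."""
    merged = []
    for frag in sorted(tracklets, key=lambda t: t["start"]):
        host = None
        for cand in merged:
            gap = frag["start"] - cand["end"]
            sim = float(np.dot(cand["feat"], frag["feat"]))
            if 0 < gap <= max_gap and sim >= sim_thresh and frag["conf"] < cand["conf"]:
                host = cand
                break
        if host is not None:
            host["end"] = frag["end"]            # repair the fragmented trajectory
        else:
            merged.append(dict(frag))
    return merged
```

In this sketch, per-frame association would call `fused_cost` followed by `associate`, while `stitch_tracklets` would run once over all completed tracklets as a post-processing pass; the fixed weight `w_app` and the thresholds stand in for whatever adaptive scheme the paper actually uses.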
Source journal
Image and Vision Computing (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 8.50
Self-citation rate: 8.50%
Articles per year: 143
Review time: 7.8 months
Journal introduction: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.