CAMTrack: a combined appearance-motion method for multiple-object tracking

Impact factor: 2.4 · CAS Tier 4 (Computer Science) · JCR Q3, Computer Science, Artificial Intelligence
Duy Cuong Bui, Ngan Linh Nguyen, Anh Hiep Hoang, Myungsik Yoo
{"title":"CAMTrack: a combined appearance-motion method for multiple-object tracking","authors":"Duy Cuong Bui, Ngan Linh Nguyen, Anh Hiep Hoang, Myungsik Yoo","doi":"10.1007/s00138-024-01548-w","DOIUrl":null,"url":null,"abstract":"<p>Object tracking has emerged as an essential process for various applications in the field of computer vision, such as autonomous driving. Recently, object tracking technology has experienced rapid growth, particularly its applications in self-driving vehicles. Tracking systems typically follow the detection-based tracking paradigm, which is affected by the detection results. Although deep learning has led to significant improvements in object detection, data association remains dependent on factors such as spatial location, motion, and appearance, to associate new observations with existing tracks. In this study, we introduce a novel approach called Combined Appearance-Motion Tracking (CAMTrack) to enhance data association by integrating object appearances and their corresponding movements. The proposed tracking method utilizes an appearance-motion model using an appearance-affinity network and an Interactive Multiple Model (IMM). We deploy the appearance model to address the visual affinity between objects across frames and employed the motion model to incorporate motion constraints to obtain robust position predictions under maneuvering movements. Moreover, we also propose a Two-phase association algorithm which is an effective way to recover lost tracks back from previous frames. CAMTrack was evaluated on the widely recognized object tracking benchmarks-KITTI and MOT17. The results showed the superior performance of the proposed method, highlighting its potential to contribute to advances in object tracking.</p>","PeriodicalId":51116,"journal":{"name":"Machine Vision and Applications","volume":"122 1","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine Vision and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00138-024-01548-w","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Object tracking has emerged as an essential process for various applications in the field of computer vision, such as autonomous driving. Recently, object tracking technology has experienced rapid growth, particularly in its applications to self-driving vehicles. Tracking systems typically follow the detection-based tracking paradigm, so their performance is affected by the detection results. Although deep learning has led to significant improvements in object detection, data association still depends on factors such as spatial location, motion, and appearance to associate new observations with existing tracks. In this study, we introduce a novel approach called Combined Appearance-Motion Tracking (CAMTrack) to enhance data association by integrating object appearances and their corresponding movements. The proposed tracking method uses an appearance-motion model built from an appearance-affinity network and an Interactive Multiple Model (IMM). We deploy the appearance model to capture the visual affinity between objects across frames and employ the motion model to incorporate motion constraints, yielding robust position predictions under maneuvering movements. Moreover, we propose a two-phase association algorithm that effectively recovers lost tracks from previous frames. CAMTrack was evaluated on the widely recognized object tracking benchmarks KITTI and MOT17. The results showed the superior performance of the proposed method, highlighting its potential to contribute to advances in object tracking.
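Since the abstract only outlines the method, the following is a minimal, hypothetical Python sketch of how a two-phase association step combining appearance affinity with motion-predicted positions might look. All function names, thresholds, and the cost formulation are assumptions for illustration; the paper's actual appearance network, IMM motion model, and matching rules are not specified here.

```python
# Hypothetical sketch: two-phase association over a combined appearance-motion cost.
import numpy as np
from scipy.optimize import linear_sum_assignment


def combined_cost(track_embs, det_embs, track_preds, det_boxes, w_app=0.5):
    """Blend an appearance distance (cosine) with a motion distance (center gap)."""
    # Appearance: 1 - cosine similarity between L2-normalised embeddings.
    app = 1.0 - track_embs @ det_embs.T
    # Motion: distance between predicted track centers and detection centers.
    diff = track_preds[:, None, :2] - det_boxes[None, :, :2]
    mot = np.linalg.norm(diff, axis=-1) / 100.0  # assumed normalisation scale
    return w_app * app + (1.0 - w_app) * mot


def associate(cost, max_cost=0.7):
    """Hungarian matching; reject pairs whose cost exceeds a threshold."""
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_t = {m[0] for m in matches}
    matched_d = {m[1] for m in matches}
    un_t = [r for r in range(cost.shape[0]) if r not in matched_t]
    un_d = [c for c in range(cost.shape[1]) if c not in matched_d]
    return matches, un_t, un_d


def two_phase_association(active, lost, det_embs, det_boxes):
    """Phase 1: match detections to active tracks using appearance + motion.
    Phase 2: try to recover recently lost tracks with the leftover detections."""
    cost1 = combined_cost(active["embs"], det_embs, active["preds"], det_boxes)
    matches, unmatched_tracks, unmatched_dets = associate(cost1)

    # Phase 2 uses appearance only, assuming motion predictions of lost tracks are stale.
    cost2 = 1.0 - lost["embs"] @ det_embs[unmatched_dets].T
    m2, _, still_unmatched = associate(cost2, max_cost=0.4)
    recovered = [(lt, unmatched_dets[d]) for lt, d in m2]
    new_detections = [unmatched_dets[d] for d in still_unmatched]
    return matches, recovered, unmatched_tracks, new_detections
```

In this sketch the recovery phase relies on appearance alone, on the assumption that motion predictions degrade after several missed frames; the paper may weight the two cues differently or use a different recovery criterion.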

Source journal: Machine Vision and Applications (Engineering & Technology – Electrical & Electronic Engineering)
CiteScore: 6.30
Self-citation rate: 3.00%
Articles per year: 84
Average review time: 8.7 months
Journal description: Machine Vision and Applications publishes high-quality technical contributions in machine vision research and development. Specifically, the editors encourage submissions in all applications and engineering aspects of image-related computing. In particular, original contributions dealing with scientific, commercial, industrial, military, and biomedical applications of machine vision are all within the scope of the journal. Particular emphasis is placed on engineering and technology aspects of image processing and computer vision. The following aspects of machine vision applications are of interest: algorithms, architectures, VLSI implementations, AI techniques and expert systems for machine vision, front-end sensing, multidimensional and multisensor machine vision, real-time techniques, image databases, virtual reality and visualization. Papers must include a significant experimental validation component.