Camera-LiDAR Fusion Based Three-Stages Data Association Framework for 3D Multi-Object Tracking

Zeguo Fu, Huiliang Shang, Liang Song, Zengwen Li, Changxue Chen
{"title":"Camera-LiDAR Fusion Based Three-Stages Data Association Framework for 3D Multi-Object Tracking","authors":"Zeguo Fu, Huiliang Shang, Liang Song, Zengwen Li, Changxue Chen","doi":"10.1109/INSAI56792.2022.00037","DOIUrl":null,"url":null,"abstract":"3D multi-object tracking (MOT) ensures safe and efficient motion planning and vehicle navigation and plays an important role in perception systems in autonomous driving. Currently MOT is divided into tracking by detection and end-to-end, but most of them are tracking by detection using only single depth sensor such as LiDAR to detect and track objects. However, LiDAR has the limitation of not being able to obtain information about the appearance of the object due to the lack of pixel information, which can lead to obtaining inaccurate detection results thus leading to erratic tracking results. Therefore, in this paper, we propose a novel 3D MOT framework that combines the unique detection advantages of cameras and LiDAR. To avoid the IDs generated by the early death of the detection of the same object that produced low scores in successive frames, we design a 3D MOT framework with three-stages data association. And we also design a data association metric based on 3D IoU and Mahalanobis distance. The camera-LiDAR fusion-based 3D MOT framework we propose proves its superiority and flexibility by quantitative experiments and ablation experiments.","PeriodicalId":318264,"journal":{"name":"2022 2nd International Conference on Networking Systems of AI (INSAI)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 2nd International Conference on Networking Systems of AI (INSAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INSAI56792.2022.00037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

3D multi-object tracking (MOT) underpins safe and efficient motion planning and vehicle navigation, and plays an important role in the perception systems of autonomous driving. Current MOT methods are divided into tracking-by-detection and end-to-end approaches, but most are tracking-by-detection methods that rely on a single depth sensor, such as LiDAR, to detect and track objects. However, LiDAR lacks pixel information and therefore cannot capture an object's appearance, which can yield inaccurate detections and, in turn, erratic tracking. In this paper, we therefore propose a novel 3D MOT framework that combines the complementary detection strengths of cameras and LiDAR. To avoid the ID switches caused by prematurely terminating tracks whose detections score low over several consecutive frames, we design a 3D MOT framework with three-stage data association, together with a data association metric based on 3D IoU and Mahalanobis distance. Quantitative and ablation experiments demonstrate the superiority and flexibility of the proposed camera-LiDAR fusion-based 3D MOT framework.
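
The paper publishes no code, so the following Python sketch is only an illustration of the two ideas named in the abstract: an affinity metric that blends 3D IoU with Mahalanobis distance, and a three-stage association cascade that gives low-scoring detections and recently terminated tracks a second chance before a new ID is created. All function names, thresholds, score cut-offs, and the exact blending rule below are assumptions, not the authors' formulation.

    # Illustrative sketch, not the authors' code: the blending rule, thresholds,
    # and the exact staging are assumptions inferred from the abstract.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import mahalanobis

    def bev_iou(a, b):
        """Axis-aligned bird's-eye-view IoU for boxes (x, y, l, w).
        A simplification: a full 3D IoU would also account for yaw and height."""
        ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
        bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = a[2]*a[3] + b[2]*b[3] - inter
        return inter / union if union > 0.0 else 0.0

    def affinity(det, trk, cov_inv, alpha=0.5):
        """Blend BEV IoU with a Mahalanobis term on the box centers.
        exp(-d) squashes the distance into (0, 1] so it is commensurate with IoU."""
        d = mahalanobis(det[:2], trk[:2], cov_inv)
        return alpha * bev_iou(det, trk) + (1.0 - alpha) * np.exp(-d)

    def match(dets, trks, cov_inv, thresh):
        """Hungarian assignment on the affinity matrix; pairs below thresh are cut."""
        if not dets or not trks:
            return [], list(range(len(dets))), list(range(len(trks)))
        aff = np.array([[affinity(d, t, cov_inv) for t in trks] for d in dets])
        rows, cols = linear_sum_assignment(-aff)  # maximize total affinity
        pairs = [(r, c) for r, c in zip(rows, cols) if aff[r, c] >= thresh]
        md = {r for r, _ in pairs}
        mt = {c for _, c in pairs}
        return (pairs,
                [i for i in range(len(dets)) if i not in md],
                [j for j in range(len(trks)) if j not in mt])

    def three_stage_associate(dets, scores, live, dead, cov_inv):
        """Stage 1: high-score detections vs. live tracks.
        Stage 2: low-score detections vs. tracks left unmatched by stage 1,
                 so an object that briefly scores low keeps its ID.
        Stage 3: still-unmatched high-score detections vs. recently terminated
                 tracks, recovering IDs that would otherwise die early.
        Returned indices are local to each stage's detection/track subsets."""
        hi = [i for i, s in enumerate(scores) if s >= 0.5]
        lo = [i for i, s in enumerate(scores) if s < 0.5]
        m1, u_hi, u_trk = match([dets[i] for i in hi], live, cov_inv, 0.3)
        m2, _, _ = match([dets[i] for i in lo],
                         [live[j] for j in u_trk], cov_inv, 0.2)
        m3, _, _ = match([dets[hi[i]] for i in u_hi], dead, cov_inv, 0.2)
        return m1, m2, m3

    # Tiny smoke test with made-up boxes (x, y, l, w) and an identity covariance.
    dets = [np.array([0.0, 0.0, 4.0, 2.0]), np.array([10.0, 5.0, 4.0, 2.0])]
    live = [np.array([0.2, 0.1, 4.0, 2.0])]
    print(three_stage_associate(dets, [0.9, 0.4], live, [], np.eye(2)))

In this sketch, exp(-d) maps the Mahalanobis distance into (0, 1] so the two terms can be averaged on a common scale; the paper's metric may instead gate one measure with the other or use different thresholds per stage.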