3D Car Tracking using Fused Data in Traffic Scenes for Autonomous Vehicle

Can Chen, L. Z. Fragonara, A. Tsourdos
DOI: 10.5220/0007674203120318
Published in: International Conference on Vehicle Technology and Intelligent Transport Systems, 2019-05-03
Citations: 0

Abstract

Car tracking in a traffic environment is a crucial task for autonomous vehicles. Through tracking, a self-driving car can predict each car's motion and trajectory in the traffic scene, which is one of the key components of traffic scene understanding. Currently, 2D vision-based object tracking is still the most popular method; however, multiple sensor modalities (e.g. cameras, Lidar, Radar) provide richer information about the surroundings (geometric and color features) and offer significant advantages for tracking. We present a 3D car tracking method that fuses data from different sensors (cameras, Lidar, GPS/IMU) to track static and dynamic cars with 3D bounding boxes. Fed with images and the 3D point cloud, a 3D car detector and a spatial transform module are first applied to estimate the current location, dimensions, and orientation of each surrounding car in each frame in the 3D world coordinate system, followed by a 3D Kalman filter that predicts the location, dimensions, orientation, and velocity of each corresponding car at the next time step. The predictions from the Kalman filter are used to re-identify previously detected cars in the next frame via the Hungarian algorithm. We conduct experiments on the KITTI benchmark to evaluate the tracking performance and the effectiveness of our method.
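The abstract's 3D Kalman filter step can be sketched as a standard constant-velocity predict. This is an illustrative assumption: the paper does not specify its state vector or noise parameters, so the state layout `[x, y, z, l, w, h, yaw, vx, vy, vz]` and the noise matrices below are hypothetical.

```python
import numpy as np

def make_cv_transition(dt: float) -> np.ndarray:
    """Constant-velocity transition matrix for a 10-D box state
    [x, y, z, l, w, h, yaw, vx, vy, vz]: position integrates velocity;
    dimensions, yaw, and velocity are held constant."""
    F = np.eye(10)
    F[0, 7] = dt  # x += vx * dt
    F[1, 8] = dt  # y += vy * dt
    F[2, 9] = dt  # z += vz * dt
    return F

def kalman_predict(x: np.ndarray, P: np.ndarray,
                   F: np.ndarray, Q: np.ndarray):
    """Standard Kalman predict step: propagate state mean and covariance."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

if __name__ == "__main__":
    dt = 0.1
    F = make_cv_transition(dt)
    # Example track: car at (10, 5, 0) moving at (2, -1, 0) m/s.
    x = np.array([10.0, 5.0, 0.0, 4.5, 1.8, 1.5, 0.3, 2.0, -1.0, 0.0])
    P = np.eye(10)
    Q = 0.01 * np.eye(10)  # assumed process noise
    x_pred, P_pred = kalman_predict(x, P, F, Q)
    print(x_pred[:3])  # position advanced by velocity * dt -> [10.2, 4.9, 0.0]
```

The predicted box (position, dimensions, orientation) is what gets matched against the next frame's detections.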
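The Hungarian-algorithm association step described above can be sketched as follows. The paper's exact matching cost is not given, so Euclidean distance between predicted and detected box centres, and the `max_dist` gate, are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def associate(predicted_centres: np.ndarray,
              detected_centres: np.ndarray,
              max_dist: float = 3.0):
    """Match Kalman-predicted box centres (N x 3) to detected centres (M x 3)
    by minimising total Euclidean distance; pairs farther than max_dist
    are treated as unmatched and dropped."""
    # Pairwise N x M distance matrix between predictions and detections.
    cost = np.linalg.norm(
        predicted_centres[:, None, :] - detected_centres[None, :, :], axis=2
    )
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]

if __name__ == "__main__":
    pred = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])   # two tracks
    det = np.array([[9.8, 0.1, 0.0], [0.2, -0.1, 0.0]])    # two detections
    print(associate(pred, det))  # [(0, 1), (1, 0)]
```

Matched pairs carry the track identity forward to the new detection; unmatched detections would start new tracks and unmatched tracks would eventually be terminated.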