Using a Combination of LiDAR, RADAR, and Image Data for 3D Object Detection in Autonomous Vehicles

Ramin Sahba, Amin Sahba, F. Sahba
DOI: 10.1109/IEMCON51383.2020.9284930
Published in: 2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pp. 0427-0431
Publication date: 2020-11-04
Citations: 6

Abstract

Object detection is one of the most actively studied topics in artificial intelligence and machine learning, and it is especially important for autonomous vehicles. The various methods used to detect objects are based on different types of data, including image, radar, and lidar. Using point clouds is one of the newer approaches to 3D object detection proposed in recent work. One recently presented, efficient method is the PointPillars network: an encoder that learns from the data available in a point cloud and organizes it into a representation of vertical columns (pillars), which can then be used for 3D object detection. In this work, we develop a high-performance model for 3D object detection based on the PointPillars network that exploits a combination of lidar, radar, and image data for autonomous vehicle perception. We use lidar, radar, and image data from the nuScenes dataset to predict 3D boxes for three object classes: car, pedestrian, and bus. To measure and compare results, we use the nuScenes detection score (NDS), a combined metric for the detection task. Results show that increasing the number of lidar sweeps, and combining them with radar and image data, significantly improves the performance of the 3D object detector. We suggest a method to combine the different types of input data (lidar, radar, image) using a weighting system, with the combined result used as the input to the encoder.
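The abstract does not specify how the weighting system combines the sensor streams; a minimal sketch of one plausible interpretation, in which each sensor's points are tagged with a per-modality weight feature before being concatenated into a single point cloud for the pillar encoder, might look like the following. The function name, the weight values, and the extra-feature design are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_point_clouds(lidar_pts, radar_pts, w_lidar=1.0, w_radar=0.5):
    """Combine lidar and radar returns into one weighted point cloud.

    Each input array is (N, 4): x, y, z, intensity. A per-sensor weight
    is appended as an extra feature column, so a downstream pillar
    encoder could learn to discount the noisier modality. The weight
    values here are placeholders, not values from the paper.
    """
    def tag(pts, w):
        # append a constant weight column to each point
        return np.hstack([pts, np.full((pts.shape[0], 1), w)])

    return np.vstack([tag(lidar_pts, w_lidar), tag(radar_pts, w_radar)])

# toy example: 3 lidar points and 2 radar points -> (5, 5) fused cloud
lidar = np.random.rand(3, 4)
radar = np.random.rand(2, 4)
fused = fuse_point_clouds(lidar, radar)
print(fused.shape)  # (5, 5)
```

In practice, camera data cannot be fused this way directly, since images are dense 2D grids rather than point sets; projecting image features onto the lidar points before encoding is one common workaround.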