LiDAR - Stereo Camera Fusion for Accurate Depth Estimation

Hafeez Husain Cholakkal, S. Mentasti, M. Bersani, S. Arrigoni, M. Matteucci, F. Cheli
DOI: 10.23919/AEITAUTOMOTIVE50086.2020.9307398
Published in: 2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE)
Publication date: 2020-11-18
Citations: 3

Abstract

Dense 3D reconstruction of the surrounding environment is one of the fundamental ways of perception for Advanced Driver-Assistance Systems (ADAS). In this field, accurate 3D modeling finds applications in many areas, such as obstacle detection, object tracking, and remote driving. This task can be performed with different sensors, such as cameras, LiDARs, and radars. Each one presents advantages and disadvantages in terms of depth precision, sensor cost, and accuracy in adverse weather conditions. For this reason, many researchers have explored the fusion of multiple sources to overcome each sensor's limits and provide an accurate representation of the vehicle's surroundings. This paper proposes a novel post-processing method for accurate depth estimation, based on a patch-wise depth correction approach, to fuse data from a LiDAR and a stereo camera. This solution allows accurate preservation of edges and object boundaries in multiple challenging scenarios.
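The abstract does not specify how the patch-wise depth correction is computed, so the following is only a minimal illustrative sketch of the general idea: split the dense stereo depth map into patches and, in each patch that contains sparse LiDAR returns, shift the stereo depths by the median LiDAR-stereo residual. The function name, patch size, and the choice of a median offset are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def patchwise_depth_correction(stereo_depth, lidar_depth, patch=16):
    """Illustrative patch-wise fusion of dense stereo depth with sparse LiDAR depth.

    stereo_depth: (H, W) dense depth map from stereo matching
    lidar_depth:  (H, W) sparse depth map, 0 where there is no LiDAR return
    """
    fused = stereo_depth.copy()
    H, W = stereo_depth.shape
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            s = stereo_depth[r:r + patch, c:c + patch]
            l = lidar_depth[r:r + patch, c:c + patch]
            mask = l > 0  # pixels covered by a LiDAR return
            if mask.any():
                # Shift the whole patch by the median LiDAR-minus-stereo
                # residual, so stereo detail is kept but anchored to LiDAR.
                offset = np.median(l[mask] - s[mask])
                fused[r:r + patch, c:c + patch] = s + offset
    return fused
```

Working per patch rather than globally is what lets such a correction adapt to locally varying stereo error while leaving edges and object boundaries, which come from the dense stereo map, intact.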