Fusing laser point cloud and visual image at data level using a new reconstruction algorithm

Lipu Zhou
{"title":"Fusing laser point cloud and visual image at data level using a new reconstruction algorithm","authors":"Lipu Zhou","doi":"10.1109/IVS.2013.6629655","DOIUrl":null,"url":null,"abstract":"Camera and LIDAR provide complementary information for robots to perceive the environment. In this paper, we present a system to fuse laser point cloud and visual information at the data level. Generally, cameras and LIDARs mounted on the unmanned ground vehicle have different viewports. Some objects which are visible to a LIDAR may become invisible to a camera. This will result in false depth assignment for the visual image and incorrect colorization for laser points. The inputs of the system are a color image and the corresponding LIDAR data. Coordinates of 3D laser points are first transformed into the camera coordinate system. Points outside the camera viewing volume are clipped. A new algorithm is proposed to recreate the underlying object surface of the potentially visible laser points as quadrangle mesh by exploiting the structure of the LIDAR as a priori. False edge is eliminated by constraining the angle between the laser scan trace and the radial direction of a given laser point, and quadrangles with non-consistent normal are pruned. In addition, the missing laser points are solved to avoid large holes in the reconstructed mesh. At last z-buffer algorithm is used to work for occlusion reasoning. Experimental results show that our algorithm outperforms the previous one. It can assign correct depth information to the visual image and provide the exact color to each laser point which is visible to the camera.","PeriodicalId":251198,"journal":{"name":"2013 IEEE Intelligent Vehicles Symposium (IV)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE Intelligent Vehicles Symposium (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVS.2013.6629655","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Camera and LIDAR provide complementary information for robots to perceive the environment. In this paper, we present a system that fuses laser point clouds and visual information at the data level. Cameras and LIDARs mounted on an unmanned ground vehicle generally have different viewpoints, so some objects that are visible to the LIDAR may be invisible to the camera. Ignoring this leads to false depth assignments in the visual image and incorrect colorization of the laser points. The inputs to the system are a color image and the corresponding LIDAR data. The coordinates of the 3D laser points are first transformed into the camera coordinate system, and points outside the camera viewing volume are clipped. A new algorithm is proposed to reconstruct the underlying object surface of the potentially visible laser points as a quadrangle mesh by exploiting the scanning structure of the LIDAR as prior knowledge. False edges are eliminated by constraining the angle between the laser scan trace and the radial direction at a given laser point, and quadrangles with inconsistent normals are pruned. In addition, missing laser points are recovered to avoid large holes in the reconstructed mesh. Finally, a z-buffer algorithm is used for occlusion reasoning. Experimental results show that our algorithm outperforms the previous approach: it assigns correct depth information to the visual image and provides the correct color for each laser point that is visible to the camera.
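The abstract outlines a concrete pipeline: transform the laser points into the camera frame, clip points outside the viewing volume, project them into the image, and use a z-buffer to decide which points are actually visible before exchanging depth and color. The sketch below illustrates that flow under stated assumptions; the function name fuse_lidar_with_image and the calibration inputs K, R, t are hypothetical placeholders, and it rasterizes individual points rather than the reconstructed quadrangle mesh that the paper uses for occlusion reasoning.

```python
import numpy as np

def fuse_lidar_with_image(points_lidar, image, K, R, t):
    """Data-level fusion sketch: assign depth to pixels, color to points.

    points_lidar : (N, 3) laser points in the LIDAR frame.
    image        : (H, W, 3) color image.
    K            : (3, 3) camera intrinsic matrix (assumed known).
    R, t         : (3, 3) rotation and (3,) translation, LIDAR -> camera.
    """
    H, W = image.shape[:2]

    # 1. Transform laser points into the camera coordinate system.
    pts_cam = points_lidar @ R.T + t

    # 2. Clip points behind the camera (outside the viewing volume).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # 3. Project onto the image plane and keep points inside the image.
    proj = pts_cam @ K.T
    u = proj[:, 0] / proj[:, 2]
    v = proj[:, 1] / proj[:, 2]
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    pts_cam = pts_cam[inside]
    u = u[inside].astype(int)
    v = v[inside].astype(int)

    # 4. Z-buffer: per pixel, keep only the nearest depth, then mark the
    #    points that match the buffer as visible to the camera.
    depth_map = np.full((H, W), np.inf)
    np.minimum.at(depth_map, (v, u), pts_cam[:, 2])
    visible = pts_cam[:, 2] <= depth_map[v, u] + 1e-6

    # Visible points receive the color of the pixel they project to;
    # occluded points get no color and contribute no depth.
    point_color = np.zeros((pts_cam.shape[0], 3), dtype=image.dtype)
    point_color[visible] = image[v[visible], u[visible]]
    return depth_map, point_color, visible
```

Because isolated points leave gaps in the depth buffer, a background point can still pass the per-pixel test wherever no foreground point happens to project. Rasterizing the reconstructed quadrangle mesh, as the paper proposes, closes those gaps and makes the occlusion test reliable.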