Targetless Multiple Camera-LiDAR Extrinsic Calibration using Object Pose Estimation

Byung-Hyun Yoon, Hyeonwoo Jeong, Kang-Sun Choi
{"title":"基于目标姿态估计的无目标多摄像头-激光雷达外部标定","authors":"Byung-Hyun Yoon, Hyeonwoo Jeong, Kang-Sun Choi","doi":"10.1109/ICRA48506.2021.9560936","DOIUrl":null,"url":null,"abstract":"We propose a targetless method for calibrating the extrinsic parameters among multiple cameras and a LiDAR sensor using object pose estimation. Contrast to previous targetless methods requiring certain geometric features, the proposed method exploits any objects of unspecified shapes in the scene to estimate the calibration parameters in single-scan configuration. Semantic objects in the scene are initially segmented from each modal measurement. Using multiple images, a 3D point cloud is reconstructed up-to-scale. By registering the up-to-scale point cloud to the LiDAR point cloud, we achieve an initial calibration and find correspondences between point cloud segments and image object segments. For each point cloud segment, a 3D mesh model is reconstructed. Based on the correspondence information, the color appearance model for the mesh can be elaborately generated with corresponding object instance segment within the images. Starting from the initial calibration, the calibration is gradually refined by using an object pose estimation technique with the appearance models associated with the 3D mesh models. The experimental results confirmed that the proposed framework achieves multimodal calibrations successfully in a single shot. The proposed method can be effectively applied for extrinsic calibration for plenoptic imaging systems of dozens of cameras in single-scan configuration without specific targets.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Targetless Multiple Camera-LiDAR Extrinsic Calibration using Object Pose Estimation\",\"authors\":\"Byung-Hyun Yoon, Hyeonwoo Jeong, Kang-Sun Choi\",\"doi\":\"10.1109/ICRA48506.2021.9560936\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a targetless method for calibrating the extrinsic parameters among multiple cameras and a LiDAR sensor using object pose estimation. Contrast to previous targetless methods requiring certain geometric features, the proposed method exploits any objects of unspecified shapes in the scene to estimate the calibration parameters in single-scan configuration. Semantic objects in the scene are initially segmented from each modal measurement. Using multiple images, a 3D point cloud is reconstructed up-to-scale. By registering the up-to-scale point cloud to the LiDAR point cloud, we achieve an initial calibration and find correspondences between point cloud segments and image object segments. For each point cloud segment, a 3D mesh model is reconstructed. Based on the correspondence information, the color appearance model for the mesh can be elaborately generated with corresponding object instance segment within the images. Starting from the initial calibration, the calibration is gradually refined by using an object pose estimation technique with the appearance models associated with the 3D mesh models. The experimental results confirmed that the proposed framework achieves multimodal calibrations successfully in a single shot. 
The proposed method can be effectively applied for extrinsic calibration for plenoptic imaging systems of dozens of cameras in single-scan configuration without specific targets.\",\"PeriodicalId\":108312,\"journal\":{\"name\":\"2021 IEEE International Conference on Robotics and Automation (ICRA)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRA48506.2021.9560936\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRA48506.2021.9560936","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

We propose a targetless method for calibrating the extrinsic parameters among multiple cameras and a LiDAR sensor using object pose estimation. In contrast to previous targetless methods that require certain geometric features, the proposed method exploits arbitrary objects of unspecified shapes in the scene to estimate the calibration parameters in a single-scan configuration. Semantic objects in the scene are first segmented in each modal measurement. From multiple images, a 3D point cloud is reconstructed up to scale. By registering this up-to-scale point cloud to the LiDAR point cloud, we obtain an initial calibration and find correspondences between point cloud segments and image object segments. For each point cloud segment, a 3D mesh model is reconstructed. Based on the correspondence information, a color appearance model for each mesh is generated from the corresponding object instance segments in the images. Starting from the initial calibration, the calibration is gradually refined using an object pose estimation technique together with the appearance models associated with the 3D mesh models. The experimental results confirm that the proposed framework successfully achieves multimodal calibration in a single shot. The proposed method can be effectively applied to extrinsic calibration of plenoptic imaging systems with dozens of cameras in a single-scan configuration without specific targets.
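The initial calibration step registers the up-to-scale multi-view reconstruction to the metric LiDAR point cloud, which amounts to estimating a similarity transform (scale, rotation, translation). The paper does not provide an implementation, so the following is only a minimal NumPy sketch of the classical Umeyama closed-form solution for that alignment, assuming putative point correspondences between the two clouds are already available; the function name `umeyama_similarity` and the synthetic test below are illustrative assumptions, not the authors' code.

```python
import numpy as np


def umeyama_similarity(src, dst):
    """Estimate (s, R, t) minimizing ||dst_i - (s * R @ src_i + t)||^2.

    src, dst: (N, 3) arrays of corresponding points, e.g. an up-to-scale
    SfM cloud (src) and the metric LiDAR cloud (dst).
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # 3x3 cross-covariance between the centered point sets
    cov = dst_c.T @ src_c / src.shape[0]
    U, D, Vt = np.linalg.svd(cov)

    # Reflection correction so R is a proper rotation (det(R) = +1)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src     # isotropic scale factor
    t = mu_dst - s * R @ mu_src
    return s, R, t


if __name__ == "__main__":
    # Synthetic check: recover a known similarity transform from random points.
    rng = np.random.default_rng(0)
    src = rng.normal(size=(100, 3))
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    s_true, t_true = 4.2, np.array([1.0, -2.0, 0.5])
    dst = s_true * src @ R_true.T + t_true
    s, R, t = umeyama_similarity(src, dst)
    print(np.allclose(s, s_true), np.allclose(R, R_true), np.allclose(t, t_true))
```

In practice the correspondences would come from segment-level matching between the reconstructed and LiDAR clouds rather than being given, and the resulting similarity transform would serve only as the initial calibration that the pose-estimation-based refinement then improves.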