Byung-Hyun Yoon, Hyeonwoo Jeong, Kang-Sun Choi
2021 IEEE International Conference on Robotics and Automation (ICRA), May 30, 2021
DOI: 10.1109/ICRA48506.2021.9560936
Targetless Multiple Camera-LiDAR Extrinsic Calibration using Object Pose Estimation
We propose a targetless method for calibrating the extrinsic parameters among multiple cameras and a LiDAR sensor using object pose estimation. In contrast to previous targetless methods that require specific geometric features, the proposed method exploits arbitrary objects of unspecified shapes in the scene to estimate the calibration parameters in a single-scan configuration. Semantic objects in the scene are first segmented from each modal measurement. From multiple images, a 3D point cloud is reconstructed up to scale. By registering this up-to-scale point cloud to the LiDAR point cloud, we obtain an initial calibration and establish correspondences between point cloud segments and image object segments. For each point cloud segment, a 3D mesh model is reconstructed. Based on the correspondence information, a color appearance model for each mesh is generated from the corresponding object instance segments within the images. Starting from the initial calibration, the calibration is gradually refined using an object pose estimation technique with the appearance models associated with the 3D mesh models. Experimental results confirm that the proposed framework achieves multimodal calibration successfully in a single shot. The proposed method can be effectively applied to extrinsic calibration of plenoptic imaging systems comprising dozens of cameras in a single-scan configuration without specific targets.
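The registration of an up-to-scale reconstruction to a metric LiDAR point cloud, as described in the abstract, amounts to estimating a similarity transform (scale, rotation, translation). As a minimal sketch (not the paper's actual procedure, which works on segments without known correspondences), the classical Umeyama closed-form alignment recovers these parameters when point correspondences are assumed:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Closed-form similarity transform (Umeyama, 1991) so that
    dst ~= s * R @ src + t, given corresponding (N, 3) point sets."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    # Cross-covariance between the centered point sets
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    # Reflection guard: force a proper rotation (det(R) = +1)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Synthetic check: recover a known scale/rotation/translation,
# mimicking an up-to-scale SfM cloud aligned to a metric LiDAR cloud.
rng = np.random.default_rng(0)
sfm_pts = rng.standard_normal((100, 3))          # up-to-scale reconstruction
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
s_true, t_true = 2.5, np.array([1.0, -2.0, 0.5])
lidar_pts = s_true * sfm_pts @ R_true.T + t_true  # "LiDAR" measurements
s, R, t = umeyama_alignment(sfm_pts, lidar_pts)
```

Resolving the unknown reconstruction scale is what distinguishes this step from rigid-only ICP; in practice the paper's framework must additionally establish the segment-level correspondences that this sketch takes as given.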