{"title":"基于运动的相机-激光雷达在线标定与3D相机地面","authors":"Dongkyu Lee;Eun-Jung Bong;Seok-Cheol Kee","doi":"10.1109/TIV.2024.3451058","DOIUrl":null,"url":null,"abstract":"This paper addresses a novel camera-LiDAR online targetless calibration method. As the installation of heterogeneous sensors in autonomous vehicles is increasing, the importance of sensor calibration is also growing. An automatic sensor calibration function is essential for safe autonomous driving, responding to the sensor's geometric change. In this paper, background knowledge of sensor calibration is explained, and related papers are referenced. Depth estimation and semantic segmentation models are learned to detect significant regions from the camera using open and custom datasets. This paper describes the online calibration procedures based on motion-based calibration between the camera and LiDAR. Motion-based calibration faces a challenge for lack of roll and pitch variation, and our method overcomes this challenge by estimating a camera 3D ground plane from semantic segmentation and depth estimation and simultaneously detecting a 3D LiDAR ground plane. LiDAR-to-camera roll, pitch, and z values are then extracted using the camera and LiDAR ground plane, and the motion-based calibration problem is solved and optimized using those values as constraints. Experimental results conducted on urban roads and the proving ground C-track using our autonomous vehicles showed that the proposed method is quantitatively and qualitatively improving the existing method.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"10 5","pages":"3278-3290"},"PeriodicalIF":14.3000,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Motion-Based Camera-LiDAR Online Calibration With 3D Camera Ground\",\"authors\":\"Dongkyu Lee;Eun-Jung Bong;Seok-Cheol Kee\",\"doi\":\"10.1109/TIV.2024.3451058\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper addresses a novel camera-LiDAR online targetless calibration method. As the installation of heterogeneous sensors in autonomous vehicles is increasing, the importance of sensor calibration is also growing. An automatic sensor calibration function is essential for safe autonomous driving, responding to the sensor's geometric change. In this paper, background knowledge of sensor calibration is explained, and related papers are referenced. Depth estimation and semantic segmentation models are learned to detect significant regions from the camera using open and custom datasets. This paper describes the online calibration procedures based on motion-based calibration between the camera and LiDAR. Motion-based calibration faces a challenge for lack of roll and pitch variation, and our method overcomes this challenge by estimating a camera 3D ground plane from semantic segmentation and depth estimation and simultaneously detecting a 3D LiDAR ground plane. LiDAR-to-camera roll, pitch, and z values are then extracted using the camera and LiDAR ground plane, and the motion-based calibration problem is solved and optimized using those values as constraints. 
Experimental results conducted on urban roads and the proving ground C-track using our autonomous vehicles showed that the proposed method is quantitatively and qualitatively improving the existing method.\",\"PeriodicalId\":36532,\"journal\":{\"name\":\"IEEE Transactions on Intelligent Vehicles\",\"volume\":\"10 5\",\"pages\":\"3278-3290\"},\"PeriodicalIF\":14.3000,\"publicationDate\":\"2024-08-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Intelligent Vehicles\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10654524/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Vehicles","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10654524/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Motion-Based Camera-LiDAR Online Calibration With 3D Camera Ground
This paper presents a novel camera-LiDAR online targetless calibration method. As the number of heterogeneous sensors installed in autonomous vehicles increases, sensor calibration becomes increasingly important. An automatic calibration function that responds to geometric changes in the sensors is essential for safe autonomous driving. This paper explains the background of sensor calibration and reviews related work. Depth estimation and semantic segmentation models are trained on open and custom datasets to detect significant regions in the camera image. The paper then describes an online calibration procedure built on motion-based calibration between the camera and LiDAR. Motion-based calibration suffers from the lack of roll and pitch variation in typical driving; our method overcomes this by estimating a 3D camera ground plane from semantic segmentation and depth estimation while simultaneously detecting a 3D LiDAR ground plane. The LiDAR-to-camera roll, pitch, and z values are then extracted from the camera and LiDAR ground planes, and the motion-based calibration problem is solved and optimized with those values as constraints. Experiments conducted on urban roads and the C-track proving ground with our autonomous vehicles show that the proposed method improves on the existing method both quantitatively and qualitatively.
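
The abstract describes the method only at a high level. As a rough illustration of the ground-plane constraint it refers to, the Python/NumPy sketch below shows one way roll, pitch, and z could be recovered from a ground plane fitted to 3D points, with the same procedure applied to LiDAR ground points and to camera road pixels back-projected via the estimated depth. The function names, the RANSAC parameters, and the x-forward/y-left/z-up axis convention are illustrative assumptions, not the authors' implementation; the exact angle convention depends on the rotation order used in the paper.

import numpy as np

def fit_ground_plane(points, iterations=200, threshold=0.05, seed=0):
    # Fit a plane n . p + d = 0 to 3D ground candidates with a simple
    # RANSAC loop. `points` is an (N, 3) array, e.g. LiDAR returns near
    # the road surface, or camera pixels labelled "road" by the
    # segmentation model and back-projected with the estimated depth.
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = 0, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        normal /= norm
        d = -normal @ sample[0]
        inliers = int(np.sum(np.abs(points @ normal + d) < threshold))
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane  # (unit normal, plane offset)

def plane_to_roll_pitch_z(normal, d):
    # Recover roll, pitch, and sensor height above the ground from a
    # plane n . p + d = 0 expressed in the sensor frame. Assumes a
    # right-handed frame with x forward, y left, z up, so a level
    # ground plane has a normal close to (0, 0, 1).
    if normal[2] < 0:                    # make the normal point upwards
        normal, d = -normal, -d
    nx, ny, nz = normal
    roll = np.arctan2(ny, nz)            # tilt about the x (forward) axis
    pitch = -np.arcsin(np.clip(nx, -1.0, 1.0))  # tilt about the y axis
    z = abs(d)                           # distance from the origin to the plane
    return roll, pitch, z

# Hypothetical usage: differences in roll, pitch, and z between the two
# fitted ground planes would serve as the constraints fed into the
# motion-based calibration solver described in the abstract.
# lidar_plane  = fit_ground_plane(lidar_ground_points)
# camera_plane = fit_ground_plane(camera_ground_points)
# roll_l, pitch_l, z_l = plane_to_roll_pitch_z(*lidar_plane)
# roll_c, pitch_c, z_c = plane_to_roll_pitch_z(*camera_plane)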
Journal Introduction:
The IEEE Transactions on Intelligent Vehicles (T-IV) is a premier platform for publishing peer-reviewed articles that present innovative research concepts, application results, significant theoretical findings, and application case studies in the field of intelligent vehicles. With a particular emphasis on automated vehicles within roadway environments, T-IV aims to raise awareness of pressing research and application challenges.
Our focus is on providing critical information to the intelligent vehicle community, serving as a dissemination vehicle for IEEE ITS Society members and others interested in learning about the state-of-the-art developments and progress in research and applications related to intelligent vehicles. Join us in advancing knowledge and innovation in this dynamic field.