{"title":"Joint Ego-motion Estimation Using a Laser Scanner and a Monocular Camera Through Relative Orientation Estimation and 1-DoF ICP","authors":"Kaihong Huang, C. Stachniss","doi":"10.1109/IROS.2018.8593965","DOIUrl":null,"url":null,"abstract":"Pose estimation and mapping are key capabilities of most autonomous vehicles and thus a number of localization and SLAM algorithms have been developed in the past. Autonomous robots and cars are typically equipped with multiple sensors. Often, the sensor suite includes a camera and a laser range finder. In this paper, we consider the problem of incremental ego-motion estimation, using both, a monocular camera and a laser range finder jointly. We propose a new algorithm, that exploits the advantages of both sensors-the ability of cameras to determine orientations well and the ability of laser range finders to estimate the scale and to directly obtain 3D point clouds. Our approach estimates the 5 degrees of freedom relative orientation from image pairs through feature point correspondences and formulates the remaining scale estimation as a new variant of the iterative closest point problem with only one degree of freedom. We furthermore exploit the camera information in a new way to constrain the data association between laser point clouds. The experiments presented in this paper suggest that our approach is able to accurately estimate the ego-motion of a vehicle and that we obtain more accurate frame-to-frame alignments than with one sensor modality alone.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"19 1","pages":"671-677"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2018.8593965","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
Pose estimation and mapping are key capabilities of most autonomous vehicles, and thus a number of localization and SLAM algorithms have been developed in the past. Autonomous robots and cars are typically equipped with multiple sensors; often, the sensor suite includes a camera and a laser range finder. In this paper, we consider the problem of incremental ego-motion estimation using a monocular camera and a laser range finder jointly. We propose a new algorithm that exploits the advantages of both sensors: the ability of cameras to determine orientations well, and the ability of laser range finders to estimate scale and to directly obtain 3D point clouds. Our approach estimates the 5-degree-of-freedom relative orientation from image pairs through feature point correspondences and formulates the remaining scale estimation as a new variant of the iterative closest point (ICP) problem with only one degree of freedom. We furthermore exploit the camera information in a new way to constrain the data association between laser point clouds. The experiments presented in this paper suggest that our approach is able to accurately estimate the ego-motion of a vehicle and that it yields more accurate frame-to-frame alignments than either sensor modality alone.
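
To make the two-stage idea concrete, below is a minimal sketch of the scale-only ICP step, assuming the rotation R and the unit translation direction t_dir have already been recovered from an image pair (e.g., by essential-matrix decomposition). The function name estimate_scale_1dof_icp and the closed-form update are illustrative assumptions, not the authors' implementation, and the paper's camera-based data-association constraint is omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_scale_1dof_icp(src, dst, R, t_dir, n_iters=20, tol=1e-6):
    """Sketch of a 1-DoF ICP: find the translation scale s such that
    R @ p + s * t_dir best aligns the source cloud onto the target.

    src, dst : (N, 3) and (M, 3) laser point clouds
    R        : (3, 3) rotation from the image pair (assumed known)
    t_dir    : (3,) unit translation direction (assumed known)
    """
    t_dir = t_dir / np.linalg.norm(t_dir)
    rotated = src @ R.T          # rotation is fixed, apply it once
    tree = cKDTree(dst)          # nearest-neighbor data association
    s = 0.0
    for _ in range(n_iters):
        # Re-associate points using the current scale estimate.
        _, idx = tree.query(rotated + s * t_dir)
        # Residuals that the translation s * t_dir must explain.
        r = dst[idx] - rotated   # (N, 3)
        # Closed-form 1-DoF least squares: the optimal s is the mean
        # projection of the residuals onto the unit direction t_dir.
        s_new = float(np.mean(r @ t_dir))
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s
```

Because the rotation and translation direction are fixed by the camera, the inner least-squares problem is one-dimensional and has a closed-form solution; this is what makes the ICP variant described in the abstract much cheaper than a full 6-DoF alignment.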