GPC-LIVO: Point-wise LiDAR-inertial-visual odometry with geometric and photometric composite measurement model
Authors: Chenxi Ye, Bingfei Nan
DOI: 10.1016/j.robot.2024.104864
Journal: Robotics and Autonomous Systems, Volume 185, Article 104864 (JCR Q1, Automation & Control Systems)
Published: 2024-11-20
Citation count: 0
Abstract
In the pursuit of precision within Simultaneous Localization and Mapping (SLAM), multi-sensor fusion has emerged as a validated strategy with vast potential in robotics applications. This work presents GPC-LIVO, an accurate and robust LiDAR-Inertial-Visual Odometry system that integrates geometric and photometric information into one composite measurement model with a point-wise updating architecture. GPC-LIVO constructs a belief factor model to assign different weights to the geometric and photometric observations in the measurement model, and adopts an adaptive error-state Kalman filter back-end to dynamically estimate the covariances of the two observation types. Since LiDAR points have larger measurement errors at endpoints and edges, we fuse photometric information only for LiDAR planar features and propose a corresponding validation method based on the associated image plane. Comprehensive experimentation is conducted on GPC-LIVO, encompassing both publicly available data sequences and data collected from our bespoke hardware setup. The results establish the superior performance of the proposed system compared to other state-of-the-art odometry frameworks, and demonstrate its ability to operate effectively in various challenging environmental conditions. GPC-LIVO outputs state estimates at high frequency (1-5 kHz, varying with the number of LiDAR points processed in a frame) and achieves computation times suitable for real-time operation.
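The composite measurement model described above, in which a belief factor weights geometric and photometric observations inside a Kalman-style point-wise update, can be illustrated with a minimal sketch. The code below is only an assumption about the general shape of such an update: it stacks one geometric and one photometric residual for a single point and inflates the noise variance of the less-trusted channel via a scalar `belief` weight. The paper's actual belief model, error-state formulation, and adaptive covariance estimation are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def fused_update(x, P, z_geo, z_photo, h_geo, h_photo, r_geo, r_photo, belief):
    """One point-wise Kalman-style update fusing a geometric and a
    photometric observation of the same LiDAR point (hypothetical sketch).

    `belief` in (0, 1) stands in for the paper's belief factor: it
    inflates the noise variance of the less-trusted channel, so the
    Kalman gain down-weights that channel.
    """
    z = np.array([z_geo, z_photo])        # composite measurement vector
    H = np.vstack([h_geo, h_photo])       # stacked measurement Jacobian rows
    # Belief-weighted measurement covariance: lower belief in a channel
    # means a larger effective variance for that observation.
    R = np.diag([r_geo / belief, r_photo / (1.0 - belief)])
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # state correction
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x_new, P_new

# Toy 2-state example: one geometric and one photometric residual,
# with the geometric channel trusted more (belief = 0.8).
x0, P0 = np.zeros(2), np.eye(2)
x1, P1 = fused_update(x0, P0,
                      z_geo=0.1, z_photo=-0.2,
                      h_geo=np.array([1.0, 0.0]),
                      h_photo=np.array([0.0, 1.0]),
                      r_geo=0.01, r_photo=0.04, belief=0.8)
```

In this toy run the update contracts the state covariance, and the geometric residual (with its smaller effective variance) pulls the state harder than the photometric one, which is the qualitative behavior the belief-factor weighting is meant to produce.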
Journal Description
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory based robot control and learning in the context of autonomous systems.
Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.