Shengyu Li, Shuolong Chen, Xingxing Li, Yuxuan Zhou, Shiwen Wang
Accurate and automatic spatiotemporal calibration for multi-modal sensor system based on continuous-time optimization

Journal: Information Fusion, Volume 120, Article 103071 (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 14.7)
DOI: 10.1016/j.inffus.2025.103071
Published: 2025-03-13 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1566253525001447
Citations: 0
Abstract
Current intelligent robotic applications, such as unmanned aerial vehicles (UAVs) and autonomous driving, generally rely on multi-modal sensor fusion to strive towards higher levels of autonomy. To achieve this goal, accurate and consistent inter-sensor spatiotemporal relationships are a fundamental prerequisite for fusing heterogeneous sensor information. Nevertheless, current calibration frameworks typically require specialized tools or additional infrastructure, rendering them labor-intensive and applicable only to certain sensor combinations. To address this issue, we propose an accurate and easy-to-use spatiotemporal calibration framework tailored to today's primary sensors, including the inertial measurement unit (IMU), LiDAR, camera, and radar. The framework extends seamlessly to other sensors that can independently recover ego-motion or ego-velocity, such as wheel odometry and GPS receivers. A rigorous multistage initialization approach is first developed to obtain reasonable initial guesses of the spatiotemporal parameters without relying on prior knowledge of the environment or specialized motions. Following an IMU-centric design, the spatiotemporal parameters of the other sensors relative to the IMU are then jointly optimized and refined via continuous-time batch estimation, without requiring overlapping fields of view (FoVs) among the exteroceptive sensors. A comprehensive series of experiments quantitatively evaluates the proposed method in both simulated and real-world scenarios. The results demonstrate that the proposed method achieves calibration accuracy comparable to state-of-the-art target-based calibration methods and outperforms targetless calibration methods in terms of consistency and repeatability.
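The core idea behind continuous-time spatiotemporal calibration is that the trajectory is represented as a smooth function of time (commonly a B-spline over control points), so a sensor measurement stamped in its own clock can be compared against the trajectory evaluated at a shifted time, making the temporal offset an ordinary parameter of the optimization. The sketch below is purely illustrative and is not the paper's implementation: it evaluates a 1-D uniform cubic B-spline and forms a position residual that couples the spline control points (spatial) with an unknown time offset `t_d` (temporal). All function and variable names are hypothetical.

```python
# Illustrative sketch of the continuous-time calibration idea, assuming a
# uniform cubic B-spline trajectory in 1-D. Not the paper's implementation.

def eval_bspline(ctrl, t0, dt, t):
    """Evaluate a uniform cubic B-spline at time t.

    ctrl : list of scalar control values
    t0   : time corresponding to spline parameter s = 0
    dt   : knot spacing
    Requires 1 <= i and i + 2 < len(ctrl), where i = floor((t - t0) / dt).
    """
    s = (t - t0) / dt
    i = int(s)      # index of the active segment
    u = s - i       # normalized position inside the segment, u in [0, 1)
    c0, c1, c2, c3 = ctrl[i - 1], ctrl[i], ctrl[i + 1], ctrl[i + 2]
    # Standard uniform cubic B-spline basis functions (they sum to 1).
    b0 = (1.0 - u) ** 3 / 6.0
    b1 = (3.0 * u**3 - 6.0 * u**2 + 4.0) / 6.0
    b2 = (-3.0 * u**3 + 3.0 * u**2 + 3.0 * u + 1.0) / 6.0
    b3 = u**3 / 6.0
    return b0 * c0 + b1 * c1 + b2 * c2 + b3 * c3

def position_residual(ctrl, t0, dt, t_meas, z, t_d):
    """Residual between a measurement z stamped t_meas in the sensor clock
    and the spline evaluated at the reference (e.g. IMU) time t_meas + t_d.
    Jointly optimizing over ctrl and t_d is what makes the calibration
    spatiotemporal: the same residual constrains both kinds of parameters."""
    return eval_bspline(ctrl, t0, dt, t_meas + t_d) - z
```

In a full estimator, residuals like `position_residual` from every sensor would be stacked into one nonlinear least-squares problem and minimized over the control points, extrinsics, and per-sensor time offsets simultaneously; because the B-spline is differentiable in time, the derivative of the residual with respect to `t_d` is simply the trajectory velocity at the shifted timestamp.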
Journal Introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.