{"title":"iKalibr-RGBD: Partially-Specialized Target-Free Visual-Inertial Spatiotemporal Calibration For RGBDs via Continuous-Time Velocity Estimation","authors":"Shuolong Chen, Xingxing Li, Shengyu Li, Yuxuan Zhou","doi":"arxiv-2409.07116","DOIUrl":null,"url":null,"abstract":"Visual-inertial systems have been widely studied and applied in the last two\ndecades, mainly due to their low cost and power consumption, small footprint,\nand high availability. Such a trend simultaneously leads to a large amount of\nvisual-inertial calibration methods being presented, as accurate spatiotemporal\nparameters between sensors are a prerequisite for visual-inertial fusion. In\nour previous work, i.e., iKalibr, a continuous-time-based visual-inertial\ncalibration method was proposed as a part of one-shot multi-sensor resilient\nspatiotemporal calibration. While requiring no artificial target brings\nconsiderable convenience, computationally expensive pose estimation is demanded\nin initialization and batch optimization, limiting its availability.\nFortunately, this could be vastly improved for the RGBDs with additional depth\ninformation, by employing mapping-free ego-velocity estimation instead of\nmapping-based pose estimation. In this paper, we present the continuous-time\nego-velocity estimation-based RGBD-inertial spatiotemporal calibration, termed\nas iKalibr-RGBD, which is also targetless but computationally efficient. The\ngeneral pipeline of iKalibr-RGBD is inherited from iKalibr, composed of a\nrigorous initialization procedure and several continuous-time batch\noptimizations. 
The implementation of iKalibr-RGBD is open-sourced at\n(https://github.com/Unsigned-Long/iKalibr) to benefit the research community.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07116","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Visual-inertial systems have been widely studied and applied over the last two
decades, mainly due to their low cost, low power consumption, small footprint,
and high availability. This trend has in turn led to a large number of
visual-inertial calibration methods being proposed, as accurate spatiotemporal
parameters between sensors are a prerequisite for visual-inertial fusion. In
our previous work, iKalibr, a continuous-time visual-inertial calibration
method was proposed as part of a one-shot, multi-sensor, resilient
spatiotemporal calibration framework. While requiring no artificial target
brings considerable convenience, computationally expensive pose estimation is
required in both initialization and batch optimization, limiting its
applicability. Fortunately, for RGBD cameras this cost can be greatly reduced
by exploiting the additional depth information, replacing mapping-based pose
estimation with mapping-free ego-velocity estimation. In this paper, we present
a continuous-time, ego-velocity-estimation-based RGBD-inertial spatiotemporal
calibration method, termed iKalibr-RGBD, which is likewise targetless but
computationally efficient. The general pipeline of iKalibr-RGBD is inherited
from iKalibr and consists of a rigorous initialization procedure followed by
several continuous-time batch optimizations. The implementation of
iKalibr-RGBD is open-sourced at https://github.com/Unsigned-Long/iKalibr to
benefit the research community.
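The abstract's key point is that per-pixel depth makes ego-velocity observable without building a map: the classical instantaneous motion-field equations relate optical flow at a normalized image point (x, y) with depth Z linearly to the camera's linear velocity v and angular velocity w. The sketch below is not iKalibr-RGBD's actual estimator (the function name and setup are illustrative only); it merely shows that, given flow and depth for a handful of points, (v, w) follows from a small linear least-squares solve.

```python
import numpy as np

def ego_velocity_from_flow(pts, depths, flows):
    """Recover camera linear velocity v = (vx, vy, vz) and angular
    velocity w = (wx, wy, wz) from normalized image points, their depths,
    and observed optical flow, via the instantaneous motion-field model:
        u = (x*vz - vx)/Z + x*y*wx - (1 + x^2)*wy + y*wz
        s = (y*vz - vy)/Z + (1 + y^2)*wx - x*y*wy - x*wz
    Each point contributes a 2x6 block; the stacked system is solved in
    the least-squares sense. Illustrative sketch, not the paper's method.
    """
    rows, rhs = [], []
    for (x, y), Z, (u, s) in zip(pts, depths, flows):
        # Coefficients ordered as [vx, vy, vz, wx, wy, wz].
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1 + y * y, -x * y, -x])
        rhs.extend([u, s])
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return sol[:3], sol[3:]
```

With three or more points in general position the 6 unknowns are fully determined, which is why no feature map or pose history is needed; without depth, vx, vy, vz are only recoverable up to scale.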