Jaehyung Jung, Simon Boche, Sebastian Barbas Laina, Stefan Leutenegger
{"title":"具有体积占位映射的不确定性感知视觉惯性 SLAM","authors":"Jaehyung Jung, Simon Boche, Sebastian Barbas Laina, Stefan Leutenegger","doi":"arxiv-2409.12051","DOIUrl":null,"url":null,"abstract":"We propose visual-inertial simultaneous localization and mapping that tightly\ncouples sparse reprojection errors, inertial measurement unit pre-integrals,\nand relative pose factors with dense volumetric occupancy mapping. Hereby depth\npredictions from a deep neural network are fused in a fully probabilistic\nmanner. Specifically, our method is rigorously uncertainty-aware: first, we use\ndepth and uncertainty predictions from a deep network not only from the robot's\nstereo rig, but we further probabilistically fuse motion stereo that provides\ndepth information across a range of baselines, therefore drastically increasing\nmapping accuracy. Next, predicted and fused depth uncertainty propagates not\nonly into occupancy probabilities but also into alignment factors between\ngenerated dense submaps that enter the probabilistic nonlinear least squares\nestimator. This submap representation offers globally consistent geometry at\nscale. Our method is thoroughly evaluated in two benchmark datasets, resulting\nin localization and mapping accuracy that exceeds the state of the art, while\nsimultaneously offering volumetric occupancy directly usable for downstream\nrobotic planning and control in real-time.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Uncertainty-Aware Visual-Inertial SLAM with Volumetric Occupancy Mapping\",\"authors\":\"Jaehyung Jung, Simon Boche, Sebastian Barbas Laina, Stefan Leutenegger\",\"doi\":\"arxiv-2409.12051\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose visual-inertial simultaneous localization and mapping that tightly\\ncouples sparse reprojection errors, inertial measurement unit pre-integrals,\\nand relative pose factors with dense volumetric occupancy mapping. Hereby depth\\npredictions from a deep neural network are fused in a fully probabilistic\\nmanner. Specifically, our method is rigorously uncertainty-aware: first, we use\\ndepth and uncertainty predictions from a deep network not only from the robot's\\nstereo rig, but we further probabilistically fuse motion stereo that provides\\ndepth information across a range of baselines, therefore drastically increasing\\nmapping accuracy. Next, predicted and fused depth uncertainty propagates not\\nonly into occupancy probabilities but also into alignment factors between\\ngenerated dense submaps that enter the probabilistic nonlinear least squares\\nestimator. This submap representation offers globally consistent geometry at\\nscale. 
Our method is thoroughly evaluated in two benchmark datasets, resulting\\nin localization and mapping accuracy that exceeds the state of the art, while\\nsimultaneously offering volumetric occupancy directly usable for downstream\\nrobotic planning and control in real-time.\",\"PeriodicalId\":501031,\"journal\":{\"name\":\"arXiv - CS - Robotics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.12051\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Uncertainty-Aware Visual-Inertial SLAM with Volumetric Occupancy Mapping
We propose a visual-inertial simultaneous localization and mapping (SLAM) system that tightly
couples sparse reprojection errors, inertial measurement unit (IMU) pre-integration factors,
and relative pose factors with dense volumetric occupancy mapping. Depth predictions from a
deep neural network are fused in a fully probabilistic manner. Specifically, our method is
rigorously uncertainty-aware: first, we use depth and uncertainty predictions from a deep
network not only for the robot's stereo rig, but also probabilistically fuse motion stereo,
which provides depth information across a range of baselines and thereby drastically
increases mapping accuracy.
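As a rough illustration of this kind of probabilistic fusion (not the authors' exact formulation), the sketch below combines a stereo-rig depth prediction with motion-stereo depths from longer baselines by inverse-variance weighting under a Gaussian assumption; the function name and the numbers in the example are hypothetical.

```python
import numpy as np

def fuse_depths(depths, variances):
    """Fuse several depth estimates of the same pixel by inverse-variance
    weighting (Gaussian assumption), returning fused depth and variance."""
    depths = np.asarray(depths, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                    # precision (information) of each estimate
    fused_var = 1.0 / w.sum()              # variance shrinks as estimates are added
    fused_depth = fused_var * (w * depths).sum()
    return fused_depth, fused_var

# Hypothetical example: one stereo-rig prediction plus two motion-stereo
# predictions whose longer baselines give lower variance at this range.
depth, var = fuse_depths([4.10, 3.95, 4.02], [0.25, 0.09, 0.04])
```

Inverse-variance weighting is the minimum-variance fusion rule for independent Gaussian estimates, which is why adding motion-stereo observations across a range of baselines tightens the fused depth.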
Next, the predicted and fused depth uncertainty propagates not only into the occupancy
probabilities but also into the alignment factors between the generated dense submaps, which
enter the probabilistic nonlinear least-squares estimator.
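To make the role of such alignment factors concrete, here is a minimal, hypothetical sketch of a relative-pose residual between two submap frames, whitened by a measurement covariance that, in a setup like this, would come from the propagated depth uncertainty; the 6-DoF small-angle error and all names are illustrative rather than the paper's exact parameterization.

```python
import numpy as np

def whitened_alignment_residual(T_wa, T_wb, T_ab_meas, cov_meas):
    """Whitened residual of a submap-to-submap relative-pose factor.

    T_wa, T_wb : 4x4 poses of submap frames a and b in the world frame.
    T_ab_meas  : 4x4 relative pose of b in a, measured by dense alignment.
    cov_meas   : 6x6 covariance of that measurement (e.g. from depth uncertainty).
    """
    T_ab_est = np.linalg.inv(T_wa) @ T_wb
    T_err = np.linalg.inv(T_ab_meas) @ T_ab_est

    # 6-DoF error: translation plus a small-angle rotation vector
    # (vee of the skew-symmetric part of the rotation error).
    R = T_err[:3, :3]
    rot_err = 0.5 * np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    r = np.concatenate([T_err[:3, 3], rot_err])

    # Whitening by the square root of the information matrix: a larger
    # measurement covariance means a smaller influence on the estimator.
    L = np.linalg.cholesky(np.linalg.inv(cov_meas))
    return L.T @ r
```

Summing the squares of such whitened residuals reproduces the Mahalanobis cost r^T Sigma^{-1} r that a nonlinear least-squares solver minimizes, so uncertain alignments are automatically down-weighted.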
This submap representation offers globally consistent geometry at scale. Our method is
thoroughly evaluated on two benchmark datasets, achieving localization and mapping accuracy
that exceeds the state of the art while simultaneously providing volumetric occupancy
directly usable for downstream robotic planning and control in real time.
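As one possible way for depth uncertainty to enter the occupancy probabilities (a simplified stand-in for the paper's actual model), the sketch below softens a log-odds inverse sensor model with the fused depth's standard deviation; the parameter values and the function itself are assumptions for illustration.

```python
import math

def occupancy_logodds_update(logodds_prior, voxel_range, depth_mean, depth_std,
                             p_hit=0.85, p_miss=0.35):
    """Update one voxel's occupancy log-odds from a fused depth measurement.

    Intended for voxels along the ray up to roughly depth_mean + 3*depth_std:
    voxels well in front of the measured surface receive a 'miss', voxels near
    it a 'hit', and the transition width scales with depth_std, so uncertain
    depths change the map less sharply.
    """
    # Probability (under a Gaussian depth model) that the voxel lies at or
    # beyond the measured surface.
    z = (voxel_range - depth_mean) / max(depth_std, 1e-6)
    w_hit = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Blend free-space and occupied measurement probabilities, then accumulate
    # in log-odds form as in a standard occupancy grid.
    p_meas = (1.0 - w_hit) * p_miss + w_hit * p_hit
    return logodds_prior + math.log(p_meas / (1.0 - p_meas))
```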