H-SLAM: Hybrid direct–indirect visual SLAM
Georges Younes, Douaa Khalil, John Zelek, Daniel Asmar
Robotics and Autonomous Systems, Vol. 179, Article 104729, published 2024-06-06. DOI: 10.1016/j.robot.2024.104729
Abstract
The recent success of hybrid methods in monocular odometry has led to many attempts to generalize the performance gains to hybrid monocular SLAM. However, most attempts fall short in several respects, with the most prominent issue being the need for two different map representations (local and global maps), with each requiring different, computationally expensive, and often redundant processes to maintain. Moreover, these maps tend to drift with respect to each other, resulting in contradicting pose and scene estimates, and leading to catastrophic failure. In this paper, we propose a novel approach that makes use of descriptor sharing to generate a single inverse depth scene representation. This representation can be used locally, queried globally to perform loop closure, and has the ability to re-activate previously observed map points after redundant points are marginalized from the local map, eliminating the need for separate map maintenance processes. The maps generated by our method exhibit no drift between each other, and can be computed at a fraction of the computational cost and memory footprint required by other monocular SLAM systems. Despite the reduced resource requirements, the proposed approach maintains its robustness and accuracy, delivering performance comparable to state-of-the-art SLAM methods (e.g., LDSO, ORB-SLAM3) on the majority of sequences from well-known datasets like EuRoC, KITTI, and TUM VI. The source code is available at: https://github.com/AUBVRL/fslam_ros_docker.
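The central mechanism described in the abstract, a single map-point record whose shared descriptor serves both the direct local optimizer and global loop-closure queries, and which can simply be re-activated after being marginalized from the local window, can be pictured with a rough C++ sketch. All type and member names below (SharedMapPoint, SharedMap, reactivate, and the brute-force Hamming query) are hypothetical illustrations of that idea, not the authors' actual API; the released code at https://github.com/AUBVRL/fslam_ros_docker is the authoritative reference.

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// Hypothetical single map-point record (illustrative only). The point stores
// an inverse-depth parameterization anchored in its host keyframe, used by a
// direct photometric local optimizer, alongside a binary descriptor used by
// the indirect pipeline for global queries such as loop closure.
struct SharedMapPoint {
    int   host_keyframe_id = -1;      // keyframe in which the point is anchored
    float u = 0.f, v = 0.f;           // pixel location in the host keyframe
    float inv_depth = 0.f;            // inverse depth along the host ray
    std::bitset<256> descriptor;      // e.g. an ORB-like binary descriptor
    bool  marginalized = false;       // dropped from the local window, kept globally
};

// Hypothetical single map container serving both roles: the local optimizer
// iterates over active points, while loop closure queries descriptors of all
// points (including marginalized ones). A matched marginalized point is
// re-activated rather than re-triangulated into a second, redundant map.
class SharedMap {
public:
    int add(const SharedMapPoint& p) {
        points_.push_back(p);
        return static_cast<int>(points_.size()) - 1;
    }

    void marginalize(int id) { points_[id].marginalized = true; }

    // Re-activation: the point re-enters the local optimization window
    // without any separate global-map maintenance step.
    void reactivate(int id) { points_[id].marginalized = false; }

    // Brute-force Hamming-distance lookup, standing in for whatever indexing
    // (e.g. a bag-of-words database) a real system would use.
    int query(const std::bitset<256>& desc, std::size_t max_dist = 50) const {
        int best = -1;
        std::size_t best_dist = max_dist + 1;
        for (std::size_t i = 0; i < points_.size(); ++i) {
            const std::size_t d = (points_[i].descriptor ^ desc).count();
            if (d < best_dist) { best_dist = d; best = static_cast<int>(i); }
        }
        return best;  // -1 if nothing within max_dist
    }

private:
    std::vector<SharedMapPoint> points_;
};
```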
Journal description:
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems.
Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.