Hui Zhao; Fuqiang Gu; Jianga Shang; Xianlei Long; Jiarui Dou; Chao Chen; Huayan Pu; Jun Luo

Title: Toward Accurate, Efficient, and Robust RGB-D Simultaneous Localization and Mapping in Challenging Environments
DOI: 10.1109/TRO.2025.3610173
Journal: IEEE Transactions on Robotics, vol. 41, pp. 5720-5739 (Q1, Robotics)
Published: 2025-09-16
URL: https://ieeexplore.ieee.org/document/11165034/
Citations: 0
Abstract
Visual simultaneous localization and mapping (SLAM) is crucial to many applications such as self-driving vehicles and robotic tasks. However, it is still challenging for existing visual SLAM approaches to achieve good performance in low-texture or illumination-changing scenes. In recent years, some researchers have turned to edge-based SLAM approaches, which are more robust than feature-based and direct methods, to deal with these challenging scenes. Nevertheless, existing edge-based methods are computationally expensive and inferior to other visual SLAM systems in terms of accuracy. In this study, we propose EdgeSLAM, a novel RGB-D edge-based SLAM approach that is efficient, accurate, and robust in challenging scenarios. EdgeSLAM is built on two innovative modules: efficient edge selection and adaptive robust motion estimation. The edge selection module efficiently selects a small set of edge pixels, which significantly improves computational efficiency without sacrificing accuracy. The motion estimation module improves the system's accuracy and robustness by adaptively handling outliers during motion estimation. Extensive experiments were conducted on the Technical University of Munich (TUM) RGB-D, Imperial College London (ICL)-National University of Ireland Maynooth (NUIM), and ETH Zurich 3D reconstruction (ETH3D) datasets. The results show that EdgeSLAM significantly outperforms five state-of-the-art methods in terms of efficiency, accuracy, and robustness, achieving a 29.17% accuracy improvement with a processing speed of up to 120 frames/s and a positioning success rate of 97.06%.
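The abstract does not detail how the two modules work internally, but their general ideas can be illustrated. The sketch below is a generic, hypothetical illustration, not the paper's actual algorithm: edge selection is approximated by gradient-magnitude thresholding followed by uniform subsampling, and outlier handling in motion estimation is approximated by Huber weights of the kind used in iteratively reweighted least squares (IRLS). The function names, thresholds, and the fixed Huber delta are all assumptions for illustration only.

```python
import numpy as np

def select_edge_pixels(gray, grad_thresh=30.0, max_pixels=500):
    """Select a sparse set of high-gradient (edge) pixels from a grayscale image.

    Hypothetical sketch of "efficient edge selection": keep pixels whose
    gradient magnitude exceeds a threshold, then uniformly subsample so
    that at most max_pixels survive. The paper's real strategy may differ.
    """
    # np.gradient returns per-axis derivatives: rows first, then columns.
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_thresh)
    if len(ys) > max_pixels:
        # Uniform subsampling keeps the budget fixed regardless of scene texture.
        idx = np.linspace(0, len(ys) - 1, max_pixels).astype(int)
        ys, xs = ys[idx], xs[idx]
    return np.stack([ys, xs], axis=1)  # (N, 2) array of (row, col) pairs

def huber_weights(residuals, delta=1.345):
    """IRLS weights from the Huber loss: residuals beyond delta are down-weighted.

    In a robust motion-estimation loop, these weights would multiply each
    edge pixel's photometric/geometric residual so outliers contribute less.
    """
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w
```

In an edge-based front end, a loop of this kind would typically alternate between computing residuals for the selected edge pixels under the current pose estimate and re-solving the pose with the updated weights until convergence; an "adaptive" variant, as the abstract suggests, would additionally tune the robust-loss parameter per frame rather than fixing delta.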
Journal Introduction:
The IEEE Transactions on Robotics (T-RO) is dedicated to publishing fundamental papers covering all facets of robotics, drawing on interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, and beyond. From industrial applications to service and personal assistants, surgical operations to space, underwater, and remote exploration, robots and intelligent machines play pivotal roles across various domains, including entertainment, safety, search and rescue, military applications, agriculture, and intelligent vehicles.
Special emphasis is placed on intelligent machines and systems designed for unstructured environments, where a significant portion of the environment remains unknown and beyond direct sensing or control.