Toward Accurate, Efficient, and Robust RGB-D Simultaneous Localization and Mapping in Challenging Environments

IF 10.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Robotics)
Hui Zhao;Fuqiang Gu;Jianga Shang;Xianlei Long;Jiarui Dou;Chao Chen;Huayan Pu;Jun Luo
{"title":"在具有挑战性的环境中实现准确,高效和稳健的RGB-D同步定位和绘图","authors":"Hui Zhao;Fuqiang Gu;Jianga Shang;Xianlei Long;Jiarui Dou;Chao Chen;Huayan Pu;Jun Luo","doi":"10.1109/TRO.2025.3610173","DOIUrl":null,"url":null,"abstract":"Visual simultaneous localization and mapping (SLAM) is crucial to many applications such as self-driving vehicles and robot tasks. However, it is still challenging for existing visual SLAM approaches to achieve good performance in low-texture or illumination-changing scenes. In recent years, some researchers have turned to edge-based SLAM approaches to deal with the challenging scenes, which are more robust than feature-based and direct SLAM methods. Nevertheless, existing edge-based methods are computationally expensive and inferior than other visual SLAM systems in terms of accuracy. In this study, we propose EdgeSLAM, a novel RGB-D edge-based SLAM approach to deal with challenging scenarios that is efficient, accurate, and robust. EdgeSLAM is built on two innovative modules: efficient edge selection and adaptive robust motion estimation. The edge selection module can efficiently select a small set of edge pixels, which significantly improves the computational efficiency without sacrificing the accuracy. The motion estimation module improves the system’s accuracy and robustness by adaptively handling outliers in motion estimation. Extensive experiments were conducted on technical university of munich (TUM) RGBD, imperial college london (ICL)-National University of Ireland Maynooth (NUIM), and ETH zurich 3D reconstruction (ETH3D) datasets, and experimental results show that EdgeSLAM significantly outperforms five state-of-the-art methods in terms of efficiency, accuracy, and robustness, which achieves 29.17% accuracy improvements with a high processing speed of up to 120 frames/s and a high positioning success rate of 97.06%.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"5720-5739"},"PeriodicalIF":10.5000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Toward Accurate, Efficient, and Robust RGB-D Simultaneous Localization and Mapping in Challenging Environments\",\"authors\":\"Hui Zhao;Fuqiang Gu;Jianga Shang;Xianlei Long;Jiarui Dou;Chao Chen;Huayan Pu;Jun Luo\",\"doi\":\"10.1109/TRO.2025.3610173\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual simultaneous localization and mapping (SLAM) is crucial to many applications such as self-driving vehicles and robot tasks. However, it is still challenging for existing visual SLAM approaches to achieve good performance in low-texture or illumination-changing scenes. In recent years, some researchers have turned to edge-based SLAM approaches to deal with the challenging scenes, which are more robust than feature-based and direct SLAM methods. Nevertheless, existing edge-based methods are computationally expensive and inferior than other visual SLAM systems in terms of accuracy. In this study, we propose EdgeSLAM, a novel RGB-D edge-based SLAM approach to deal with challenging scenarios that is efficient, accurate, and robust. EdgeSLAM is built on two innovative modules: efficient edge selection and adaptive robust motion estimation. The edge selection module can efficiently select a small set of edge pixels, which significantly improves the computational efficiency without sacrificing the accuracy. 
The motion estimation module improves the system’s accuracy and robustness by adaptively handling outliers in motion estimation. Extensive experiments were conducted on technical university of munich (TUM) RGBD, imperial college london (ICL)-National University of Ireland Maynooth (NUIM), and ETH zurich 3D reconstruction (ETH3D) datasets, and experimental results show that EdgeSLAM significantly outperforms five state-of-the-art methods in terms of efficiency, accuracy, and robustness, which achieves 29.17% accuracy improvements with a high processing speed of up to 120 frames/s and a high positioning success rate of 97.06%.\",\"PeriodicalId\":50388,\"journal\":{\"name\":\"IEEE Transactions on Robotics\",\"volume\":\"41 \",\"pages\":\"5720-5739\"},\"PeriodicalIF\":10.5000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Robotics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11165034/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Robotics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11165034/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

Visual simultaneous localization and mapping (SLAM) is crucial to many applications such as self-driving vehicles and robotic tasks. However, it is still challenging for existing visual SLAM approaches to achieve good performance in low-texture or illumination-changing scenes. In recent years, some researchers have turned to edge-based SLAM approaches to handle such challenging scenes, as they are more robust than feature-based and direct SLAM methods. Nevertheless, existing edge-based methods are computationally expensive and inferior to other visual SLAM systems in terms of accuracy. In this study, we propose EdgeSLAM, a novel RGB-D edge-based SLAM approach for challenging scenarios that is efficient, accurate, and robust. EdgeSLAM is built on two innovative modules: efficient edge selection and adaptive robust motion estimation. The edge selection module efficiently selects a small set of edge pixels, which significantly improves computational efficiency without sacrificing accuracy. The motion estimation module improves the system's accuracy and robustness by adaptively handling outliers in motion estimation. Extensive experiments were conducted on the Technical University of Munich (TUM) RGB-D, Imperial College London (ICL)–National University of Ireland Maynooth (NUIM), and ETH Zurich 3D Reconstruction (ETH3D) datasets. The results show that EdgeSLAM significantly outperforms five state-of-the-art methods in terms of efficiency, accuracy, and robustness, achieving a 29.17% accuracy improvement, a processing speed of up to 120 frames/s, and a positioning success rate of 97.06%.
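The abstract does not give implementation details, but the two ideas it names can be illustrated with a minimal, self-contained sketch: (1) keeping only a small budget of high-gradient edge pixels, and (2) estimating motion with an iteratively reweighted least-squares loop that down-weights outliers via a robust (Huber) kernel. Everything below is a hypothetical simplification in 2-D with made-up function names and thresholds; it is not the authors' EdgeSLAM code, which works on full SE(3) motion from RGB-D edge residuals, and its "adaptive" outlier handling presumably goes beyond the fixed kernel scale used here.

import numpy as np


def select_edge_pixels(gray, budget=500, grad_thresh=30.0):
    """Keep only the strongest-gradient pixels, capped at a fixed budget."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_thresh)            # candidate edge pixels
    if len(xs) > budget:                              # keep the strongest ones
        keep = np.argsort(mag[ys, xs])[-budget:]
        ys, xs = ys[keep], xs[keep]
    return np.stack([xs, ys], axis=1)                 # (N, 2) pixel coordinates


def estimate_rigid_2d(src, dst, iters=20, huber_delta=1.0):
    """Huber-weighted IRLS fit of a 2-D rotation + translation (src -> dst)."""
    theta, t = 0.0, np.zeros(2)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        resid = dst - (src @ R.T + t)                 # per-point residuals
        err = np.linalg.norm(resid, axis=1)
        # Huber weights: inliers keep weight 1, large residuals are down-weighted
        w = np.where(err <= huber_delta, 1.0, huber_delta / np.maximum(err, 1e-12))
        # weighted Procrustes/Kabsch update of the rigid transform
        mu_s = np.average(src, axis=0, weights=w)
        mu_d = np.average(dst, axis=0, weights=w)
        H = ((src - mu_s) * w[:, None]).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:             # keep a proper rotation
            Vt[-1] *= -1
        R = Vt.T @ U.T
        theta = np.arctan2(R[1, 0], R[0, 0])
        t = mu_d - R @ mu_s
    return theta, t


if __name__ == "__main__":
    # Edge selection on a synthetic image with a single vertical step edge.
    img = np.zeros((64, 64))
    img[:, 32:] = 255.0
    print(f"selected {len(select_edge_pixels(img, budget=50))} edge pixels")

    # Robust alignment of two point sets with 10% gross outliers.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(300, 2))
    c, s = np.cos(0.05), np.sin(0.05)
    moved = pts @ np.array([[c, -s], [s, c]]).T + np.array([2.0, -1.0])
    moved[:30] += rng.uniform(-40, 40, size=(30, 2))  # inject gross outliers
    theta, t = estimate_rigid_2d(pts, moved)
    print(f"estimated rotation {theta:.3f} rad, translation {t}")

The design point the sketch tries to mirror is that a hard pixel budget keeps the per-frame cost bounded, while the robust reweighting lets gross outliers influence the estimate far less than inliers without an explicit rejection threshold.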
Source journal
IEEE Transactions on Robotics (Engineering & Technology – Robotics)
CiteScore: 14.90
Self-citation rate: 5.10%
Articles per year: 259
Review time: 6.0 months
Journal description: The IEEE Transactions on Robotics (T-RO) is dedicated to publishing fundamental papers covering all facets of robotics, drawing on interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, and beyond. From industrial applications to service and personal assistants, surgical operations to space, underwater, and remote exploration, robots and intelligent machines play pivotal roles across various domains, including entertainment, safety, search and rescue, military applications, agriculture, and intelligent vehicles. Special emphasis is placed on intelligent machines and systems designed for unstructured environments, where a significant portion of the environment remains unknown and beyond direct sensing or control.