Visual Inertial SLAM Based on Spatiotemporal Consistency Optimization in Diverse Environments

IF 4.2 · CAS Region 2 (Computer Science) · JCR Q2 (Robotics)
Huayan Pu, Jun Luo, Gang Wang, Tao Huang, Lang Wu, Dengyu Xiao, Hongliang Liu, Jun Luo
DOI: 10.1002/rob.22487
Journal: Journal of Field Robotics, 42(3), pp. 679–696
Publication date: 2024-12-16
URL: https://onlinelibrary.wiley.com/doi/10.1002/rob.22487
Citations: 0

Abstract

Visual Inertial SLAM Based on Spatiotemporal Consistency Optimization in Diverse Environments

Currently, the majority of robots equipped with visual-based simultaneous localization and mapping (SLAM) systems exhibit good performance in static environments. However, practical scenarios often present dynamic objects, rendering the environment less than entirely “static.” Diverse dynamic objects within the environment pose substantial challenges to the precision of visual SLAM systems. To address this challenge, we propose a real-time visual inertial SLAM system that extensively leverages objects within the environment. First, we reject regions corresponding to dynamic objects. Following this, geometric constraints are applied within the stationary object regions to refine the mask of static areas, thereby facilitating the extraction of more stable feature points. Second, static landmarks are constructed based on the static regions. A spatiotemporal factor graph is then created by combining the temporal information from the Inertial Measurement Unit (IMU) with the semantic information from the static landmarks. Finally, we perform a diverse set of validation experiments on the proposed system, encompassing challenging scenarios from publicly available benchmarks and the real world. Within these experimental scenarios, we compare with state-of-the-art approaches. More specifically, our system achieved a more than 40% accuracy improvement over the baseline method on these data sets. The results demonstrate that our proposed method exhibits outstanding robustness and accuracy not only in complex dynamic environments but also in static environments.
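The first stage of the pipeline described in the abstract — rejecting feature points that fall inside detected dynamic-object regions so that only static-area features feed the estimator — can be sketched in minimal form. This is an illustrative sketch only, not the authors' implementation; the function names and the bounding-box representation of dynamic regions are assumptions for the example (the paper's actual masks are refined with geometric constraints).

```python
def point_in_box(pt, box):
    """Return True if pixel pt=(u, v) lies inside box=(u_min, v_min, u_max, v_max)."""
    u, v = pt
    u0, v0, u1, v1 = box
    return u0 <= u <= u1 and v0 <= v <= v1

def filter_static_features(features, dynamic_boxes):
    """Keep only feature points that fall outside every dynamic-object region."""
    return [pt for pt in features
            if not any(point_in_box(pt, box) for box in dynamic_boxes)]

# Example: features landing inside a detected moving object's box are rejected.
features = [(10, 10), (50, 60), (120, 40), (200, 200)]
dynamic_boxes = [(40, 30, 130, 100)]  # e.g. a detected moving person
static = filter_static_features(features, dynamic_boxes)
print(static)  # -> [(10, 10), (200, 200)]
```

The surviving static features would then seed the static landmarks that, together with IMU preintegration terms, populate the spatiotemporal factor graph.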

Source journal
Journal of Field Robotics (Engineering & Technology – Robotics)
CiteScore: 15.00
Self-citation rate: 3.60%
Articles per year: 80
Review time: 6 months
Journal description: The Journal of Field Robotics seeks to promote scholarly publications dealing with the fundamentals of robotics in unstructured and dynamic environments. The Journal focuses on experimental robotics and encourages publication of work that has both theoretical and practical significance.