DynaTM-SLAM: Fast filtering of dynamic feature points and object-based localization in dynamic indoor environments

IF 4.3 | CAS Region 2 (Computer Science) | JCR Q1 (Automation & Control Systems)
Meiling Zhong, Chuyuan Hong, Zhaoqian Jia, Chunyu Wang, Zhiguo Wang
Robotics and Autonomous Systems | DOI: 10.1016/j.robot.2024.104634 | Published 2024-02-09
https://www.sciencedirect.com/science/article/pii/S0921889024000174
Citations: 0

Abstract

Numerous advanced simultaneous localization and mapping (SLAM) algorithms have been developed thanks to recent scientific and technological advances. However, their practical applicability in complex real-world scenarios is severely limited by the assumption that the environment is static. Improving the accuracy and robustness of SLAM algorithms in dynamic environments is therefore of paramount importance. A significant amount of research has addressed SLAM in dynamic environments using semantic segmentation or object detection, but a major drawback of these approaches is that they may discard static feature points when movable objects are actually at rest, or retain dynamic feature points when nominally static objects are moved. This paper proposes DynaTM-SLAM, a robust semantic visual SLAM algorithm designed for dynamic environments. DynaTM-SLAM combines object detection and template matching with a sliding window to quickly and efficiently filter out truly dynamic feature points, greatly reducing the impact of dynamic objects. Our approach uses object detection instead of time-consuming semantic segmentation to detect dynamic objects. In addition, an object database is built online, and camera poses, map points, and objects are jointly optimized by imposing semantic constraints on static objects. This fully exploits the semantic information of static objects and refines the accuracy of ego-motion estimation in dynamic environments. Experiments on the TUM RGB-D dataset demonstrate a significant performance improvement in dynamic scenes.
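To make the filtering step concrete: the abstract gives no implementation details, but the sketch below illustrates, in Python with OpenCV, one plausible reading of detection-plus-template-matching over a sliding window. A detected object's bounding-box patch is re-located in subsequent frames with cv2.matchTemplate, and feature points inside boxes whose matched center drifts are discarded. All names and thresholds here (WINDOW, MOTION_THRESH, ObjectTrack) are hypothetical, and ego-motion compensation, which a real system would need to distinguish object motion from camera motion, is omitted.

```python
from collections import deque

import cv2
import numpy as np

WINDOW = 5           # sliding-window length (assumed, not from the paper)
MOTION_THRESH = 8.0  # pixel drift above which an object is treated as dynamic (assumed)

class ObjectTrack:
    """Tracks one detected object's image patch across a sliding window of frames."""

    def __init__(self, box, gray):
        x, y, w, h = box                      # bounding box from the object detector
        self.box = box
        self.template = gray[y:y + h, x:x + w].copy()
        self.centers = deque(maxlen=WINDOW)
        self.centers.append((x + w / 2.0, y + h / 2.0))

    def update(self, gray):
        """Re-locate the stored template in a new frame and record its center."""
        res = cv2.matchTemplate(gray, self.template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(res)   # top-left corner of best match
        h, w = self.template.shape
        self.centers.append((bx + w / 2.0, by + h / 2.0))

    def is_dynamic(self):
        """Flag the object as dynamic if its matched center drifts across the window.

        NOTE: a real system must first compensate drift induced by camera motion.
        """
        if len(self.centers) < 2:
            return False
        pts = np.asarray(self.centers)
        return float(np.linalg.norm(pts[-1] - pts[0])) > MOTION_THRESH

def filter_keypoints(keypoints, tracks):
    """Keep only feature points that fall outside the boxes of dynamic objects."""
    static = []
    for kp in keypoints:
        u, v = kp.pt
        in_dynamic_box = any(
            t.is_dynamic()
            and t.box[0] <= u <= t.box[0] + t.box[2]
            and t.box[1] <= v <= t.box[1] + t.box[3]
            for t in tracks
        )
        if not in_dynamic_box:
            static.append(kp)
    return static
```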
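The abstract also states that camera poses, map points, and objects are jointly optimized under semantic constraints on static objects, without giving the formulation. As a rough sketch only, the snippet below treats each static object as an extra 3-D landmark whose detected box center contributes a reprojection residual alongside the ordinary map-point residuals; the intrinsics K, the parameter layout, and the residual design are assumptions, not the paper's actual factor graph.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

# Pinhole intrinsics (example values for a TUM-like camera; an assumption).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def project(pose, X):
    """Project 3-D point X with pose = (axis-angle rotation, translation)."""
    R, _ = cv2.Rodrigues(pose[:3])
    uvw = K @ (R @ X + pose[3:])
    return uvw[:2] / uvw[2]

def residuals(params, n_pts, point_obs, object_obs):
    """Stack point reprojection errors with object-center (semantic) constraints.

    params    = [pose(6) | map points (3*n_pts) | object centers (3*n_objs)]
    point_obs, object_obs = lists of (landmark index, measured pixel) pairs.
    """
    pose = params[:6]
    pts = params[6:6 + 3 * n_pts].reshape(-1, 3)
    objs = params[6 + 3 * n_pts:].reshape(-1, 3)
    r = []
    for i, uv in point_obs:
        r.extend(project(pose, pts[i]) - uv)    # ordinary map-point residual
    for j, uv in object_obs:
        r.extend(project(pose, objs[j]) - uv)   # semantic constraint on a static object
    return np.asarray(r)

# Usage sketch: x0 stacks an initial pose, map points, and object centers.
# x0 = np.hstack([pose0, map_points0.ravel(), object_centers0.ravel()])
# sol = least_squares(residuals, x0, args=(len(map_points0), point_obs, object_obs))
```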

Source journal: Robotics and Autonomous Systems (Engineering & Technology, Robotics)
CiteScore: 9.00
Self-citation rate: 7.00%
Articles per year: 164
Review time: 4.5 months
Journal description: Robotics and Autonomous Systems carries articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of the journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems, covering the theoretical, computational, and experimental aspects of autonomous systems or modules of such systems.