SCP-SLAM: Accelerating DynaSLAM With Static Confidence Propagation

Ming-Fei Yu, Lei Zhang, Wu-Fan Wang, Jiahui Wang
DOI: 10.1109/VR55154.2023.00066
Published in: 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR), March 2023

Abstract

DynaSLAM is a state-of-the-art visual simultaneous localization and mapping (SLAM) system for dynamic environments. It adopts a convolutional neural network (CNN) for moving-object detection, but it usually incurs a very high computational cost because it performs semantic segmentation with the CNN on every frame. This paper proposes SCP-SLAM, which accelerates DynaSLAM by running the CNN only on keyframes and, in parallel, propagating static confidence through the remaining frames. The proposed static confidence characterizes moving-object features by the residual of the inter-frame geometric transformation, which can be computed quickly. Our method combines the effectiveness of a CNN with the efficiency of static confidence in a tightly coupled manner. Extensive experiments on the publicly available TUM and Bonn RGB-D dynamic benchmark datasets demonstrate the efficacy of the method. Compared with DynaSLAM, it achieves an average tenfold speedup while retaining comparable localization accuracy.
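The abstract does not give the exact formulation, but the core idea it describes (scoring each feature by how well it agrees with the inter-frame geometric transformation, and blending that score into a per-feature static confidence) can be sketched roughly as follows. All function names, the Gaussian residual kernel, and the smoothing factor `alpha` are illustrative assumptions, not the paper's actual definitions:

```python
import numpy as np

def warp_points(pts, depths, K, T):
    """Project 2D features (with known depth) from the previous frame
    into the current frame using the estimated inter-frame transform T (4x4)."""
    n = pts.shape[0]
    ones = np.ones((n, 1))
    pix_h = np.hstack([pts, ones])                  # homogeneous pixel coords (N, 3)
    rays = (np.linalg.inv(K) @ pix_h.T).T           # back-project to normalized rays
    P_prev = rays * depths[:, None]                 # 3D points in previous camera frame
    P_cur = (T @ np.hstack([P_prev, ones]).T).T[:, :3]
    proj = (K @ P_cur.T).T                          # re-project into current frame
    return proj[:, :2] / proj[:, 2:3]

def propagate_static_confidence(conf_prev, pts_prev, pts_cur, depths, K, T,
                                sigma=2.0, alpha=0.7):
    """Update per-feature static confidence from the geometric residual:
    features consistent with the camera motion keep high confidence,
    while features on moving objects are penalized."""
    predicted = warp_points(pts_prev, depths, K, T)
    residual = np.linalg.norm(predicted - pts_cur, axis=1)  # reprojection error (px)
    likelihood = np.exp(-0.5 * (residual / sigma) ** 2)     # Gaussian static likelihood
    # Exponential smoothing propagates confidence across non-keyframes.
    return alpha * conf_prev + (1.0 - alpha) * likelihood
```

Under this sketch, a feature whose observed position matches the warp keeps confidence near 1, while a feature displaced by a large residual (e.g. on a moving person) decays toward the penalized value; the CNN segmentation on the next keyframe would then reset the confidences.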