A Hybrid Sparse-Dense Monocular SLAM System for Autonomous Driving

Louis Gallagher, V. Kumar, S. Yogamani, J. McDonald
2021 European Conference on Mobile Robots (ECMR), August 2021
DOI: 10.1109/ecmr50962.2021.9568797
Citations: 12

Abstract

In this paper, we present a system for incrementally reconstructing a dense 3D model of the geometry of an outdoor environment using a single monocular camera attached to a moving vehicle. Dense models provide a rich representation of the environment, facilitating higher-level scene understanding, perception, and planning. Our system employs dense depth prediction within a hybrid mapping architecture that combines state-of-the-art sparse feature-based and dense fusion-based visual SLAM algorithms in an integrated framework. Our novel contributions include the design of hybrid sparse-dense camera tracking and loop closure, and improvements to scale estimation in dense depth prediction. We use the motion estimates from the sparse method to overcome the large and variable inter-frame displacement typical of outdoor vehicle scenarios. Our system then registers the live image with the dense model using whole-image alignment, enabling the fusion of the live frame and its dense depth prediction into the model. Global consistency and alignment between the sparse and dense models are achieved by applying pose constraints from the sparse method directly within the deformation of the dense model. We provide qualitative and quantitative results for both trajectory estimation and surface reconstruction accuracy, demonstrating competitive performance on the KITTI dataset. Qualitative results of the proposed approach are illustrated in the video at https://youtu.be/Pn2uaVqjskY. Source code for the project is publicly available at https://github.com/robotvisionmu/DenseMonoSLAM.
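The abstract mentions scale-estimation improvements for the dense depth prediction but does not detail them. A common way to bring a monocular depth prediction onto the sparse map's metric scale is per-frame median-ratio alignment against triangulated landmark depths from the sparse SLAM front end. The sketch below illustrates that general idea only; the function name, array layout, and median-ratio rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def align_depth_scale(pred_depth, sparse_depths, sparse_uv):
    """Rescale a monocular depth prediction using sparse SLAM landmark depths.

    pred_depth    : (H, W) predicted depth map (arbitrary scale)
    sparse_depths : (N,) metric depths of triangulated landmarks
    sparse_uv     : (N, 2) integer pixel coordinates (u, v) of those landmarks
    """
    # Sample the predicted depth at the sparse feature locations.
    u, v = sparse_uv[:, 0], sparse_uv[:, 1]
    pred_at_features = pred_depth[v, u]

    # Keep only valid, positive correspondences.
    valid = (pred_at_features > 0) & (sparse_depths > 0)

    # Robust per-frame scale: median of the per-landmark depth ratios.
    scale = np.median(sparse_depths[valid] / pred_at_features[valid])
    return scale * pred_depth, scale
```

The median of ratios is preferred over a least-squares fit here because monocular depth predictions typically contain gross outliers at object boundaries.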
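Fusing the live frame and its depth prediction into the dense model can be sketched as confidence-weighted running-average depth fusion, in the spirit of the surfel/TSDF-style fusion used by dense SLAM systems. The minimal sketch below (hypothetical names; not the paper's actual fusion rule) shows the update for a single depth map and its accumulated weight.

```python
def fuse_depth(model_depth, model_weight, new_depth, new_weight=1.0):
    """Weighted running-average fusion of a new depth estimate into the model.

    A simplified stand-in for surfel/TSDF-style fusion: each fused depth is
    the confidence-weighted average of all observations integrated so far.
    Inputs may be scalars or element-wise-compatible arrays.
    """
    total = model_weight + new_weight
    fused = (model_depth * model_weight + new_depth * new_weight) / total
    return fused, total
```

Repeated application of this update averages out per-frame prediction noise, which is one reason fusion-based dense mapping tolerates imperfect single-frame depth.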