Accurate and Robust RGB-D Dense Mapping with Inertial Fusion and Deformation-Graph Optimization

Yong Liu, Liming Bao, Chaofan Zhang, Wen Zhang, Yingwei Xia
2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)
DOI: 10.1109/ICTAI.2019.00249
Published: 2019-11-01
Citations: 1

Abstract

RGB-D dense mapping has become increasingly popular; however, under rapid motion or shaking, the robustness and accuracy of most RGB-D dense mapping methods degrade, and the generated maps overlap or distort due to drift in pose estimation. In this paper, we present a novel RGB-D dense mapping method that produces an accurate, robust, and globally consistent map even under such complex conditions. First, an improved ORB-SLAM method, which tightly couples RGB-D and inertial information to estimate the current robot pose, is introduced for accurate pose estimation, in place of the traditional frame-to-frame approach used in most RGB-D dense mapping methods. In addition, the TSDF (Truncated Signed Distance Function) method is used to fuse each depth frame into a global model and to maintain the global consistency of the generated map. Furthermore, since drift error is inevitable, a deformation graph is constructed to minimize the consistency error in the global model, further improving mapping performance. The proposed method was validated by extensive localization and mapping experiments on public datasets and real-scene datasets, showing stronger accuracy and robustness than other state-of-the-art methods. Moreover, the proposed method achieves real-time performance when implemented on a GPU.
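The abstract's TSDF fusion step refers to the standard weighted running-average voxel update used in most TSDF pipelines. The paper itself gives no implementation details here, so the following is only a minimal per-voxel sketch of that general technique; the truncation distance, weight cap, and function names are illustrative assumptions, not the authors' values.

```python
import numpy as np

TRUNC = 0.05   # truncation distance in metres (illustrative, not from the paper)
W_MAX = 100.0  # cap on the accumulated voxel weight (illustrative)

def truncated_sdf(depth_measured, voxel_depth):
    """Signed distance from a voxel to the observed surface along the ray,
    normalized by the truncation distance and clamped to [-1, 1]."""
    sdf = (depth_measured - voxel_depth) / TRUNC
    return float(np.clip(sdf, -1.0, 1.0))

def tsdf_update(tsdf, weight, sdf_obs, w_obs=1.0):
    """Fuse one new observation into a voxel as a weighted running average."""
    new_tsdf = (tsdf * weight + sdf_obs * w_obs) / (weight + w_obs)
    new_weight = min(weight + w_obs, W_MAX)
    return new_tsdf, new_weight
```

Repeating `tsdf_update` for every depth frame that observes a voxel is what keeps the fused global model consistent: each voxel converges to an average of all its observations instead of being overwritten by the latest frame.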