ENCODE: a dEep poiNt Cloud ODometry nEtwork

Yihuan Zhang, Liangbo Wang, Chen Fu, Yifan Dai, J. Dolan
DOI: 10.1109/ICRA48506.2021.9562024
Published in: 2021 IEEE International Conference on Robotics and Automation (ICRA)
Publication date: 2021-05-30
Citations: 0

Abstract

Ego-motion estimation is a key requirement for the simultaneous localization and mapping (SLAM) problem. The traditional pipeline goes through feature extraction, feature matching, and pose estimation, and its performance depends on manually designed features. In this paper, motivated by the strong performance of deep learning methods in other computer vision and robotics tasks, we replace hand-crafted features with a neural network and directly estimate the relative pose between two adjacent scans from a LiDAR sensor using ENCODE: a dEep poiNt Cloud ODometry nEtwork. First, a spherical projection of the input point cloud is performed to acquire a multi-channel vertex map. Then a multi-layer network backbone is applied to learn abstracted features, and a fully connected layer estimates the 6-DoF ego-motion. Additionally, a map-to-map optimization module updates the local poses and outputs a smooth map. Experiments on multiple datasets demonstrate that the proposed method achieves the best performance in comparison to state-of-the-art methods and is capable of providing accurate poses with low drift in a wide range of scenarios.
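The first stage of the pipeline, projecting a raw LiDAR scan onto a multi-channel vertex map via spherical projection, can be sketched as follows. This is a minimal NumPy sketch of the standard range-image projection, not the paper's implementation: the channel layout (x, y, z, range), image resolution, and vertical field-of-view defaults (typical for a 64-beam sensor) are assumptions, since the abstract does not specify them.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w, 4) vertex map.

    Channels: x, y, z, range. fov_up/fov_down are the sensor's vertical
    field of view in degrees (assumed values for a 64-beam LiDAR).
    """
    pts = np.asarray(points, dtype=np.float64)
    r = np.linalg.norm(pts, axis=1)
    valid = r > 0
    x, y, z, r = pts[valid, 0], pts[valid, 1], pts[valid, 2], r[valid]

    yaw = np.arctan2(y, x)      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)    # elevation

    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    # Azimuth -> column, elevation -> row, both normalized to [0, 1].
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down_r) / fov) * h

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)

    # If several points fall into one pixel, the last write wins here;
    # real pipelines typically keep the closest point instead.
    vertex_map = np.zeros((h, w, 4), dtype=np.float32)
    vertex_map[v, u, 0] = x
    vertex_map[v, u, 1] = y
    vertex_map[v, u, 2] = z
    vertex_map[v, u, 3] = r
    return vertex_map
```

Two such vertex maps (from adjacent scans) would then be stacked and fed to the network backbone, whose fully connected head regresses the 6-DoF relative pose.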