Performance Comparison of Different Visual Odometry Architectures to Improve ATDN vSLAM

György Richárd Bogár, M. Szántó, Marton Szemenyei
{"title":"不同视觉里程计架构改进ATDN vSLAM的性能比较","authors":"György Richárd Bogár, M. Szántó, Marton Szemenyei","doi":"10.1109/ELECS55825.2022.00035","DOIUrl":null,"url":null,"abstract":"In this paper an incremental development of an Optical Flow-based Visual Odometry Neural Network is introduced. This solution was developed as part of the ATDN vSLAM project which aims to solve the Visual Simultaneous Localization And Mapping (vSLAM) task using All-Through Deep Neural networks (ATDN). In our current work we propose a performance comparison study of odometry network structures. For input, we use optical flow data instead of RGB as the latter provides low level information in comparison with the former and we aim to build a SLAM framework with building blocks that accept and produce data on different abstraction levels. We present the main developments made in the network structure and network training technique since the ones described in the original ATDN vSLAM paper. These changes are made to encourage the odometry module to properly generalize and provide more precise estimations.","PeriodicalId":320259,"journal":{"name":"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance Comparison of Different Visual Odometry Architectures to Improve ATDN vSLAM\",\"authors\":\"György Richárd Bogár, M. Szántó, Marton Szemenyei\",\"doi\":\"10.1109/ELECS55825.2022.00035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper an incremental development of an Optical Flow-based Visual Odometry Neural Network is introduced. This solution was developed as part of the ATDN vSLAM project which aims to solve the Visual Simultaneous Localization And Mapping (vSLAM) task using All-Through Deep Neural networks (ATDN). In our current work we propose a performance comparison study of odometry network structures. For input, we use optical flow data instead of RGB as the latter provides low level information in comparison with the former and we aim to build a SLAM framework with building blocks that accept and produce data on different abstraction levels. We present the main developments made in the network structure and network training technique since the ones described in the original ATDN vSLAM paper. 
These changes are made to encourage the odometry module to properly generalize and provide more precise estimations.\",\"PeriodicalId\":320259,\"journal\":{\"name\":\"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)\",\"volume\":\"77 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ELECS55825.2022.00035\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ELECS55825.2022.00035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper, an incremental development of an Optical Flow-based Visual Odometry Neural Network is introduced. This solution was developed as part of the ATDN vSLAM project, which aims to solve the Visual Simultaneous Localization And Mapping (vSLAM) task using All-Through Deep Neural Networks (ATDN). In our current work, we propose a performance comparison study of odometry network structures. For input, we use optical flow data instead of RGB, as the latter provides lower-level information than the former, and we aim to build a SLAM framework from building blocks that accept and produce data on different abstraction levels. We present the main developments made in the network structure and network training technique since those described in the original ATDN vSLAM paper. These changes are made to encourage the odometry module to generalize properly and provide more precise estimations.
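
The abstract describes feeding optical flow, rather than raw RGB frames, into a neural odometry module. As a minimal sketch only, and not the authors' ATDN architecture, the example below computes dense Farneback optical flow between two consecutive grayscale frames with OpenCV and passes the flow field to a small, hypothetical convolutional regressor that outputs a 6-DoF relative pose estimate; `OdometryNet` and `estimate_relative_pose` are illustrative names that do not appear in the paper.

```python
import cv2
import torch
import torch.nn as nn


class OdometryNet(nn.Module):
    """Hypothetical flow-to-pose regressor (illustrative, not the ATDN network).

    Input:  dense optical flow field, shape (B, 2, H, W).
    Output: 6-DoF relative pose per frame pair, shape (B, 6): [tx, ty, tz, roll, pitch, yaw].
    """

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)),  # fixed-size features regardless of input resolution
        )
        self.head = nn.Linear(128 * 4 * 8, 6)

    def forward(self, flow):
        return self.head(self.encoder(flow).flatten(1))


def estimate_relative_pose(model, frame_prev, frame_next):
    """Estimate relative camera pose from two consecutive grayscale (uint8) frames."""
    # Classical dense optical flow as a stand-in for whatever flow source the framework uses.
    flow = cv2.calcOpticalFlowFarneback(
        frame_prev, frame_next, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # (H, W, 2) numpy array -> (1, 2, H, W) tensor
    flow_t = torch.from_numpy(flow).permute(2, 0, 1).unsqueeze(0).float()
    with torch.no_grad():
        return model(flow_t)
```

In the actual framework, the odometry network structure is precisely what the paper compares; this sketch only illustrates the interface implied by the abstract, with a 2-channel flow field going in and a 6-DoF relative pose coming out, so that surrounding SLAM blocks can exchange data at that abstraction level.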