{"title":"不同视觉里程计架构改进ATDN vSLAM的性能比较","authors":"György Richárd Bogár, M. Szántó, Marton Szemenyei","doi":"10.1109/ELECS55825.2022.00035","DOIUrl":null,"url":null,"abstract":"In this paper an incremental development of an Optical Flow-based Visual Odometry Neural Network is introduced. This solution was developed as part of the ATDN vSLAM project which aims to solve the Visual Simultaneous Localization And Mapping (vSLAM) task using All-Through Deep Neural networks (ATDN). In our current work we propose a performance comparison study of odometry network structures. For input, we use optical flow data instead of RGB as the latter provides low level information in comparison with the former and we aim to build a SLAM framework with building blocks that accept and produce data on different abstraction levels. We present the main developments made in the network structure and network training technique since the ones described in the original ATDN vSLAM paper. These changes are made to encourage the odometry module to properly generalize and provide more precise estimations.","PeriodicalId":320259,"journal":{"name":"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance Comparison of Different Visual Odometry Architectures to Improve ATDN vSLAM\",\"authors\":\"György Richárd Bogár, M. Szántó, Marton Szemenyei\",\"doi\":\"10.1109/ELECS55825.2022.00035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper an incremental development of an Optical Flow-based Visual Odometry Neural Network is introduced. This solution was developed as part of the ATDN vSLAM project which aims to solve the Visual Simultaneous Localization And Mapping (vSLAM) task using All-Through Deep Neural networks (ATDN). In our current work we propose a performance comparison study of odometry network structures. For input, we use optical flow data instead of RGB as the latter provides low level information in comparison with the former and we aim to build a SLAM framework with building blocks that accept and produce data on different abstraction levels. We present the main developments made in the network structure and network training technique since the ones described in the original ATDN vSLAM paper. 
These changes are made to encourage the odometry module to properly generalize and provide more precise estimations.\",\"PeriodicalId\":320259,\"journal\":{\"name\":\"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)\",\"volume\":\"77 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ELECS55825.2022.00035\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th European Conference on Electrical Engineering & Computer Science (ELECS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ELECS55825.2022.00035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance Comparison of Different Visual Odometry Architectures to Improve ATDN vSLAM
In this paper, an incremental development of an Optical Flow-based Visual Odometry Neural Network is introduced. This solution was developed as part of the ATDN vSLAM project, which aims to solve the Visual Simultaneous Localization and Mapping (vSLAM) task using All-Through Deep Neural Networks (ATDN). In our current work, we present a performance comparison study of odometry network structures. For input, we use optical flow data instead of RGB images, as the latter provides lower-level information than the former, and we aim to build a SLAM framework from building blocks that accept and produce data at different abstraction levels. We present the main developments made to the network structure and the network training technique since those described in the original ATDN vSLAM paper. These changes are made to encourage the odometry module to generalize properly and to provide more precise estimates.
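To illustrate the kind of building block the abstract describes, the sketch below shows a minimal optical flow-based odometry network in PyTorch: a small CNN encoder over a two-channel flow field followed by a regression head that outputs a relative pose. The class name FlowOdometryNet, the layer sizes, and the 6-DoF pose parameterization are illustrative assumptions and do not reflect the specific architectures compared in the paper.

```python
# Minimal sketch (illustrative, not the paper's architecture): a CNN encoder over a
# 2-channel optical flow field followed by a regression head predicting the relative
# 6-DoF pose (3 translation + 3 rotation parameters) between consecutive frames.
import torch
import torch.nn as nn


class FlowOdometryNet(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, pose_dim: int = 6):
        super().__init__()
        # Encoder: optical flow (u, v) channels -> compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Head: pooled features -> relative pose estimate.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, pose_dim),
        )

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        # flow: (batch, 2, H, W) optical flow between two consecutive frames.
        return self.head(self.encoder(flow))


if __name__ == "__main__":
    # Usage example with a dummy flow field at a KITTI-like resolution.
    net = FlowOdometryNet()
    flow = torch.randn(1, 2, 376, 1241)
    pose = net(flow)
    print(pose.shape)  # torch.Size([1, 6])
```

Feeding precomputed optical flow rather than raw RGB means the odometry block consumes an already motion-centric representation, which is what lets modules in such a framework exchange data at different abstraction levels.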