VINS-FEN: Monocular Visual-Inertial SLAM Based on Feature Extraction Network

Ke Wang, Cheng Zhang, Di Su, Kai Sun, Tian Zhan
{"title":"VINS-FEN: Monocular Visual-Inertial SLAM Based on Feature Extraction Network","authors":"Ke Wang, Cheng Zhang, Di Su, Kai Sun, Tian Zhan","doi":"10.1109/CMVIT57620.2023.00025","DOIUrl":null,"url":null,"abstract":"Monocular visual-inertial simultaneous localization and mapping (SLAM) technology is able to be widely used to provide pose for unmanned aerial vehicles. It usually uses artificially designed feature points and descriptors as the feature and basis for image matching. However, it is easy to cause the problem of difficult feature extraction and feature matching error under uneven illumination and weak texture environment. In order to solve the above problems, this paper adopts the deep convolutional neural network (CNN) instead of traditional artificial design features to replace the traditional front end of visual-inertial system (VINS). My main work includes designing deep convolutional neural Network–Feature Extraction Network (FEN), for feature extraction, proposing a two-stage matching strategy, and porting the above improvements to the front end of VINS to form a complete system. Finally, verification is conducted on HPatches dataset and EuRoc dataset. The experimental results show that FEN is 3%~23% higher than the traditional method in repeatability and accuracy of extracting feature points. 
The VINS with FEN as the front end has stronger robustness and improves localization accuracy by 17.3% under uneven illumination and weak texture conditions.","PeriodicalId":191655,"journal":{"name":"2023 7th International Conference on Machine Vision and Information Technology (CMVIT)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 7th International Conference on Machine Vision and Information Technology (CMVIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CMVIT57620.2023.00025","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Monocular visual-inertial simultaneous localization and mapping (SLAM) technology is widely used to provide pose estimates for unmanned aerial vehicles. It typically relies on hand-crafted feature points and descriptors as the basis for image matching. However, under uneven illumination and in weakly textured environments, feature extraction becomes difficult and feature matching is error-prone. To address these problems, this paper replaces the traditional front end of the visual-inertial system (VINS), built on hand-crafted features, with a deep convolutional neural network (CNN). The main contributions are: designing a deep convolutional neural network, the Feature Extraction Network (FEN), for feature extraction; proposing a two-stage matching strategy; and porting these improvements into the VINS front end to form a complete system. Finally, the approach is validated on the HPatches and EuRoC datasets. The experimental results show that FEN improves the repeatability and accuracy of feature-point extraction by 3%–23% over traditional methods. The VINS with FEN as its front end is more robust and improves localization accuracy by 17.3% under uneven illumination and weak-texture conditions.
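The abstract does not detail the two-stage matching strategy, so the following is only a minimal sketch of one plausible two-stage scheme operating on CNN descriptors: stage 1 performs mutual nearest-neighbour descriptor matching with a ratio test, and stage 2 applies a simple geometric consistency check (median-flow outlier rejection). The function `two_stage_match` and both stages are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def two_stage_match(desc1, desc2, kpts1, kpts2, ratio=0.8, flow_tol=20.0):
    """Hypothetical two-stage matcher (not the paper's exact algorithm).

    Stage 1: mutual nearest-neighbour descriptor matching with a ratio test.
    Stage 2: geometric verification -- discard matches whose image-plane
    displacement deviates strongly from the median flow.
    """
    # Pairwise L2 distances between the two descriptor sets (N1 x N2).
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)  # best candidate in image 2 for each desc1
    nn21 = d.argmin(axis=0)  # best candidate in image 1 for each desc2

    matches = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:
            continue  # fails mutual-consistency check
        row = np.sort(d[i])
        if len(row) > 1 and row[0] > ratio * row[1]:
            continue  # fails ratio test: best match not distinctive enough
        matches.append((i, j))

    if not matches:
        return []

    # Stage 2: reject matches whose flow is far from the median flow.
    flows = np.array([kpts2[j] - kpts1[i] for i, j in matches])
    med = np.median(flows, axis=0)
    keep = np.linalg.norm(flows - med, axis=1) < flow_tol
    return [m for m, k in zip(matches, keep) if k]
```

With identical descriptors and a uniform translation between frames, both stages pass; a single keypoint displaced far from the common motion is removed by the stage-2 check even though its descriptor match is perfect.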