An End-To-End Visual Odometry based on Self-Attention Mechanism

Rongchuan Cao, Yinan Wang, Kun Yan, Bo Chen, Tianqi Ding, Tianqi Zhang
{"title":"基于自注意机制的端到端视觉里程计","authors":"Rongchuan Cao, Yinan Wang, Kun Yan, Bo Chen, Tianqi Ding, Tianqi Zhang","doi":"10.1109/ICPICS55264.2022.9873538","DOIUrl":null,"url":null,"abstract":"To address the problem of capturing and expressing key features in existing methods, we design an end-to-end visual odometry algorithm using a self-attention mechanism. The algorithm consists of two parts: Visual Transformer network structure and Bidirectional Attention Long Short-Term Memory network. The former can extract visual features from video or image sequences, and the latter can mine the correlation between images captured on long trajectories. The algorithm can enhance the localization accuracy and robustness of visual odometry. The extensive experiments based on the KITTI benchmark demonstrate that the proposed algorithm works better than other outstanding algorithms.","PeriodicalId":257180,"journal":{"name":"2022 IEEE 4th International Conference on Power, Intelligent Computing and Systems (ICPICS)","volume":"46 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"An End-To-End Visual Odometry based on Self-Attention Mechanism\",\"authors\":\"Rongchuan Cao, Yinan Wang, Kun Yan, Bo Chen, Tianqi Ding, Tianqi Zhang\",\"doi\":\"10.1109/ICPICS55264.2022.9873538\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To address the problem of capturing and expressing key features in existing methods, we design an end-to-end visual odometry algorithm using a self-attention mechanism. The algorithm consists of two parts: Visual Transformer network structure and Bidirectional Attention Long Short-Term Memory network. The former can extract visual features from video or image sequences, and the latter can mine the correlation between images captured on long trajectories. The algorithm can enhance the localization accuracy and robustness of visual odometry. The extensive experiments based on the KITTI benchmark demonstrate that the proposed algorithm works better than other outstanding algorithms.\",\"PeriodicalId\":257180,\"journal\":{\"name\":\"2022 IEEE 4th International Conference on Power, Intelligent Computing and Systems (ICPICS)\",\"volume\":\"46 4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 4th International Conference on Power, Intelligent Computing and Systems (ICPICS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPICS55264.2022.9873538\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 4th International Conference on Power, Intelligent Computing and Systems (ICPICS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPICS55264.2022.9873538","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

To address the problem of capturing and expressing key features in existing methods, we design an end-to-end visual odometry algorithm using a self-attention mechanism. The algorithm consists of two parts: Visual Transformer network structure and Bidirectional Attention Long Short-Term Memory network. The former can extract visual features from video or image sequences, and the latter can mine the correlation between images captured on long trajectories. The algorithm can enhance the localization accuracy and robustness of visual odometry. The extensive experiments based on the KITTI benchmark demonstrate that the proposed algorithm works better than other outstanding algorithms.
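The abstract describes a two-stage pipeline: a Vision Transformer-style encoder extracts per-frame visual features, and a bidirectional attention LSTM models the correlation between frames along the trajectory before regressing pose. The paper does not include code, so the following is only a minimal PyTorch sketch of that general architecture. The module names, layer sizes, patch-embedding scheme, temporal-attention weighting, and the 6-DoF (translation plus Euler angle) pose head are all illustrative assumptions, not the authors' implementation.

```python
# Minimal illustrative sketch of a "Transformer encoder + bidirectional attention LSTM"
# visual odometry model. All names and hyperparameters are assumptions for demonstration.
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split an image into non-overlapping patches and linearly embed them."""
    def __init__(self, patch_size=16, in_ch=3, dim=256):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.proj(x)                       # (B, dim, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)    # (B, num_patches, dim)


class ViTEncoder(nn.Module):
    """Transformer encoder producing one feature vector per frame.
    Positional embeddings are omitted here for brevity."""
    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_embed = PatchEmbed(dim=dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                      # x: (B, 3, H, W)
        tokens = self.encoder(self.patch_embed(x))
        return tokens.mean(dim=1)              # (B, dim), mean-pooled frame feature


class AttentionBiLSTMVO(nn.Module):
    """Bidirectional LSTM over per-frame features with a simple temporal attention,
    followed by a 6-DoF pose head (3 translation + 3 rotation parameters)."""
    def __init__(self, dim=256, hidden=256):
        super().__init__()
        self.vit = ViTEncoder(dim=dim)
        self.lstm = nn.LSTM(dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scalar attention score per time step
        self.pose_head = nn.Linear(2 * hidden, 6)

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.vit(frames.flatten(0, 1)).view(b, t, -1)  # (B, T, dim)
        seq, _ = self.lstm(feats)                               # (B, T, 2*hidden)
        weights = torch.softmax(self.attn(seq), dim=1)          # (B, T, 1)
        return self.pose_head(seq * weights)                    # (B, T, 6) relative poses


if __name__ == "__main__":
    model = AttentionBiLSTMVO()
    clip = torch.randn(2, 5, 3, 128, 128)      # batch of 2 clips, 5 frames each
    print(model(clip).shape)                   # torch.Size([2, 5, 6])
```

Running the script prints torch.Size([2, 5, 6]), i.e. one 6-DoF relative pose estimate per time step; in practice such a model would be trained with a pose-regression loss on sequences such as the KITTI odometry benchmark mentioned in the abstract.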