HW/SW Co-design and FPGA Acceleration of a Feature-Based Visual Odometry

Chiang-Heng Chien, Chiang-Ju Chien, C. Hsu
{"title":"基于特征的视觉里程计的软硬件协同设计与FPGA加速","authors":"Chiang-Heng Chien, Chiang-Ju Chien, C. Hsu","doi":"10.1109/ICRAE48301.2019.9043811","DOIUrl":null,"url":null,"abstract":"In the field of visual odometry (VO) or SLAM, deriving camera poses from image features is the basic issue. Even though feature-based VO or SLAM are more efficient than non-feature-based methods, they are still unfortunately computationally demanding. This paper addresses the concerns of computational efficiency, computational resources and power-consumption problem of a VO algorithm by designing a hardware-software (HW/SW) co-design architecture for the implementation on a field-programmable gate array (FPGA) and a Nios II CPU. Given images from Nios II, features are extracted and matched by SIFT and linear exhausted search (LES) algorithms via hardware. The design of LES module is improved so that the speed is accelerated compared to our previous work. Subsequently, camera poses are estimated using an ICP algorithm, where the derivation of nearest orthogonal matrix is achieved by integrating Denman-Beavers (DB) approach and Taylor approximation method. As such, the required hardware resources are lesser. After hardware computations, the results are then transferred back to Nios II. To show the effectiveness of the proposed approach, experiments using KITTI dataset are conducted. The results show that, taking the advantages of efficient computation of hardware, the computational time is greatly reduced, compared to a full-software implementation. Moreover, usage of hardware resources are also lesser than existing methods.","PeriodicalId":270665,"journal":{"name":"2019 4th International Conference on Robotics and Automation Engineering (ICRAE)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"HW/SW Co-design and FPGA Acceleration of a Feature-Based Visual Odometry\",\"authors\":\"Chiang-Heng Chien, Chiang-Ju Chien, C. Hsu\",\"doi\":\"10.1109/ICRAE48301.2019.9043811\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the field of visual odometry (VO) or SLAM, deriving camera poses from image features is the basic issue. Even though feature-based VO or SLAM are more efficient than non-feature-based methods, they are still unfortunately computationally demanding. This paper addresses the concerns of computational efficiency, computational resources and power-consumption problem of a VO algorithm by designing a hardware-software (HW/SW) co-design architecture for the implementation on a field-programmable gate array (FPGA) and a Nios II CPU. Given images from Nios II, features are extracted and matched by SIFT and linear exhausted search (LES) algorithms via hardware. The design of LES module is improved so that the speed is accelerated compared to our previous work. Subsequently, camera poses are estimated using an ICP algorithm, where the derivation of nearest orthogonal matrix is achieved by integrating Denman-Beavers (DB) approach and Taylor approximation method. As such, the required hardware resources are lesser. After hardware computations, the results are then transferred back to Nios II. To show the effectiveness of the proposed approach, experiments using KITTI dataset are conducted. 
The results show that, taking the advantages of efficient computation of hardware, the computational time is greatly reduced, compared to a full-software implementation. Moreover, usage of hardware resources are also lesser than existing methods.\",\"PeriodicalId\":270665,\"journal\":{\"name\":\"2019 4th International Conference on Robotics and Automation Engineering (ICRAE)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 4th International Conference on Robotics and Automation Engineering (ICRAE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRAE48301.2019.9043811\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 4th International Conference on Robotics and Automation Engineering (ICRAE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRAE48301.2019.9043811","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

In the field of visual odometry (VO) and SLAM, deriving camera poses from image features is a fundamental problem. Although feature-based VO and SLAM are more efficient than non-feature-based methods, they remain computationally demanding. This paper addresses the computational-efficiency, resource-usage, and power-consumption concerns of a VO algorithm through a hardware/software (HW/SW) co-design architecture implemented on a field-programmable gate array (FPGA) together with a Nios II CPU. Given images supplied by the Nios II, features are extracted and matched in hardware by SIFT and a linear exhaustive search (LES) algorithm. The design of the LES module is improved so that it runs faster than in our previous work. Camera poses are then estimated with an ICP algorithm, in which the nearest orthogonal matrix is derived by combining the Denman-Beavers (DB) approach with a Taylor approximation, so that fewer hardware resources are required. After the hardware computations, the results are transferred back to the Nios II. To demonstrate the effectiveness of the proposed approach, experiments are conducted on the KITTI dataset. The results show that, by exploiting the efficiency of hardware computation, the computation time is greatly reduced compared with a full-software implementation. Moreover, the hardware-resource usage is lower than that of existing methods.
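
The pose-estimation step described in the abstract (matched features feed a closed-form alignment whose rotation is the nearest orthogonal matrix, obtained with a Denman-Beavers iteration) can be illustrated in software. The NumPy sketch below is an illustrative reconstruction under our own assumptions, not the paper's fixed-point FPGA design: the function names (match_features_les, estimate_pose, and so on) are ours, np.linalg.inv stands in for the Taylor approximation the authors use to avoid explicit inversion in hardware, and SIFT extraction, outlier rejection, and reflection handling are omitted.

import numpy as np

def match_features_les(desc_a, desc_b):
    # Linear exhaustive search: for each descriptor in frame A, scan every
    # descriptor in frame B and keep the index at minimum Euclidean distance.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def denman_beavers_inv_sqrt(A, iters=10):
    # Coupled Denman-Beavers iteration for a symmetric positive-definite A:
    # Y_k converges to A^{1/2} and Z_k to A^{-1/2}.
    Y, Z = A.copy(), np.eye(A.shape[0])
    for _ in range(iters):
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Z

def nearest_orthogonal(M, iters=10):
    # Polar factor of M, i.e. the orthogonal matrix closest to M in the
    # Frobenius norm: Q = M (M^T M)^{-1/2}.
    return M @ denman_beavers_inv_sqrt(M.T @ M, iters)

def estimate_pose(p, q):
    # Closed-form rigid alignment of matched 3-D points (one point per row),
    # q_i ≈ R p_i + t. The rotation is the polar factor of the transposed
    # cross-covariance; det(R) = +1 is not enforced here.
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)              # 3x3 cross-covariance, sum_i p_i q_i^T
    R = nearest_orthogonal(H.T)
    t = cq - R @ cp
    return R, t

On synthetic matches (3-D points transformed by a known rotation and translation), estimate_pose recovers the true pose up to numerical error with the default ten Denman-Beavers iterations; the hardware design described in the paper replaces the explicit matrix inversions with Taylor approximations to keep the logic small.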