Enabling continuous planetary rover navigation through FPGA stereo and visual odometry

T. Howard, A. Morfopoulos, J. Morrison, Y. Kuwata, C. Villalpando, L. Matthies, M. McHenry
{"title":"Enabling continuous planetary rover navigation through FPGA stereo and visual odometry","authors":"T. Howard, A. Morfopoulos, J. Morrison, Y. Kuwata, C. Villalpando, L. Matthies, M. McHenry","doi":"10.1109/AERO.2012.6187041","DOIUrl":null,"url":null,"abstract":"Safe navigation under resource constraints is a key concern for autonomous planetary rovers operating on extraterrestrial bodies. Computational power in such applications is typically constrained by the radiation hardness and energy consumption requirements. For example, even though the microprocessors used for the Mars Science Laboratory (MSL) mission rover are an order of magnitude more powerful than those used for the rovers on the Mars Exploration Rovers (MER) mission, the computational power is still significantly less than that of contemporary desktop microprocessors. It is therefore important to move safely and efficiently through the environment while consuming a minimum amount of computational resources, energy and time. Perception, pose estimation, and motion planning are generally three of the most computationally expensive processes in modern autonomy navigation architectures. An example of this is on the MER where each rover must stop, acquire and process imagery to evaluate its surroundings, estimate the relative change in pose, and generate the next mobility system maneuver [1]. This paper describes improvements in the energy efficiency and speed of planetary rover autonomous traverse accomplished by converting processes typically performed by the CPU onto a Field Programmable Gate Arrays (FPGA) coprocessor. Perception algorithms in general are well suited to FPGA implementations because much of processing is naturally parallelizable. In this paper we present novel implementations of stereo and visual odometry algorithms on a FPGA. The FPGA stereo implementation is an extension of [2] that uses \"random in linear out\" rectification and a higher-performance interface between the rectification, filter, and disparity stages of the stereo pipeline. The improved visual odometry component utilizes a FPGA implementation of a Harris feature detector and sum of absolute differences (SAD) operator. The FPGA implementation of the stereo and visual odometry functionality have demonstrated a performance improvement of approximately three orders of magnitude compared to the MER-class avionics. These more efficient perception and pose estimation modules have been merged with motion planning techniques that allow for continuous steering and driving to navigate cluttered obstacle fields without stopping to perceive. The resulting faster visual odometry rates also allow for wheel slip to be detected earlier and more reliably. Predictions of resulting improvements in planetary rover energy efficiency and average traverse speeds are reported. 
In addition, field results are presented that compare the performance of autonomous navigation on the Athena planetary rover prototype using continuous steering or driving and continuous steering and driving with GESTALT traversability analysis using the FPGA perception and pose estimation improvements.","PeriodicalId":6421,"journal":{"name":"2012 IEEE Aerospace Conference","volume":"28 1","pages":"1-9"},"PeriodicalIF":0.0000,"publicationDate":"2012-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE Aerospace Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AERO.2012.6187041","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 28

Abstract

Safe navigation under resource constraints is a key concern for autonomous planetary rovers operating on extraterrestrial bodies. Computational power in such applications is typically constrained by radiation hardness and energy consumption requirements. For example, even though the microprocessors used for the Mars Science Laboratory (MSL) mission rover are an order of magnitude more powerful than those used for the rovers of the Mars Exploration Rovers (MER) mission, their computational power is still significantly less than that of contemporary desktop microprocessors. It is therefore important to move safely and efficiently through the environment while consuming a minimum of computational resources, energy, and time. Perception, pose estimation, and motion planning are generally three of the most computationally expensive processes in modern autonomous navigation architectures. An example of this is the MER mission, where each rover must stop, acquire and process imagery to evaluate its surroundings, estimate the relative change in pose, and generate the next mobility system maneuver [1]. This paper describes improvements in the energy efficiency and speed of planetary rover autonomous traverse accomplished by moving processes typically performed by the CPU onto a Field Programmable Gate Array (FPGA) coprocessor. Perception algorithms in general are well suited to FPGA implementation because much of the processing is naturally parallelizable. In this paper we present novel implementations of stereo and visual odometry algorithms on an FPGA. The FPGA stereo implementation is an extension of [2] that uses "random in, linear out" rectification and a higher-performance interface between the rectification, filter, and disparity stages of the stereo pipeline. The improved visual odometry component utilizes an FPGA implementation of a Harris feature detector and a sum of absolute differences (SAD) operator. The FPGA implementation of the stereo and visual odometry functionality has demonstrated a performance improvement of approximately three orders of magnitude compared to the MER-class avionics. These more efficient perception and pose estimation modules have been merged with motion planning techniques that allow continuous steering and driving, so the rover can navigate cluttered obstacle fields without stopping to perceive. The resulting faster visual odometry rates also allow wheel slip to be detected earlier and more reliably. Predictions of the resulting improvements in planetary rover energy efficiency and average traverse speed are reported. In addition, field results are presented that compare the performance of autonomous navigation on the Athena planetary rover prototype using continuous steering or driving and continuous steering and driving, with GESTALT traversability analysis and the FPGA perception and pose estimation improvements.
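
For readers unfamiliar with the SAD cost used in the disparity stage, the sketch below shows a plain software version of winner-take-all SAD block matching on a rectified image pair. It only illustrates the cost function named in the abstract, not the paper's FPGA pipeline (which also performs "random in, linear out" rectification and filtering in hardware); the function names, window size, and disparity range here are illustrative assumptions.

```python
# Minimal CPU sketch of SAD-based stereo block matching (NumPy/SciPy).
# Illustrative only: the paper's rectification, filtering, and disparity
# search run in FPGA logic; names and parameters below are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, window=7):
    """Integer winner-take-all disparity map for a rectified grayscale pair."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    best_cost = np.full(left.shape, np.inf)
    disparity = np.zeros(left.shape, dtype=np.int32)
    for d in range(max_disp):
        # Compare left[x] against right[x - d] by shifting the right image.
        if d == 0:
            shifted = right
        else:
            shifted = np.empty_like(right)
            shifted[:, :d] = right[:, :1]        # replicate the left border
            shifted[:, d:] = right[:, :-d]
        # Windowed mean absolute difference (same ranking as the SAD sum).
        cost = uniform_filter(np.abs(left - shifted), size=window)
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity
```

Given two rectified grayscale images of the same size, `sad_disparity(left, right)` returns a per-pixel disparity from which range follows via the usual baseline-times-focal-length-over-disparity relation.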
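
The visual odometry front end described above likewise rests on two operators, a Harris corner detector and SAD patch matching. The following sketch gives a conventional software formulation of both; the paper's versions run in FPGA fabric, so the gradient scheme, window size, and the constant k = 0.04 are illustrative assumptions rather than the flight implementation.

```python
# Minimal sketch of a Harris corner response and SAD patch matching, the
# two operators behind a feature-based visual odometry front end.
# Illustrative software version; not the paper's FPGA implementation.
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, window=5):
    """Harris score R = det(M) - k * trace(M)^2 from the structure tensor M."""
    img = img.astype(np.float64)
    ix = np.zeros_like(img)
    iy = np.zeros_like(img)
    ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    ixx = uniform_filter(ix * ix, size=window)       # windowed tensor entries
    iyy = uniform_filter(iy * iy, size=window)
    ixy = uniform_filter(ix * iy, size=window)
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace * trace

def sad_match(patch, candidates):
    """Index of the candidate patch closest to `patch` under the SAD score."""
    costs = [np.abs(patch.astype(np.float64) - c.astype(np.float64)).sum()
             for c in candidates]
    return int(np.argmin(costs))
```

In a typical pipeline, local maxima of the Harris response above a threshold are kept as features, and SAD matching associates their image patches across stereo pairs and consecutive frames before the relative pose change is estimated.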