Ultra-Flow: An Ultra-fast and High-quality Optical Flow Accelerator with Deep Feature Matching on FPGA

Yehua Ling, Yuanxing Yan, Kai Huang, Gang Chen
DOI: 10.1109/FPL57034.2022.00017
Venue: 2022 32nd International Conference on Field-Programmable Logic and Applications (FPL)
Published: 2022-08-01
Citations: 0

Abstract

Dense and accurate optical flow estimation is an important requirement for dynamic scene perception in autonomous systems. However, most of the existing FPGA accelerators are based on classic methods, which cannot deal with large displacements of moving objects in ultra-fast scenes. In this paper, we present Ultra-Flow, an ultra-fast pipelined architecture for efficient optical flow estimation and refinement. Ultra-Flow utilizes binary neural networks to generate the robust feature map, on which hierarchical matching is directly performed. Therefore, multiple usages of neural networks at hierarchical levels can be avoided to achieve hardware efficiency in Ultra-Flow. Optimizations, including local flow regularization and enhanced matching, are further used to improve the throughput and refine the optical flow to obtain higher accuracy. Evaluation results show that, compared to state-of-the-art FPGA accelerators, Ultra-Flow achieves leading accuracy in the Middlebury sequences at ultra-fast processing speed up to 687.92 frames/s for 640 × 480 pixel images.
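The abstract describes matching performed directly on binary feature maps in a coarse-to-fine hierarchy, which replaces floating-point cost volumes with cheap Hamming-distance comparisons. The sketch below is an illustrative software model of that idea, not the paper's actual architecture: the function names (`match_level`, `hierarchical_match`), the pyramid construction, and the search radius are all assumptions made for clarity. It matches each pixel's binary descriptor against a small window in the second frame, then refines an upscaled coarse flow at the finer level.

```python
import numpy as np

def hamming_cost(a, b):
    """Hamming distance between two binary descriptor vectors."""
    return int(np.count_nonzero(a != b))

def match_level(feat1, feat2, radius, prior=None):
    """Brute-force block matching on binary feature maps.

    feat1, feat2: (H, W, D) arrays of {0, 1} descriptors.
    radius: search radius in pixels around the prior.
    prior: optional (H, W, 2) integer flow from a coarser level.
    Returns an (H, W, 2) integer flow (dy, dx) minimizing Hamming cost.
    """
    H, W, _ = feat1.shape
    flow = np.zeros((H, W, 2), dtype=np.int64)
    for y in range(H):
        for x in range(W):
            py, px = (prior[y, x] if prior is not None else (0, 0))
            best, best_d = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ty, tx = y + py + dy, x + px + dx
                    if 0 <= ty < H and 0 <= tx < W:
                        c = hamming_cost(feat1[y, x], feat2[ty, tx])
                        if best is None or c < best:
                            best, best_d = c, (py + dy, px + dx)
            flow[y, x] = best_d
    return flow

def hierarchical_match(feat1, feat2, levels=2, radius=1):
    """Coarse-to-fine matching: subsample the binary feature maps,
    match at the coarsest level, then upscale the flow as a prior
    for the next finer level (handles larger displacements cheaply)."""
    pyr1 = [feat1[::2 ** l, ::2 ** l] for l in range(levels)][::-1]
    pyr2 = [feat2[::2 ** l, ::2 ** l] for l in range(levels)][::-1]
    prior = None
    for f1, f2 in zip(pyr1, pyr2):
        flow = match_level(f1, f2, radius, prior)
        if f1.shape[0] < feat1.shape[0]:
            # Double the flow vectors and nearest-neighbor upsample the field.
            prior = np.kron(flow * 2, np.ones((2, 2, 1), dtype=np.int64))
            prior = prior[: f1.shape[0] * 2, : f1.shape[1] * 2]
    return flow
```

The Hamming cost here is why binary descriptors suit FPGAs: each comparison reduces to an XOR followed by a popcount, which maps to a handful of LUTs, whereas a floating-point matching cost would consume DSP blocks. The paper's local flow regularization and enhanced matching refinements are not modeled in this sketch.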