{"title":"Ultra-Flow: An Ultra-fast and High-quality Optical Flow Accelerator with Deep Feature Matching on FPGA","authors":"Yehua Ling, Yuanxing Yan, Kai Huang, Gang Chen","doi":"10.1109/FPL57034.2022.00017","DOIUrl":null,"url":null,"abstract":"Dense and accurate optical flow estimation is an important requirement for dynamic scene perception in autonomous systems. However, most of the existing FPGA accelerators are based on classic methods, which cannot deal with large displacements of moving objects in ultra-fast scenes. In this paper, we present Ultra-Flow, an ultra-fast pipelined architecture for efficient optical flow estimation and refinement. Ultra-Flow utilizes binary neural networks to generate the robust feature map, on which hierarchical matching is directly performed. Therefore, multiple usages of neural networks at hierarchical levels can be avoided to achieve hardware efficiency in Ultra-Flow. Optimizations, including local flow regularization and enhanced matching, are further used to improve the throughput and refine the optical flow to obtain higher accuracy. Evaluation results show that, compared to state-of-the-art FPGA accelerators, Ultra-Flow achieves leading accuracy in the Middlebury sequences at ultra-fast processing speed up to 687.92 frames/s for 640 × 480 pixel images.","PeriodicalId":380116,"journal":{"name":"2022 32nd International Conference on Field-Programmable Logic and Applications (FPL)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 32nd International Conference on Field-Programmable Logic and Applications (FPL)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FPL57034.2022.00017","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Dense and accurate optical flow estimation is an important requirement for dynamic scene perception in autonomous systems. However, most existing FPGA accelerators are based on classic methods, which cannot handle the large displacements of moving objects in ultra-fast scenes. In this paper, we present Ultra-Flow, an ultra-fast pipelined architecture for efficient optical flow estimation and refinement. Ultra-Flow uses binary neural networks to generate a robust feature map, on which hierarchical matching is performed directly. Repeated invocations of neural networks at each hierarchical level are thereby avoided, improving hardware efficiency. Further optimizations, including local flow regularization and enhanced matching, increase throughput and refine the optical flow for higher accuracy. Evaluation results show that, compared to state-of-the-art FPGA accelerators, Ultra-Flow achieves leading accuracy on the Middlebury sequences at processing speeds of up to 687.92 frames/s for 640 × 480 pixel images.
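To illustrate the idea of hierarchical matching directly on a binary feature map, the sketch below shows a minimal, software-level version of the scheme. It is not the authors' implementation: the use of packed binary descriptors compared with Hamming distance and a coarse-to-fine window search are illustrative assumptions, and all function names are hypothetical.

```python
# Minimal sketch of coarse-to-fine matching on binary feature maps.
# Assumptions (not taken from the paper): descriptors are bit-packed uint8
# vectors produced per pixel by a binary neural network, and matching cost
# is the Hamming distance within a small search window.
import numpy as np


def hamming_cost(desc_a, desc_b):
    """Hamming distance between two packed binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(desc_a, desc_b)).sum())


def match_hierarchical(feat_ref, feat_tgt, levels=3, radius=2):
    """Coarse-to-fine matching of two (H, W, B) packed binary feature maps.

    Returns an (H, W, 2) integer flow field (dy, dx) from the reference
    frame to the target frame.
    """
    H, W, _ = feat_ref.shape
    flow = np.zeros((H, W, 2), dtype=np.int32)

    for level in reversed(range(levels)):      # coarsest level first
        step = 2 ** level                      # grid stride at this level
        for y in range(0, H, step):
            for x in range(0, W, step):
                best_cost = np.inf
                best = flow[y, x].copy()
                # Search a window around the flow inherited from the coarser level.
                for dy in range(-radius * step, radius * step + 1, step):
                    for dx in range(-radius * step, radius * step + 1, step):
                        ty = y + flow[y, x, 0] + dy
                        tx = x + flow[y, x, 1] + dx
                        if 0 <= ty < H and 0 <= tx < W:
                            c = hamming_cost(feat_ref[y, x], feat_tgt[ty, tx])
                            if c < best_cost:
                                best_cost = c
                                best = flow[y, x] + np.array([dy, dx])
                flow[y, x] = best
        # Propagate coarse estimates to the finer grid before the next level.
        if level > 0:
            for y in range(H):
                for x in range(W):
                    flow[y, x] = flow[(y // step) * step, (x // step) * step]
    return flow
```

Because the binary features are computed once and reused at every level, only the cheap Hamming-distance comparisons are repeated across the hierarchy, which mirrors the hardware-efficiency argument in the abstract.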