Xijie Jia, Yu Zhang, Guangdong Liu, Xinlin Yang, Tianyu Zhang, Jia Zheng, Dongdong Xu, Zhuohuan Liu, Mengke Liu, Xiaoyang Yan, Hong Wang, Rongzhang Zheng, Li Wang, Dong Li, Satyaprakash Pareek, Jian Weng, Lu Tian, Dongliang Xie, Hong Luo, Yi Shan
{"title":"xvppu:基于AI引擎的通用平台上的高性能CNN加速器","authors":"Xijie Jia, Yu Zhang, Guangdong Liu, Xinlin Yang, Tianyu Zhang, Jia Zheng, Dongdong Xu, Zhuohuan Liu, Mengke Liu, Xiaoyang Yan, Hong Wang, Rongzhang Zheng, Li Wang, Dong Li, Satyaprakash Pareek, Jian Weng, Lu Tian, Dongliang Xie, Hong Luo, Yi Shan","doi":"10.1145/3617836","DOIUrl":null,"url":null,"abstract":"Nowadays, convolution neural networks (CNNs) are widely used in computer vision applications. However, the trends of higher accuracy and higher resolution generate larger networks. The requirements of computation or I/O are the key bottlenecks. In this paper, we propose XVDPU: the AI-Engine (AIE)-based CNN accelerator on Versal chips to meet heavy computation requirements. To resolve IO bottleneck, we adopt several techniques to improve data-reuse and reduce I/O requirements. An Arithmetic Logic Unit (ALU) is further proposed which can better balance resource utilization, new feature support, and efficiency of the whole system. We have successfully deployed more than 100 CNN models with our accelerator. Our experimental results show that the 96-AIE-core implementation can achieve 1653 frames per second (FPS) for ResNet50 on VCK190, which is 9.8 × faster than the design on ZCU102 running at 168.5 FPS. The 256-AIE-core implementation can further achieve 4050 FPS. We propose a tilling strategy to achieve feature-map-stationary (FMS) for high-definition CNN (HD-CNN) with the accelerator, achieving 3.8 × FPS improvement on Residual Channel Attention Network (RCAN) and 3.1 × on Super-Efficient Super-Resolution (SESR). 
This accelerator can also solve the 3D convolution task in disparity estimation, achieving end-to-end (E2E) performance of 10.1FPS with all the optimizations.","PeriodicalId":49248,"journal":{"name":"ACM Transactions on Reconfigurable Technology and Systems","volume":"170 1","pages":"0"},"PeriodicalIF":3.1000,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"XVDPU: A High Performance CNN Accelerator on Versal Platform Powered by AI Engine\",\"authors\":\"Xijie Jia, Yu Zhang, Guangdong Liu, Xinlin Yang, Tianyu Zhang, Jia Zheng, Dongdong Xu, Zhuohuan Liu, Mengke Liu, Xiaoyang Yan, Hong Wang, Rongzhang Zheng, Li Wang, Dong Li, Satyaprakash Pareek, Jian Weng, Lu Tian, Dongliang Xie, Hong Luo, Yi Shan\",\"doi\":\"10.1145/3617836\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nowadays, convolution neural networks (CNNs) are widely used in computer vision applications. However, the trends of higher accuracy and higher resolution generate larger networks. The requirements of computation or I/O are the key bottlenecks. In this paper, we propose XVDPU: the AI-Engine (AIE)-based CNN accelerator on Versal chips to meet heavy computation requirements. To resolve IO bottleneck, we adopt several techniques to improve data-reuse and reduce I/O requirements. An Arithmetic Logic Unit (ALU) is further proposed which can better balance resource utilization, new feature support, and efficiency of the whole system. We have successfully deployed more than 100 CNN models with our accelerator. Our experimental results show that the 96-AIE-core implementation can achieve 1653 frames per second (FPS) for ResNet50 on VCK190, which is 9.8 × faster than the design on ZCU102 running at 168.5 FPS. The 256-AIE-core implementation can further achieve 4050 FPS. 
We propose a tilling strategy to achieve feature-map-stationary (FMS) for high-definition CNN (HD-CNN) with the accelerator, achieving 3.8 × FPS improvement on Residual Channel Attention Network (RCAN) and 3.1 × on Super-Efficient Super-Resolution (SESR). This accelerator can also solve the 3D convolution task in disparity estimation, achieving end-to-end (E2E) performance of 10.1FPS with all the optimizations.\",\"PeriodicalId\":49248,\"journal\":{\"name\":\"ACM Transactions on Reconfigurable Technology and Systems\",\"volume\":\"170 1\",\"pages\":\"0\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2023-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Reconfigurable Technology and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3617836\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Reconfigurable Technology and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3617836","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
XVDPU: A High Performance CNN Accelerator on Versal Platform Powered by AI Engine
Convolutional neural networks (CNNs) are now widely used in computer vision applications. However, the trend toward higher accuracy and higher resolution produces larger networks, making computation and I/O the key bottlenecks. In this paper, we propose XVDPU, an AI-Engine (AIE)-based CNN accelerator on Versal chips that meets heavy computation requirements. To resolve the I/O bottleneck, we adopt several techniques to improve data reuse and reduce I/O requirements. We further propose an Arithmetic Logic Unit (ALU) that better balances resource utilization, new-feature support, and whole-system efficiency. We have successfully deployed more than 100 CNN models with our accelerator. Our experimental results show that the 96-AIE-core implementation achieves 1653 frames per second (FPS) for ResNet50 on VCK190, which is 9.8× faster than the design on ZCU102 running at 168.5 FPS. The 256-AIE-core implementation further achieves 4050 FPS. We also propose a tiling strategy that achieves feature-map-stationary (FMS) dataflow for high-definition CNNs (HD-CNNs) on the accelerator, yielding a 3.8× FPS improvement on the Residual Channel Attention Network (RCAN) and 3.1× on Super-Efficient Super-Resolution (SESR). The accelerator can also handle the 3D convolution task in disparity estimation, achieving end-to-end (E2E) performance of 10.1 FPS with all optimizations.
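The feature-map-stationary (FMS) dataflow mentioned in the abstract can be illustrated with a minimal sketch: each tile of a large high-definition feature map is loaded once and kept resident while all output channels are computed from it, so the feature map is read from memory only one time. This is a hypothetical illustration for a 1×1 convolution; the function name, tile sizes, and NumPy formulation are assumptions for clarity, not the paper's actual implementation.

```python
import numpy as np

def conv1x1_fms(feature, weights, tile_h=4, tile_w=4):
    """1x1 convolution computed tile by tile. Each feature-map tile stays
    resident ("stationary") while every output channel is produced from it,
    so the large feature map is streamed from memory only once."""
    H, W, C_in = feature.shape
    C_out = weights.shape[1]              # weights: (C_in, C_out)
    out = np.zeros((H, W, C_out))
    for i in range(0, H, tile_h):
        for j in range(0, W, tile_w):
            tile = feature[i:i+tile_h, j:j+tile_w, :]   # load tile once
            # reuse the resident tile across all output channels
            out[i:i+tile_h, j:j+tile_w, :] = tile @ weights
    return out

feature = np.random.rand(8, 8, 16)
weights = np.random.rand(16, 32)
ref = feature @ weights                   # untiled reference result
tiled = conv1x1_fms(feature, weights)
assert np.allclose(ref, tiled)
```

The tiled result matches the untiled reference exactly; the benefit of the loop ordering is purely in memory traffic, which matters when the feature map is too large to fit on-chip, as in the HD-CNN workloads (RCAN, SESR) the abstract targets.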
Journal description:
TRETS is the top journal focusing on research in, on, and with reconfigurable systems and their underlying technology. Other journals' scope and coverage are often limited to particular aspects of reconfigurable technology or reconfigurable systems; TRETS covers reconfigurability in its own right.
Topics appropriate for TRETS include all levels of reconfigurable system abstraction and all aspects of reconfigurable technology, including platforms, programming environments, and application successes that support these systems for computing or other applications:
-The board and systems architectures of a reconfigurable platform.
-Programming environments of reconfigurable systems, especially those designed for use with reconfigurable systems that will lead to increased programmer productivity.
-Languages and compilers for reconfigurable systems.
-Logic synthesis and related tools, as they relate to reconfigurable systems.
-Applications on which success can be demonstrated.
-The underlying technology from which reconfigurable systems are developed. (Currently this technology is that of FPGAs, but research on the nature and use of follow-on technologies is appropriate for TRETS.)
In considering whether a paper is suitable for TRETS, the foremost question should be whether reconfigurability has been essential to success. Topics such as architecture, programming languages, compilers, and environments, logic synthesis, and high performance applications are all suitable if the context is appropriate. For example, an architecture for an embedded application that happens to use FPGAs is not necessarily suitable for TRETS, but an architecture using FPGAs for which the reconfigurability of the FPGAs is an inherent part of the specifications (perhaps due to a need for re-use on multiple applications) would be appropriate for TRETS.