{"title":"FPGA上面向流内核的OpenCL性能优化评估(摘要)","authors":"Zheming Jin, H. Finkel","doi":"10.1145/3174243.3174967","DOIUrl":null,"url":null,"abstract":"The streaming applications efficiently and High-level synthesis (HLS) tools allow people without complex hardware design knowledge to evaluate an application on FPGAs, there is an opportunity and a need to understand where OpenCL and FPGA can play in the streaming domains. To this end, we evaluate the overhead of the OpenCL infrastructure on the Nallatech 385A FPGA board that features an Arria 10 GX1150 FPGA. Then we explore the implementation space and discuss the performance optimization techniques for the streaming kernels using the OpenCL-to-FPGA HLS tool. On the target platform, the infrastructure overhead requires 12% of the FPGA memory and logic resources. The latency of the single work-item kernel execution is 11 us and the maximum frequency of a kernel implementation is around 300 MHz. The experimental results of the streaming kernels show FPGA resources, such as block RAMs and DSPs, can limit the kernel performance before the constraint of memory bandwidth takes effect. Kernel vectorization and compute unit duplication are practical optimization techniques that can improve the kernel performance by a factor of 2 to 10. The combination of the two techniques can achieve the best performance. To improve the performance of compute unit duplication, the local work size needs to be tuned and the optimal value can increase the performance by a factor of 3 to 70 compared to the default value.","PeriodicalId":164936,"journal":{"name":"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Evaluation of OpenCL Performance-oriented Optimizations for Streaming Kernels on the FPGA: (Abstract Only)\",\"authors\":\"Zheming Jin, H. Finkel\",\"doi\":\"10.1145/3174243.3174967\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The streaming applications efficiently and High-level synthesis (HLS) tools allow people without complex hardware design knowledge to evaluate an application on FPGAs, there is an opportunity and a need to understand where OpenCL and FPGA can play in the streaming domains. To this end, we evaluate the overhead of the OpenCL infrastructure on the Nallatech 385A FPGA board that features an Arria 10 GX1150 FPGA. Then we explore the implementation space and discuss the performance optimization techniques for the streaming kernels using the OpenCL-to-FPGA HLS tool. On the target platform, the infrastructure overhead requires 12% of the FPGA memory and logic resources. The latency of the single work-item kernel execution is 11 us and the maximum frequency of a kernel implementation is around 300 MHz. The experimental results of the streaming kernels show FPGA resources, such as block RAMs and DSPs, can limit the kernel performance before the constraint of memory bandwidth takes effect. Kernel vectorization and compute unit duplication are practical optimization techniques that can improve the kernel performance by a factor of 2 to 10. The combination of the two techniques can achieve the best performance. 
To improve the performance of compute unit duplication, the local work size needs to be tuned and the optimal value can increase the performance by a factor of 3 to 70 compared to the default value.\",\"PeriodicalId\":164936,\"journal\":{\"name\":\"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-02-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3174243.3174967\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3174243.3174967","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
As FPGAs can process streaming applications efficiently and high-level synthesis (HLS) tools allow people without complex hardware design knowledge to evaluate an application on FPGAs, there is an opportunity and a need to understand what role OpenCL and FPGAs can play in the streaming domain. To this end, we evaluate the overhead of the OpenCL infrastructure on the Nallatech 385A FPGA board, which features an Arria 10 GX1150 FPGA. We then explore the implementation space and discuss performance optimization techniques for the streaming kernels using the OpenCL-to-FPGA HLS tool. On the target platform, the infrastructure overhead consumes 12% of the FPGA memory and logic resources. The latency of a single work-item kernel execution is 11 µs, and the maximum frequency of a kernel implementation is around 300 MHz. The experimental results for the streaming kernels show that FPGA resources, such as block RAMs and DSPs, can limit kernel performance before the constraint of memory bandwidth takes effect. Kernel vectorization and compute-unit duplication are practical optimization techniques that can improve kernel performance by a factor of 2 to 10, and the combination of the two techniques achieves the best performance. To improve the performance of compute-unit duplication, the local work size needs to be tuned; the optimal value can increase performance by a factor of 3 to 70 compared to the default value.
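
The paper is available only as an abstract, so the following is a minimal sketch of what the two kernel-side optimizations it names typically look like in an OpenCL-to-FPGA flow. It assumes the attribute syntax of the Intel (Altera) FPGA SDK for OpenCL, the HLS tool family that targets Arria 10 boards such as the Nallatech 385A; the kernel name stream_add, its arguments, and the specific factors (SIMD width 8, 2 compute units, work-group size 64) are illustrative assumptions, not values taken from the paper.

    // Sketch only: kernel vectorization and compute-unit duplication expressed as
    // Intel FPGA SDK for OpenCL kernel attributes. Names and factors are assumptions.

    // Kernel vectorization: the work-item datapath is widened so that 8 work-items
    // are processed in SIMD fashion per clock.
    __attribute__((num_simd_work_items(8)))
    // Compute-unit duplication: the compiler instantiates 2 independent copies of
    // the kernel pipeline that share the global-memory interface.
    __attribute__((num_compute_units(2)))
    // num_simd_work_items requires a fixed work-group size that is a multiple of
    // the SIMD width.
    __attribute__((reqd_work_group_size(64, 1, 1)))
    __kernel void stream_add(__global const float *restrict a,
                             __global const float *restrict b,
                             __global float *restrict c)
    {
        size_t i = get_global_id(0);
        c[i] = a[i] + b[i];
    }

On the host side, the local work size mentioned in the last sentence of the abstract corresponds to the local_work_size argument of clEnqueueNDRangeKernel; tuning it amounts to sweeping candidate work-group sizes (compatible with the required work-group size declared above) and keeping the fastest one.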