Optimal hardware implementation for end-to-end CNN-based classification

S. Aydin, H. Ş. Bilge
DOI: 10.1109/ICITIIT57246.2023.10068601
Published in: 2023 4th International Conference on Innovative Trends in Information Technology (ICITIIT), 2023-02-11
Citations: 0

Abstract

Convolutional neural networks (CNN) show promising results in many fields, especially in computer vision tasks. However, implementing these networks requires computationally intensive operations. Increasing computational workloads make it difficult to use CNN models in real-time applications. To overcome these challenges, CNN must be implemented on a dedicated hardware platform such as a field-programmable gate array (FPGA). The parallel processing and reconfigurable features of FPGA hardware make it suitable for real-time applications. Nevertheless, due to limited resources and memory units, various optimizations must be applied prior to implementing processing-intensive structures. Both the resources and the memory units used in hardware applications are affected by the data types and bit widths used to represent data. This study proposes arbitrary-precision fixed-point data types for optimal end-to-end CNN hardware implementation. The network was trained on the Central Processing Unit (CPU) to address the classification problem. The CNN architecture was implemented on a Zynq-7 ZC702 evaluation board with a target device xc7z020clg484-1 platform utilizing high-level synthesis (HLS) for the inference stage, based on the calculated weight parameters and predetermined hyperparameters. The proposed hardware implementation produced the results in 0.00329 s. In terms of latency metrics, the hardware-based CNN application produced a response approximately 18.9 times faster than the CPU-based CNN application in the inference phase while retaining the same accuracy. In terms of memory utilization and calculation units, the proposed design uses 52% fewer memory units and 68% fewer calculation units than the baseline design. While the proposed method used fewer resources, the classification success remained at 98.9%.
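To make the core idea concrete, the following is a minimal sketch of fixed-point quantization of CNN weights, the kind of data-type reduction the abstract describes. The word and fraction widths chosen here (8 total bits, 6 fractional bits, i.e. a signed Q2.6 format) are illustrative assumptions, not the paper's actual configuration; HLS tools typically express such types directly (e.g. Xilinx's `ap_fixed`), whereas this Python sketch only simulates the rounding and saturation behavior.

```python
def to_fixed_point(x, word_bits=8, frac_bits=6):
    """Quantize a real value to a signed fixed-point grid.

    Rounds x to the nearest multiple of 2**-frac_bits and saturates
    to the representable range of a signed word_bits-bit integer.
    The widths are illustrative, not the paper's actual choices.
    """
    scale = 1 << frac_bits                     # 2**frac_bits steps per unit
    q = round(x * scale)                       # nearest representable step
    lo = -(1 << (word_bits - 1))               # e.g. -128 for 8 bits
    hi = (1 << (word_bits - 1)) - 1            # e.g. +127 for 8 bits
    q = max(lo, min(hi, q))                    # saturate on overflow
    return q / scale                           # back to real-valued form

# Example: trained float weights mapped onto the fixed-point grid.
weights = [0.7312, -0.0521, 1.9999, -2.5]
quantized = [to_fixed_point(w) for w in weights]
print(quantized)  # values snap to multiples of 1/64; -2.5 saturates to -2.0
```

Storing each weight in 8 bits instead of a 32-bit float cuts weight memory by 75%, which is the mechanism behind the kind of memory and compute-unit savings the study reports, at the cost of quantization error that must be small enough to preserve classification accuracy.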