Fine-Grained Structured Sparse Computing for FPGA-Based AI Inference

IF 2.9 · CAS Tier 3 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
Chen Zhang;Shijie Cao;Guohao Dai;Chenbo Geng;Zhuliang Yao;Wencong Xiao;Yunxin Liu;Ming Wu;Lintao Zhang;Guangyu Sun;Zhigang Ji;Runsheng Wang;Ru Huang
{"title":"基于fpga的人工智能推理的细粒度结构化稀疏计算","authors":"Chen Zhang;Shijie Cao;Guohao Dai;Chenbo Geng;Zhuliang Yao;Wencong Xiao;Yunxin Liu;Ming Wu;Lintao Zhang;Guangyu Sun;Zhigang Ji;Runsheng Wang;Ru Huang","doi":"10.1109/TCAD.2024.3524356","DOIUrl":null,"url":null,"abstract":"With the explosive growth in the number of parameters in deep neural networks (DNNs), sparsity-centric algorithm and hardware designs have become critical for low-latency AI serving systems. However, the inherent randomness in pruning methods often leads to fragmented data access and irregular computation patterns in sparse matrices, resulting in significantly reduced hardware efficiency. Addressing the balance between the ‘randomness’ required to maintain model accuracy and the ‘regularity’ needed for efficient hardware design is crucial for realizing effective sparse computing in AI. This article proposes a fine-grained structured sparsity (FSS) paradigm. The pruned sparse matrices in this paradigm exhibit characteristics of ‘local randomness’ and ‘global regularity’. This dual-feature design allows AI accelerator hardware based on the FSS paradigm to maintain both high model accuracy and efficient hardware design. We implemented this novel accelerator on the Xilinx Alveo U280 and validated our concept across three different AI models, including CNN, RNN, and LLM, demonstrating performance that significantly outperforms prior methods.","PeriodicalId":13251,"journal":{"name":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems","volume":"44 7","pages":"2544-2557"},"PeriodicalIF":2.9000,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fine-Grained Structured Sparse Computing for FPGA-Based AI Inference\",\"authors\":\"Chen Zhang;Shijie Cao;Guohao Dai;Chenbo Geng;Zhuliang Yao;Wencong Xiao;Yunxin Liu;Ming Wu;Lintao Zhang;Guangyu Sun;Zhigang Ji;Runsheng Wang;Ru Huang\",\"doi\":\"10.1109/TCAD.2024.3524356\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the explosive growth in the number of parameters in deep neural networks (DNNs), sparsity-centric algorithm and hardware designs have become critical for low-latency AI serving systems. However, the inherent randomness in pruning methods often leads to fragmented data access and irregular computation patterns in sparse matrices, resulting in significantly reduced hardware efficiency. Addressing the balance between the ‘randomness’ required to maintain model accuracy and the ‘regularity’ needed for efficient hardware design is crucial for realizing effective sparse computing in AI. This article proposes a fine-grained structured sparsity (FSS) paradigm. The pruned sparse matrices in this paradigm exhibit characteristics of ‘local randomness’ and ‘global regularity’. This dual-feature design allows AI accelerator hardware based on the FSS paradigm to maintain both high model accuracy and efficient hardware design. 
We implemented this novel accelerator on the Xilinx Alveo U280 and validated our concept across three different AI models, including CNN, RNN, and LLM, demonstrating performance that significantly outperforms prior methods.\",\"PeriodicalId\":13251,\"journal\":{\"name\":\"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems\",\"volume\":\"44 7\",\"pages\":\"2544-2557\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-12-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10818746/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10818746/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

With the explosive growth in the number of parameters in deep neural networks (DNNs), sparsity-centric algorithm and hardware designs have become critical for low-latency AI serving systems. However, the inherent randomness in pruning methods often leads to fragmented data access and irregular computation patterns in sparse matrices, resulting in significantly reduced hardware efficiency. Addressing the balance between the ‘randomness’ required to maintain model accuracy and the ‘regularity’ needed for efficient hardware design is crucial for realizing effective sparse computing in AI. This article proposes a fine-grained structured sparsity (FSS) paradigm. The pruned sparse matrices in this paradigm exhibit characteristics of ‘local randomness’ and ‘global regularity’. This dual-feature design allows AI accelerator hardware based on the FSS paradigm to maintain both high model accuracy and efficient hardware design. We implemented this novel accelerator on the Xilinx Alveo U280 and validated our concept across three different AI models, including CNN, RNN, and LLM, demonstrating performance that significantly outperforms prior methods.
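The abstract characterizes FSS only at a high level: nonzero positions are locally random but follow a globally regular structure, so the accelerator sees uniform work. The exact pruning pattern is not given here, so the sketch below is a minimal, hypothetical illustration assuming a bank-balanced scheme: each weight row is split into fixed-width banks and only the largest-magnitude entries in each bank are kept, so positions vary freely inside a bank while every bank carries the same nonzero count. The function name `prune_fss`, the bank size, and the keep count are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a fine-grained structured (bank-balanced) pruning
# pattern; bank size, keep count, and function name are illustrative
# assumptions, not the paper's actual FSS definition.
import numpy as np


def prune_fss(weights: np.ndarray, bank_size: int = 8, keep_per_bank: int = 2) -> np.ndarray:
    """Zero all but the `keep_per_bank` largest-magnitude weights in every
    `bank_size`-wide segment of each row.

    Local randomness: which positions survive inside a bank is data-dependent.
    Global regularity: every bank ends up with exactly the same nonzero count.
    """
    rows, cols = weights.shape
    assert cols % bank_size == 0, "columns must divide evenly into banks"

    banks = weights.reshape(rows, cols // bank_size, bank_size)
    # Indices of the keep_per_bank largest magnitudes within each bank.
    keep_idx = np.argsort(-np.abs(banks), axis=-1)[..., :keep_per_bank]

    pruned_banks = np.zeros_like(banks)
    np.put_along_axis(
        pruned_banks, keep_idx,
        np.take_along_axis(banks, keep_idx, axis=-1),
        axis=-1,
    )
    return pruned_banks.reshape(rows, cols)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 16)).astype(np.float32)
    sparse_w = prune_fss(w, bank_size=8, keep_per_bank=2)
    # Each 8-wide bank now holds exactly 2 nonzeros (75% sparsity overall).
    print((sparse_w != 0).reshape(4, 2, 8).sum(axis=-1))
```

The value of the uniform per-bank nonzero count is that a hardware accelerator can partition a row across parallel multiply-accumulate lanes without load imbalance, while the data-dependent choice of positions inside each bank preserves most of the accuracy benefit of unstructured pruning.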
Source Journal
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
CiteScore: 5.60
Self-citation rate: 13.80%
Articles published: 500
Review time: 7 months
Journal introduction: The purpose of this Transactions is to publish papers of interest to individuals in the area of computer-aided design of integrated circuits and systems composed of analog, digital, mixed-signal, optical, or microwave components. The aids include methods, models, algorithms, and man-machine interfaces for system-level, physical and logical design including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, hardware-software co-design and documentation of integrated circuit and system designs of all complexities. Design tools and techniques for evaluating and designing integrated circuits and systems for metrics such as performance, power, reliability, testability, and security are a focus.