CASCADE: A Framework for CNN Accelerator Synthesis With Concatenation and Refreshing Dataflow

IF 5.2 · JCR Q1 · Engineering, Electrical & Electronic
Qingyu Guo;Haoyang Luo;Meng Li;Xiyuan Tang;Yuan Wang
DOI: 10.1109/TCSI.2024.3452954
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Publication date: 2024-10-01
URL: https://ieeexplore.ieee.org/document/10701568/
Citations: 0

Abstract

CASCADE: A Framework for CNN Accelerator Synthesis With Concatenation and Refreshing Dataflow
Layer Pipeline (LP) represents an innovative architecture for neural network accelerators, which implements task-level pipelining at the granularity of layers. Despite improvements in throughput, LP architectures face challenges due to complicated dataflow design, intricate design space and high resource requirements. In this paper, we introduce an accelerator synthesis framework, CASCADE. CASCADE leverages a novel dataflow, CARD, to efficiently manage convolutional operations’ irregular memory access patterns using simplified logic and minimal buffers. It also employs advanced design space exploration methods to optimize unrolling parallelism and FIFO depth settings automatically for each layer. Finally, to further enhance resource efficiency, CASCADE leverages Lookup Table-based multiplication and accumulation units. With extensive experimental results, we demonstrate that CASCADE significantly outperforms existing works, achieving a $3\times $ improvement in resource efficiency and a $4\times $ improvement in power efficiency. It achieves over $1.5\times 10^{4}$ frames per second throughput and 71.9% accuracy on ImageNet.
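The abstract states that CASCADE uses lookup-table-based multiplication and accumulation units to improve resource efficiency, but gives no implementation details. As background only, the general idea of a LUT-based MAC can be sketched as follows: for low-bitwidth operands, every possible product is precomputed once and stored in a table, so each multiply becomes a lookup plus an add (on an FPGA, the table occupies LUTs/BRAM instead of DSP multipliers). The bit width and function names below are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of a LUT-based multiply-accumulate (MAC) unit.
# Assumes unsigned 4-bit activations and weights; this is a generic
# illustration of the technique, not CASCADE's actual unit.

BITS = 4
N = 1 << BITS  # 16 distinct operand values per side

# Precompute all 16x16 possible products once. In hardware this table
# replaces a DSP multiplier with cheap table lookups.
MUL_LUT = [[a * w for w in range(N)] for a in range(N)]

def lut_mac(activations, weights, acc=0):
    """Accumulate sum(a * w) using only table lookups and additions."""
    for a, w in zip(activations, weights):
        acc += MUL_LUT[a & (N - 1)][w & (N - 1)]
    return acc

# Example: dot product of two small 4-bit vectors.
result = lut_mac([1, 7, 15], [2, 3, 4])  # 1*2 + 7*3 + 15*4 = 83
```

The trade-off is classic: table size grows exponentially with operand bit width, so LUT-based MACs pay off mainly for heavily quantized networks such as the low-bitwidth models targeted by FPGA layer-pipeline accelerators.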
Source journal
IEEE Transactions on Circuits and Systems I: Regular Papers (Engineering: Electrical & Electronic)
CiteScore: 9.80
Self-citation rate: 11.80%
Articles per year: 441
Review time: 2 months
Journal description: TCAS I publishes regular papers in the field specified by the theory, analysis, design, and practical implementations of circuits, and the application of circuit techniques to systems and to signal processing. Included is the whole spectrum from basic scientific theory to industrial applications. The field of interest covered includes:
- Analog, Digital and Mixed-Signal Circuits and Systems
- Nonlinear Circuits and Systems, Integrated Sensors, MEMS and Systems on Chip, Nanoscale Circuits and Systems, Optoelectronic Circuits and Systems
- Power Electronics and Systems
- Software for Analog-and-Logic Circuits and Systems
- Control aspects of Circuits and Systems