Enhancing ConvNets With ConvFIFO: A Crossbar PIM Architecture Based on Kernel-Stationary First-In-First-Out Dataflow

IF 2.8 | CAS Tier 2 (Engineering & Technology) | JCR Q2 (Computer Science, Hardware & Architecture)
Yu Qian;Liang Zhao;Fanzi Meng;Xiapeng Xu;Cheng Zhuo;Xunzhao Yin
{"title":"用 ConvFIFO 增强 ConvNets:基于内核静态先进先出数据流的跨条 PIM 架构","authors":"Yu Qian;Liang Zhao;Fanzi Meng;Xiapeng Xu;Cheng Zhuo;Xunzhao Yin","doi":"10.1109/TVLSI.2024.3409648","DOIUrl":null,"url":null,"abstract":"Convolutional neural networks (ConvNets) have long been the model of choice for computer vision (CV) problems and gained renewed traction lately. In order to compute ConvNets more efficiently, process-in-memory (PIM) architectures based on emerging non-volatile memories (NVMs) such as RRAM have been widely studied. However, conventional NVM-based PIM suffered from various non-idealities including IR drop, sneak-path currents, large analog-to-digital converter (ADC) overhead, device variations, circuits mismatch, and error propagation. In this work, we propose ConvFIFO, a crossbar-memory-based PIM architecture for ConvNets featuring a kernel-stationary dataflow. Through the design of FIFO-type input and output buffers, smaller row-activation parallelism, and more compact ADCs, ConvFIFO can maximize the reuse rates of inputs and partial sums to achieve a more balanced trade-off among throughput, accuracy, and area/energy consumption. Using SRAM-based FIFO as the input/output buffer, ConvFIFO achieves a systolic architecture without the need to move weight data, bypassing the limitation of NVM endurance and minimizing the movement of partial sums. Moreover, the FIFO nature of the dataflow allows flexible pipeline design and load balancing. Compared to classical NVM-based PIM architectures such as ISAAC, ConvFIFO exhibits significant performance enhancement for various ConvNet models, showing 1.66–\n<inline-formula> <tex-math>$1.69\\times $ </tex-math></inline-formula>\n/1.69–\n<inline-formula> <tex-math>$1.74\\times $ </tex-math></inline-formula>\n/4.23–\n<inline-formula> <tex-math>$4.79\\times $ </tex-math></inline-formula>\n/1.59–\n<inline-formula> <tex-math>$1.74\\times $ </tex-math></inline-formula>\n improvement in terms of energy consumption, latency, Ops/W, and Ops/s\n<inline-formula> <tex-math>$\\times $ </tex-math></inline-formula>\nmm2, respectively. Compared to GPUs, ConvFIFO exhibits only an average accuracy loss of 1.82% during inference.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"32 9","pages":"1640-1651"},"PeriodicalIF":2.8000,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing ConvNets With ConvFIFO: A Crossbar PIM Architecture Based on Kernel-Stationary First-In-First-Out Dataflow\",\"authors\":\"Yu Qian;Liang Zhao;Fanzi Meng;Xiapeng Xu;Cheng Zhuo;Xunzhao Yin\",\"doi\":\"10.1109/TVLSI.2024.3409648\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional neural networks (ConvNets) have long been the model of choice for computer vision (CV) problems and gained renewed traction lately. In order to compute ConvNets more efficiently, process-in-memory (PIM) architectures based on emerging non-volatile memories (NVMs) such as RRAM have been widely studied. However, conventional NVM-based PIM suffered from various non-idealities including IR drop, sneak-path currents, large analog-to-digital converter (ADC) overhead, device variations, circuits mismatch, and error propagation. In this work, we propose ConvFIFO, a crossbar-memory-based PIM architecture for ConvNets featuring a kernel-stationary dataflow. 
Through the design of FIFO-type input and output buffers, smaller row-activation parallelism, and more compact ADCs, ConvFIFO can maximize the reuse rates of inputs and partial sums to achieve a more balanced trade-off among throughput, accuracy, and area/energy consumption. Using SRAM-based FIFO as the input/output buffer, ConvFIFO achieves a systolic architecture without the need to move weight data, bypassing the limitation of NVM endurance and minimizing the movement of partial sums. Moreover, the FIFO nature of the dataflow allows flexible pipeline design and load balancing. Compared to classical NVM-based PIM architectures such as ISAAC, ConvFIFO exhibits significant performance enhancement for various ConvNet models, showing 1.66–\\n<inline-formula> <tex-math>$1.69\\\\times $ </tex-math></inline-formula>\\n/1.69–\\n<inline-formula> <tex-math>$1.74\\\\times $ </tex-math></inline-formula>\\n/4.23–\\n<inline-formula> <tex-math>$4.79\\\\times $ </tex-math></inline-formula>\\n/1.59–\\n<inline-formula> <tex-math>$1.74\\\\times $ </tex-math></inline-formula>\\n improvement in terms of energy consumption, latency, Ops/W, and Ops/s\\n<inline-formula> <tex-math>$\\\\times $ </tex-math></inline-formula>\\nmm2, respectively. Compared to GPUs, ConvFIFO exhibits only an average accuracy loss of 1.82% during inference.\",\"PeriodicalId\":13425,\"journal\":{\"name\":\"IEEE Transactions on Very Large Scale Integration (VLSI) Systems\",\"volume\":\"32 9\",\"pages\":\"1640-1651\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Very Large Scale Integration (VLSI) Systems\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10559950/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10559950/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Convolutional neural networks (ConvNets) have long been the model of choice for computer vision (CV) problems and have gained renewed traction lately. To compute ConvNets more efficiently, process-in-memory (PIM) architectures based on emerging non-volatile memories (NVMs) such as RRAM have been widely studied. However, conventional NVM-based PIM suffers from various non-idealities, including IR drop, sneak-path currents, large analog-to-digital converter (ADC) overhead, device variations, circuit mismatch, and error propagation. In this work, we propose ConvFIFO, a crossbar-memory-based PIM architecture for ConvNets featuring a kernel-stationary dataflow. Through the design of FIFO-type input and output buffers, smaller row-activation parallelism, and more compact ADCs, ConvFIFO maximizes the reuse rates of inputs and partial sums to achieve a more balanced trade-off among throughput, accuracy, and area/energy consumption. Using SRAM-based FIFOs as the input/output buffers, ConvFIFO achieves a systolic architecture without the need to move weight data, bypassing the limitation of NVM endurance and minimizing the movement of partial sums. Moreover, the FIFO nature of the dataflow allows flexible pipeline design and load balancing. Compared to classical NVM-based PIM architectures such as ISAAC, ConvFIFO exhibits significant performance enhancement for various ConvNet models, showing 1.66–1.69×, 1.69–1.74×, 4.23–4.79×, and 1.59–1.74× improvements in energy consumption, latency, Ops/W, and Ops/s × mm², respectively. Compared to GPUs, ConvFIFO exhibits only an average accuracy loss of 1.82% during inference.
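As a rough, purely illustrative sketch of the kernel-stationary idea described in the abstract (not the paper's implementation; the 1-D setting, function name, and simplified FIFO model are assumptions for illustration): the weights stay fixed, as if programmed once into a crossbar, while inputs stream through an input FIFO and results accumulate in an output FIFO.

```python
import numpy as np
from collections import deque

def kernel_stationary_conv1d(inputs, kernel):
    """Toy 1-D convolution with a kernel-stationary dataflow.

    The kernel stays fixed, mimicking weights programmed once into an NVM
    crossbar; inputs stream through an input FIFO (a sliding window), and
    partial results are collected in an output FIFO rather than being
    written back to the weight array.
    """
    k = len(kernel)
    in_fifo = deque(maxlen=k)   # sliding window of streamed inputs
    out_fifo = deque()          # accumulated outputs, drained in arrival order
    for x in inputs:            # one input element arrives per "cycle"
        in_fifo.append(x)
        if len(in_fifo) == k:
            # one crossbar "read": dot-product of the stationary kernel
            # with the current input window
            out_fifo.append(float(np.dot(kernel, list(in_fifo))))
    return list(out_fifo)

# Example: equals np.convolve(inputs, kernel[::-1], mode="valid")
print(kernel_stationary_conv1d([1, 2, 3, 4, 5], [0.5, 1.0, 0.5]))  # [4.0, 6.0, 8.0]
```

In the actual architecture the stationary weights reside in NVM crossbar columns and the dot-product is an analog read followed by ADC conversion; the sketch only captures the input/partial-sum reuse pattern, not the circuit behavior.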
Source journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
CiteScore: 6.40
Self-citation rate: 7.10%
Articles per year: 187
Review time: 3.6 months
Journal description: The IEEE Transactions on VLSI Systems is published as a monthly journal under the co-sponsorship of the IEEE Circuits and Systems Society, the IEEE Computer Society, and the IEEE Solid-State Circuits Society. Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chips and wafer fabrication, packaging, testing and systems applications. Generation of specifications, design and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor and process levels. To address this critical area through a common forum, the IEEE Transactions on VLSI Systems has been founded. The editorial board, consisting of international experts, invites original papers which emphasize and merit the novel systems integration aspects of microelectronic systems, including interactions among systems design and partitioning, logic and memory design, digital and analog circuit design, layout synthesis, CAD tools, chips and wafer fabrication, testing and packaging, and systems level qualification. Thus, the coverage of these Transactions will focus on VLSI/ULSI microelectronic systems integration.