Deep Learning Acceleration using Digital-Based Processing In-Memory

M. Imani, Saransh Gupta, Yeseong Kim, T. Simunic
{"title":"Deep Learning Acceleration using Digital-Based Processing In-Memory","authors":"M. Imani, Saransh Gupta, Yeseong Kim, T. Simunic","doi":"10.1109/socc49529.2020.9524776","DOIUrl":null,"url":null,"abstract":"Processing In-Memory (PIM) has shown a great potential to accelerate inference tasks of Convolutional Neural Network (CNN). However, existing PIM architectures do not support high precision computation, e.g., in floating point precision, which is essential for training accurate CNN models. In addition, most of the existing PIM approaches require analog/mixed-signal circuits, which do not scale, exploiting insufficiently reliable multi-bit Non-Volatile Memory (NVM). In this paper, we propose FloatPIM, a fully-digital scalable PIM architecture that accelerates CNN in both training and testing phases. FloatPIM natively supports floating-point representation, thus enabling accurate CNN training. FloatPIM also enables fast communication between neighboring memory blocks to reduce internal data movement of the PIM architecture. We break the CNN computation into computing and data transfer modes. In computing mode, all blocks are processing a part of CNN training/testing in parallel, while in data transfer mode Float-PIM enables fast and row-parallel communication between the neighbor blocks. 
Our evaluation shows that FloatPIM training is on average 303.2 and 48.6 (4.3x and 15.8x) faster and more energy efficient as compared to GTX 1080 GPU (PipeLayer [1] PIM accelerator).","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/socc49529.2020.9524776","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Processing In-Memory (PIM) has shown great potential for accelerating the inference tasks of Convolutional Neural Networks (CNNs). However, existing PIM architectures do not support high-precision computation, e.g., floating-point arithmetic, which is essential for training accurate CNN models. In addition, most existing PIM approaches require analog/mixed-signal circuits, which do not scale, and rely on insufficiently reliable multi-bit Non-Volatile Memory (NVM). In this paper, we propose FloatPIM, a fully digital, scalable PIM architecture that accelerates CNNs in both the training and testing phases. FloatPIM natively supports floating-point representation, thus enabling accurate CNN training. FloatPIM also enables fast communication between neighboring memory blocks to reduce internal data movement within the PIM architecture. We break the CNN computation into a computing mode and a data transfer mode. In computing mode, all blocks process a part of CNN training/testing in parallel, while in data transfer mode FloatPIM enables fast, row-parallel communication between neighboring blocks. Our evaluation shows that FloatPIM training is on average 303.2× and 48.6× faster and more energy efficient, respectively, than a GTX 1080 GPU, and 4.3× and 15.8× faster and more energy efficient than the PipeLayer [1] PIM accelerator.
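The abstract's claim of "native floating-point support" in a fully digital PIM fabric rests on the fact that a floating-point multiply can be decomposed into purely integer operations: a fixed-point mantissa multiplication plus an exponent addition, both of which are realizable with bitwise digital in-memory primitives. The sketch below illustrates that decomposition in plain Python; it is only a conceptual model under that assumption, not the paper's actual circuit-level implementation, and the 24-bit mantissa width is an illustrative choice.

```python
import math

def decompose(x: float):
    """Split a float into (fixed-point mantissa, exponent), mimicking
    IEEE-754-style normalization: x == mant * 2**exp."""
    m, e = math.frexp(x)        # x = m * 2**e, with 0.5 <= |m| < 1
    mant = int(m * (1 << 24))   # 24-bit fixed-point mantissa (illustrative width)
    return mant, e - 24

def fp_multiply(a: float, b: float) -> float:
    """Floating-point multiply expressed as integer-only work:
    an integer mantissa multiply and an integer exponent add."""
    ma, ea = decompose(a)
    mb, eb = decompose(b)
    # In a digital PIM view: the mantissa product is a fixed-point (integer)
    # operation and the exponent handling is a small integer addition.
    return (ma * mb) * 2.0 ** (ea + eb)

print(fp_multiply(1.5, 2.25))  # 3.375
```

Because both sub-operations are integer-valued, they avoid the analog/mixed-signal circuitry and multi-bit NVM cells that the abstract identifies as the scalability bottleneck of prior PIM designs.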