Accelerating Deep Neural Networks in Processing-in-Memory Platforms: Analog or Digital Approach?

Shaahin Angizi, Zhezhi He, D. Reis, X. Hu, Wilman Tsai, Shy-Jay Lin, Deliang Fan
{"title":"Accelerating Deep Neural Networks in Processing-in-Memory Platforms: Analog or Digital Approach?","authors":"Shaahin Angizi, Zhezhi He, D. Reis, X. Hu, Wilman Tsai, Shy-Jay Lin, Deliang Fan","doi":"10.1109/ISVLSI.2019.00044","DOIUrl":null,"url":null,"abstract":"Nowadays, research topics on AI accelerator designs have attracted great interest, where accelerating Deep Neural Network (DNN) using Processing-in-Memory (PIM) platforms is an actively-explored direction with great potential. PIM platforms, which simultaneously aims to address power- and memory-wall bottlenecks, have shown orders of performance enhancement in comparison to the conventional computing platforms with Von-Neumann architecture. As one direction of accelerating DNN in PIM, resistive memory array (aka. crossbar) has drawn great research interest owing to its analog current-mode weighted summation operation which intrinsically matches the dominant Multiplication-and-Accumulation (MAC) operation in DNN, making it one of the most promising candidates. An alternative direction for PIM-based DNN acceleration is through bulk bit-wise logic operations directly performed on the content in digital memories. Thanks to the high fault-tolerant characteristic of DNN, the latest algorithmic progression successfully quantized DNN parameters to low bit-width representations, while maintaining competitive accuracy levels. Such DNN quantization techniques essentially convert MAC operation to much simpler addition/subtraction or comparison operations, which can be performed by bulk bit-wise logic operations in a highly parallel fashion. In this paper, we build a comprehensive evaluation framework to quantitatively compare and analyze aforementioned PIM based analog and digital approaches for DNN acceleration.","PeriodicalId":6703,"journal":{"name":"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","volume":"79 1","pages":"197-202"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"26","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISVLSI.2019.00044","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 26

Abstract

Research on AI accelerator design has attracted great interest in recent years, and accelerating Deep Neural Networks (DNNs) on Processing-in-Memory (PIM) platforms is an actively explored direction with great potential. PIM platforms, which simultaneously aim to address the power- and memory-wall bottlenecks, have shown orders-of-magnitude performance improvements over conventional computing platforms built on the Von Neumann architecture. As one direction for accelerating DNNs in PIM, the resistive memory array (a.k.a. crossbar) has drawn great research interest: its analog current-mode weighted summation intrinsically matches the dominant Multiplication-and-Accumulation (MAC) operation in DNNs, making it one of the most promising candidates. An alternative direction for PIM-based DNN acceleration is bulk bit-wise logic operations performed directly on the contents of digital memories. Thanks to the high fault tolerance of DNNs, recent algorithmic advances have successfully quantized DNN parameters to low bit-width representations while maintaining competitive accuracy. Such DNN quantization techniques essentially convert the MAC operation into much simpler addition/subtraction or comparison operations, which can be performed by bulk bit-wise logic operations in a highly parallel fashion. In this paper, we build a comprehensive evaluation framework to quantitatively compare and analyze the aforementioned analog and digital PIM approaches for DNN acceleration.
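To make the analog direction concrete, the sketch below (not from the paper; all array sizes and values are hypothetical) models a crossbar's weighted summation: weights are stored as cell conductances, inputs are applied as row voltages, and Kirchhoff's current law sums the per-cell currents on each column, so I_j = Σ_i V_i · G_ij, i.e. a full matrix-vector MAC in a single analog step.

```python
import numpy as np

# Minimal sketch of an analog crossbar MAC (all values hypothetical).
# Weights are stored as cell conductances G (siemens); an input vector
# is applied as row voltages V (volts). By Ohm's law each cell passes
# current V[i] * G[i, j], and Kirchhoff's current law sums a column:
# I[j] = sum_i V[i] * G[i, j] -- a matrix-vector product in one step.

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3                            # crossbar dimensions
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))    # conductances (S)
V = rng.uniform(0.0, 0.5, n_rows)                # input voltages (V)

I = V @ G                                        # column currents (A)

# Reference: the explicit MAC loop the crossbar replaces.
I_ref = np.array([sum(V[i] * G[i, j] for i in range(n_rows))
                  for j in range(n_cols)])
assert np.allclose(I, I_ref)
print(I)
```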
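For the digital direction, the following sketch (again illustrative, not the paper's implementation) shows how a fully binarized dot product, with weights and activations in {-1, +1} packed one bit per element, reduces to an XNOR followed by a population count, the kind of bulk bit-wise operation a digital PIM substrate can execute on whole memory rows in parallel.

```python
# Minimal sketch of a binarized MAC via bulk bit-wise logic
# (XNOR + popcount). Encoding: bit 1 <-> +1, bit 0 <-> -1.
# The packed operand values below are hypothetical.

N = 64                                     # vector length (one memory word)
MASK = (1 << N) - 1

a = 0x0123456789ABCDEF                     # packed activations
w = 0xFEDCBA9876543210                     # packed weights

# XNOR marks positions where activation and weight agree (+1 product).
agree = ~(a ^ w) & MASK
popcnt = bin(agree).count("1")

# Dot product over {-1, +1}: (#agreements) - (#disagreements).
dot = 2 * popcnt - N

# Reference computation with unpacked +/-1 vectors.
unpack = lambda x: [1 if (x >> i) & 1 else -1 for i in range(N)]
assert dot == sum(ai * wi for ai, wi in zip(unpack(a), unpack(w)))
print(dot)
```

With dot = 2·popcount(XNOR(a, w)) − N, a 64-element MAC costs one word-wide logic operation plus a bit count, which is why quantization makes the digital PIM route competitive with the analog one.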