Hardware/Software Co-Design With ADC-Less In-Memory Computing Hardware for Spiking Neural Networks

IF 5.1 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Marco Paul E. Apolinario;Adarsh Kumar Kosta;Utkarsh Saxena;Kaushik Roy
DOI: 10.1109/TETC.2023.3316121
Journal: IEEE Transactions on Emerging Topics in Computing
Published: 2023-09-22 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10260275/
Citations: 0

Abstract

Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices. However, commercial edge platforms based on standard GPUs are not optimized to deploy SNNs, resulting in high energy and latency. While analog In-Memory Computing (IMC) platforms can serve as energy-efficient inference engines, they are hampered by the immense energy, latency, and area requirements of high-precision ADCs (HP-ADC), overshadowing the benefits of in-memory computations. We propose a hardware/software co-design methodology to deploy SNNs into an ADC-Less IMC architecture that uses sense amplifiers as 1-bit ADCs, replacing conventional HP-ADCs and alleviating the above issues. Our proposed framework incurs minimal accuracy degradation by performing hardware-aware training and is able to scale beyond simple image classification tasks to more complex sequential regression tasks. Experiments on the complex tasks of optical flow estimation and gesture recognition show that progressively increasing the hardware awareness during SNN training allows the model to adapt to and learn the errors due to the non-idealities associated with ADC-Less IMC. Also, the proposed ADC-Less IMC offers significant energy and latency improvements, 2–7× and 8.9–24.6× respectively, depending on the SNN model and the workload, compared to HP-ADC IMC.
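The core readout idea in the abstract — sense amplifiers acting as 1-bit ADCs on crossbar bitlines, so each analog column sum collapses to a single bit — can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the `sense_amp_readout` function, the zero threshold, and the per-array column tiling are all assumptions introduced here.

```python
import numpy as np

def sense_amp_readout(partial_sums, threshold=0.0):
    """Hypothetical 1-bit 'ADC': a sense amplifier only reports whether
    each bitline's analog partial sum crosses a threshold."""
    return (partial_sums > threshold).astype(np.float32)

def adc_less_crossbar_mvm(spikes, weights, cols_per_array=64):
    """Binary input spikes drive crossbar wordlines; each array's analog
    column sums are collapsed to 1 bit instead of being digitized by a
    high-precision ADC (assumed tiling: cols_per_array columns per array)."""
    outputs = []
    for start in range(0, weights.shape[1], cols_per_array):
        w = weights[:, start:start + cols_per_array]
        analog = spikes @ w                    # in-memory analog MAC
        outputs.append(sense_amp_readout(analog))
    return np.concatenate(outputs, axis=-1)

rng = np.random.default_rng(0)
spikes = (rng.random(128) < 0.2).astype(np.float32)       # sparse spike vector
weights = rng.standard_normal((128, 256)).astype(np.float32)
out = adc_less_crossbar_mvm(spikes, weights)
print(out.shape, np.unique(out))              # outputs are strictly binary
```

Because the readout is binary, downstream SNN layers only ever see 0/1 activations; the hardware-aware training described in the abstract would expose the network to exactly this quantized behavior so it can compensate for the information lost versus an HP-ADC readout.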
Source journal
IEEE Transactions on Emerging Topics in Computing
Category: Computer Science – Computer Science (miscellaneous)
CiteScore: 12.10
Self-citation rate: 5.10%
Articles per year: 113
Journal description: IEEE Transactions on Emerging Topics in Computing publishes papers on emerging aspects of computer science, computing technology, and computing applications not currently covered by other IEEE Computer Society Transactions. Some examples of emerging topics in computing include: IT for Green, synthetic and organic computing structures and systems, advanced analytics, social/occupational computing, location-based/client computer systems, morphic computer design, electronic game systems, and health-care IT.