Sparsity-Aware Non-Volatile Computing-In-Memory Macro with Analog Switch Array and Low-Resolution Current-Mode ADC

Yuxuan Huang, Yifan He, Jinshan Yue, Wenyu Sun, Huazhong Yang, Yongpan Liu
DOI: 10.1109/ASP-DAC52403.2022.9712556
Published in: 2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)
Publication date: 2022-01-17
Citations: 2

Abstract

Non-volatile computing-in-memory (nvCIM) is a novel architecture for deep neural networks (DNNs) because it reduces data movement between computing units and memory units. While sparsity techniques have advanced considerably in DNNs, existing nvCIM architectures are optimized only for structured sparsity and do little for unstructured sparsity. To solve this problem, a sparsity-aware nvCIM macro is proposed that improves computing performance and network classification accuracy while supporting both structured and unstructured sparsity. First, an analog switch array exploits structured sparsity and improves computing parallelism. Second, a low-resolution current-mode analog-to-digital converter (CMADC) is designed to exploit unstructured sparsity. Experimental results show that the peak equivalent energy efficiency of the proposed nvCIM macro is 9.1 TOPS/W (A8W8, 8-bit activations and 8-bit weights) with only 0.51% accuracy loss, and 584.9 TOPS/W (A1W1), which is 4.8-7.5× higher than state-of-the-art nvCIM macros.
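The distinction between the two sparsity types the abstract draws on can be made concrete with a small sketch. This is a generic illustration, not the paper's implementation: structured sparsity zeroes whole columns of a weight matrix (which a switch array can skip wholesale), while unstructured sparsity scatters individual zeros (which only finer-grained handling, such as a reduced-resolution readout, can exploit). The matrix size, mask pattern, and 50% density are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 8))

# Structured sparsity: entire columns pruned. An analog switch array
# can disconnect these columns and raise effective parallelism.
structured_mask = np.ones((8, 8))
structured_mask[:, [1, 4, 6]] = 0.0

# Unstructured sparsity: zeros scattered at random positions. These
# cannot be skipped by column switching; they instead shrink the
# accumulated bitline current, which a low-resolution ADC can exploit.
unstructured_mask = (rng.random((8, 8)) > 0.5).astype(float)

structured = weights * structured_mask
unstructured = weights * unstructured_mask

# Fraction of all-zero columns vs. fraction of zero elements.
print("structured column sparsity:", np.mean(np.all(structured == 0, axis=0)))
print("unstructured element sparsity:", np.mean(unstructured == 0))
```

As a sanity check on the reported numbers: the A1W1 figure (584.9 TOPS/W) is roughly 64× the A8W8 figure (9.1 TOPS/W), consistent with the 8×8 reduction in activation and weight bit widths for equivalent-operation counting.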