Considerations of Integrating Computing-In-Memory and Processing-In-Sensor into Convolutional Neural Network Accelerators for Low-Power Edge Devices

K. Tang, Wei-Chen Wei, Zuo-Wei Yeh, Tzu-Hsiang Hsu, Yen-Cheng Chiu, Cheng-Xin Xue, Yu-Chun Kuo, Tai-Hsing We, M. Ho, C. Lo, Ren-Shuo Liu, C. Hsieh, Meng-Fan Chang
{"title":"Considerations of Integrating Computing-In-Memory and Processing-In-Sensor into Convolutional Neural Network Accelerators for Low-Power Edge Devices","authors":"K. Tang, Wei-Chen Wei, Zuo-Wei Yeh, Tzu-Hsiang Hsu, Yen-Cheng Chiu, Cheng-Xin Xue, Yu-Chun Kuo, Tai-Hsing We, M. Ho, C. Lo, Ren-Shuo Liu, C. Hsieh, Meng-Fan Chang","doi":"10.23919/VLSIT.2019.8776560","DOIUrl":null,"url":null,"abstract":"In quest to execute emerging deep learning algorithms at edge devices, developing low-power and low-latency deep learning accelerators (DLAs) have become top priority. To achieve this goal, data processing techniques in sensor and memory utilizing the array structure have drawn much attention. Processing-in-sensor (PIS) solutions could reduce data transfer; computing-in-memory (CIM) macros could reduce memory access and intermediate data movement. We propose a new architecture to integrate PIS and CIM to realize low-power DLA. The advantages of using these techniques and the challenges from system point-of-view are discussed.","PeriodicalId":6752,"journal":{"name":"2019 Symposium on VLSI Technology","volume":"33 1","pages":"T166-T167"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Symposium on VLSI Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/VLSIT.2019.8776560","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

In the quest to execute emerging deep learning algorithms on edge devices, developing low-power, low-latency deep learning accelerators (DLAs) has become a top priority. To achieve this goal, data-processing techniques that exploit the array structures of sensors and memories have drawn much attention. Processing-in-sensor (PIS) solutions can reduce data transfer, and computing-in-memory (CIM) macros can reduce memory accesses and intermediate data movement. We propose a new architecture that integrates PIS and CIM to realize a low-power DLA. The advantages of these techniques, and the challenges they raise from a system point of view, are discussed.
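The abstract's central argument is that PIS reduces traffic across the sensor interface while CIM reduces traffic across the memory interface. To make this concrete, the sketch below gives a rough per-frame count of values moved under three dataflows. It is a hypothetical back-of-envelope model: the layer dimensions, the stride/pooling choice, and the traffic accounting are illustrative assumptions, not figures from the paper.

```python
# Minimal back-of-envelope model of data movement for a CNN first layer
# under three dataflows. All layer dimensions, the stride/pooling choice,
# and the traffic accounting are illustrative assumptions, not figures
# from the paper.

H, W, C_in = 224, 224, 3          # sensor resolution, RGB
K, stride, pool = 3, 2, 2         # 3x3 conv, stride 2, 2x2 pooling
C_out = 16                        # output channels of the first layer

conv_hw = (H - K) // stride + 1          # 111: conv output height/width
out_hw  = conv_hw // pool                # 55: after pooling

pixels   = H * W * C_in                  # 150,528 raw pixel values
weights  = K * K * C_in * C_out          # 432 first-layer weights
conv_map = conv_hw * conv_hw * C_out     # 197,136 pre-pool activations
out_map  = out_hw * out_hw * C_out       # 48,400 pooled activations

# Values crossing the sensor interface and the memory/macro interface.
traffic = {
    # Conventional: pixels leave the sensor; pixels, weights, the
    # intermediate map (written, then re-read for pooling) and the pooled
    # output all cross the memory interface.
    "conventional": (pixels, pixels + weights + 2 * conv_map + out_map),
    # CIM only: weights stay resident in the macro and partial sums are
    # reduced in-array, so only activations cross the macro boundary.
    "CIM":          (pixels, pixels + out_map),
    # PIS + CIM: the first layer (conv + pool) is computed at the sensor,
    # so only the small pooled feature map leaves the sensor at all.
    "PIS+CIM":      (out_map, out_map),
}

for name, (sensor_if, memory_if) in traffic.items():
    print(f"{name:>12}: sensor IF {sensor_if:>8,}  memory IF {memory_if:>8,}")
```

Under these assumptions, PIS+CIM moves roughly 3x fewer values across the sensor interface than a conventional pipeline (48,400 vs. 150,528) and avoids nearly all memory-interface traffic for the first layer, mirroring the qualitative argument in the abstract.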