BRein memory: A 13-layer 4.2 K neuron/0.8 M synapse binary/ternary reconfigurable in-memory deep neural network accelerator in 65 nm CMOS

Kota Ando, Kodai Ueyoshi, Kentaro Orimo, H. Yonekawa, Shimpei Sato, Hiroki Nakahara, M. Ikebe, T. Asai, Shinya Takamaeda-Yamazaki, T. Kuroda, M. Motomura
{"title":"BRein memory: A 13-layer 4.2 K neuron/0.8 M synapse binary/ternary reconfigurable in-memory deep neural network accelerator in 65 nm CMOS","authors":"Kota Ando, Kodai Ueyoshi, Kentaro Orimo, H. Yonekawa, Shimpei Sato, Hiroki Nakahara, M. Ikebe, T. Asai, Shinya Takamaeda-Yamazaki, T. Kuroda, M. Motomura","doi":"10.23919/VLSIC.2017.8008533","DOIUrl":null,"url":null,"abstract":"A versatile reconfigurable accelerator for binary/ternary deep neural networks (DNNs) is presented. It features a massively parallel in-memory processing architecture and stores varieties of binary/ternary DNNs with a maximum of 13 layers, 4.2 K neurons, and 0.8 M synapses on chip. The 0.6 W, 1.4 TOPS chip achieves performance and energy efficiency that is 10–10<sup>2</sup> and 10<sup>2</sup>–10<sup>4</sup> times better than a CPU/GPU/FPGA.","PeriodicalId":176340,"journal":{"name":"2017 Symposium on VLSI Circuits","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"72","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Symposium on VLSI Circuits","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/VLSIC.2017.8008533","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 72

Abstract

A versatile reconfigurable accelerator for binary/ternary deep neural networks (DNNs) is presented. It features a massively parallel in-memory processing architecture and stores a variety of binary/ternary DNNs with a maximum of 13 layers, 4.2 K neurons, and 0.8 M synapses on chip. The 0.6 W, 1.4 TOPS chip achieves performance and energy efficiency that are 10–10² and 10²–10⁴ times better than a CPU/GPU/FPGA, respectively.
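To illustrate the kind of arithmetic such binary/ternary accelerators implement, the following is a minimal sketch, not taken from the paper: with weights and activations constrained to {-1, +1}, a neuron's dot product reduces to integer accumulation (realized in hardware as XNOR-and-popcount), followed by a threshold and re-binarization. All function names and layer sizes below are illustrative assumptions, not the chip's actual configuration.

```python
# Minimal sketch of one binary fully connected layer (assumed, not the paper's design).
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} by sign (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_layer(activations, weights, thresholds):
    """+/-1 inputs and weights, integer accumulation, then threshold and re-binarize."""
    acc = weights.astype(np.int32) @ activations.astype(np.int32)  # range [-fan_in, +fan_in]
    return binarize(acc - thresholds)                              # next layer's +/-1 activations

# Usage with random data (hypothetical sizes, far smaller than the chip's 4.2 K neurons).
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(256))          # 256 binary inputs
W = binarize(rng.standard_normal((128, 256)))   # 128 neurons x 256 synapses
t = np.zeros(128, dtype=np.int32)               # per-neuron thresholds (e.g., folded batch norm)
out = binary_layer(a, W, t)
print(out.shape, np.unique(out))                # (128,) [-1  1]
```

Because each synapse contributes only ±1, the multiply-accumulate collapses to bitwise logic plus a popcount, which is what makes a dense in-memory implementation of 0.8 M synapses feasible at sub-watt power.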