Binarized Neural-Network Parallel-Processing Accelerator Macro Designed for an Energy Efficiency Higher Than 100 TOPS/W

Impact Factor 2.0 | Q3 | Computer Science, Hardware & Architecture
Yusaku Shiotsu; Satoshi Sugahara
{"title":"Binarized Neural-Network Parallel-Processing Accelerator Macro Designed for an Energy Efficiency Higher Than 100 TOPS/W","authors":"Yusaku Shiotsu;Satoshi Sugahara","doi":"10.1109/JXCDC.2025.3538702","DOIUrl":null,"url":null,"abstract":"A binarized neural-network (BNN) accelerator macro is developed based on a processing-in-memory (PIM) architecture having the ability of eight-parallel multiply-accumulate (MAC) processing. The parallel-processing PIM macro, referred to as a PPIM macro, is designed to perform the parallel processing with no use of multiport SRAM cells and to achieve the energy minimum point (EMP) operation for inference. The proposed memory array in the PPIM macro is configured with single-port Schmitt-trigger-type cells just by adding multiple bit lines with spatial address mapping modulation, resulting in a highly area-efficient cell array. The EMP operation of the developed PPIM macro can maximize the energy efficiency. As a result, an energy efficiency higher than 100 tera-operations-per-second per Watt (TOPS/W) can be achieved at around the EMP voltage. The EMP operation is also beneficial for enhancing the processing performance [measured in units of tera-operations per second (TOPS)] of the macro. The performance of fully connected-layer (FCL) networks configured with a multiple of the PPIM macro is also demonstrated.","PeriodicalId":54149,"journal":{"name":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","volume":"11 ","pages":"25-33"},"PeriodicalIF":2.0000,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10870055","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10870055/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

A binarized neural-network (BNN) accelerator macro is developed based on a processing-in-memory (PIM) architecture capable of eight-way parallel multiply-accumulate (MAC) processing. The parallel-processing PIM macro, referred to as a PPIM macro, is designed to perform this parallel processing without multiport SRAM cells and to operate at the energy minimum point (EMP) during inference. The memory array of the PPIM macro is built from single-port Schmitt-trigger-type cells simply by adding multiple bit lines with spatial address-mapping modulation, resulting in a highly area-efficient cell array. EMP operation of the developed PPIM macro maximizes its energy efficiency: an energy efficiency higher than 100 tera-operations per second per watt (TOPS/W) is achieved around the EMP voltage. EMP operation is also beneficial for enhancing the processing performance [measured in tera-operations per second (TOPS)] of the macro. The performance of fully connected-layer (FCL) networks built from multiple PPIM macros is also demonstrated.
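The paper itself provides no code, but a minimal sketch may help fix the two quantities the abstract turns on: the binarized MAC operation (commonly realized as XNOR plus popcount in BNN PIM designs) and the TOPS/W figure of merit. All function names, the XNOR-popcount mapping, and the operating-point numbers below are illustrative assumptions, not values or methods taken from the paper.

```python
# Minimal sketch, assuming a standard XNOR-popcount formulation of the
# binarized MAC and a generic TOPS/W figure of merit. None of the names or
# numbers below come from the paper; they only illustrate the arithmetic.

def bnn_mac(activations, weights):
    """Binarized MAC: +1/-1 operands encoded as bits 1/0.

    With this encoding, multiplication reduces to XNOR and accumulation to a
    popcount, which is the form of MAC that BNN PIM macros typically evaluate
    along their bit lines (assumed here, not stated in the abstract).
    """
    assert len(activations) == len(weights)
    n = len(activations)
    matches = sum(a == w for a, w in zip(activations, weights))  # XNOR popcount
    return 2 * matches - n  # map the popcount back to a signed sum in [-n, n]


def tops_per_watt(ops_per_cycle, clock_hz, power_w):
    """Energy efficiency in TOPS/W: (ops/cycle * cycles/s) / (1e12 * watts)."""
    return ops_per_cycle * clock_hz / (1e12 * power_w)


if __name__ == "__main__":
    x = [1, 0, 1, 1, 0, 0, 1, 0]  # binarized activations (+1 -> 1, -1 -> 0)
    w = [1, 1, 1, 0, 0, 1, 1, 0]  # binarized weights
    print("binarized MAC result:", bnn_mac(x, w))

    # Placeholder operating point chosen only to show how >100 TOPS/W can
    # arise from modest clock rates at a low (near-EMP) supply voltage; these
    # are NOT measured values from the paper.
    print("energy efficiency [TOPS/W]:",
          tops_per_watt(ops_per_cycle=2 * 8 * 256,  # 8 parallel MAC columns over
                                                    # 256-input dot products,
                                                    # 2 ops per MAC (assumed)
                        clock_hz=50e6,              # assumed clock frequency
                        power_w=2e-3))              # assumed macro power
```

The sketch only fixes notation; the contribution described in the abstract is the array organization (single-port Schmitt-trigger cells with added bit lines and address-mapping modulation) and the EMP operation that make such operations cheap.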
Source Journal
CiteScore: 5.00
Self-citation rate: 4.20%
Articles published: 11
Review time: 13 weeks