Energy-Efficient CNN Accelerator Using Voltage-Gated DSHE-MRAM

Impact Factor: 2.9 | CAS Division 2 (Engineering & Technology) | JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Gaurav Verma, Sandeep Soni, Arshid Nisar, Seema Dhull, Brajesh Kumar Kaushik
DOI: 10.1109/TED.2025.3537592
Journal: IEEE Transactions on Electron Devices, vol. 72, no. 4, pp. 1715-1722
Published: 2025-02-11
URL: https://ieeexplore.ieee.org/document/10880481/
Citations: 0

Abstract

Modern convolutional neural network (CNN) architectures require millions of parameters to be stored and computed in hardware for image classification, which demands a large amount of memory and power in conventional hardware accelerator architectures. Neuromorphic and in-memory computing (IMC) with emerging memory technologies has enabled energy-efficient computation for hardware accelerators. This work presents an accelerator based on a voltage-gated dual-bit spin Hall effect (VG-DSHE) magnetic random access memory (MRAM) device. The VG-DSHE-MRAM offers better power efficiency and speed than other MRAM devices. A crossbar array is implemented using VG-DSHE devices to exploit high-density storage, energy efficiency, and fast multiply-and-accumulate (MAC) computation. Finally, a complete hardware implementation of a CNN architecture is presented for image classification on the CIFAR-10 dataset without any significant accuracy degradation. The proposed VG-DSHE-based CNN accelerator is $2.1\times$ more energy-efficient than conventional differential spin Hall effect (DSHE)-based designs and achieves throughput efficiencies of 1.57, 0.49, 0.035, and 0.018 TSOPS/W for the VGG8, VGG16, AlexNet, and ResNet18 architectures, respectively. Furthermore, the proposed accelerator is compared with other emerging memories for the AlexNet architecture, showing $181\times$, $14.76\times$, $2.4\times$, $2.25\times$, and $2.1\times$ improvements in crossbar power consumption over phase change memory (PCM), resistive random access memory (RRAM), spin transfer torque (STT), spin-orbit torque (SOT), and DSHE, respectively.
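The abstract's core idea, MAC computation inside a crossbar array, can be illustrated with a minimal numerical sketch. In an idealized crossbar, each cell's conductance encodes a quantized CNN weight, input activations are applied as word-line voltages, and by Ohm's and Kirchhoff's laws each bit line collects a current equal to one dot product in a single step. The function and values below are illustrative assumptions, not the paper's device model (which would include the VG-DSHE cell's dual-bit states and non-idealities).

```python
def crossbar_mac(voltages, conductances):
    """Ideal crossbar MAC: bit-line current I[j] = sum_i V[i] * G[i][j].

    voltages:      word-line read voltages, one per row (volts)
    conductances:  rows x columns cell conductances (siemens)
    returns:       one collected current per bit line (amperes)
    """
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

# Toy 2x3 array: two inputs, three output columns.
G = [[1e-6, 2e-6, 0.0],
     [3e-6, 1e-6, 4e-6]]   # hypothetical weight conductances
V = [0.1, 0.2]             # hypothetical read voltages

I = crossbar_mac(V, G)     # three dot products computed "in memory"
```

A digital accelerator would need one multiply and one add per weight here; the crossbar produces all column sums concurrently in the analog domain, which is the source of the energy and speed advantage the abstract quantifies.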
Source Journal
IEEE Transactions on Electron Devices
CiteScore: 5.80
Self-citation rate: 16.10%
Articles per year: 937
Review time: 3.8 months
Journal description: IEEE Transactions on Electron Devices publishes original and significant contributions relating to the theory, modeling, design, performance, and reliability of electron and ion integrated circuit devices and interconnects, involving insulators, metals, organic materials, micro-plasmas, semiconductors, quantum-effect structures, vacuum devices, and emerging materials with applications in bioelectronics, biomedical electronics, computation, communications, displays, microelectromechanics, imaging, micro-actuators, nanoelectronics, optoelectronics, photovoltaics, power ICs, and micro-sensors. Tutorial and review papers on these subjects are also published, and occasional special issues appear to present a collection of papers which treat particular areas in more depth and breadth.