{"title":"XNOR-VSH:一种基于谷自旋霍尔效应的紧凑节能的二元神经网络突触交叉栅阵列","authors":"Karam Cho;Akul Malhotra;Sumeet Kumar Gupta","doi":"10.1109/JXCDC.2023.3320677","DOIUrl":null,"url":null,"abstract":"Binary neural networks (BNNs) have shown an immense promise for resource-constrained edge artificial intelligence (AI) platforms. However, prior designs typically either require two bit-cells to encode signed weights leading to an area overhead, or require complex peripheral circuitry. In this article, we address this issue by proposing a compact and low power in-memory computing (IMC) of XNOR-based dot products featuring signed weight encoding in a single bit-cell. Our approach utilizes valley-spin Hall (VSH) effect in monolayer tungsten di-selenide to design an XNOR bit-cell (named “XNOR-VSH”) with differential storage and access-transistor-less topology. We co-optimize the proposed VSH device and a memory array to enable robust in-memory dot product computations between signed binary inputs and signed binary weights with sense margin (SM)\n<inline-formula> <tex-math>$1 ~\\mu \\text{A}$ </tex-math></inline-formula>\n. Our results show that the proposed XNOR-VSH array achieves 4.8%–9.0% and 37%–63% lower IMC latency and energy, respectively, with 49%–64% smaller area compared to spin-transfer-torque (STT)-magnetic random access memory (MRAM) and spin-orbit-torque (SOT)-MRAM based XNOR-arrays. We also present the impact of hardware non-idealities and process variations in XNOR-VSH on system-level accuracy for the trained ResNet-18 BNNs using the CIFAR-10 dataset.","PeriodicalId":54149,"journal":{"name":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","volume":null,"pages":null},"PeriodicalIF":2.0000,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10268108","citationCount":"0","resultStr":"{\"title\":\"XNOR-VSH: A Valley-Spin Hall Effect-Based Compact and Energy-Efficient Synaptic Crossbar Array for Binary Neural Networks\",\"authors\":\"Karam Cho;Akul Malhotra;Sumeet Kumar Gupta\",\"doi\":\"10.1109/JXCDC.2023.3320677\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Binary neural networks (BNNs) have shown an immense promise for resource-constrained edge artificial intelligence (AI) platforms. However, prior designs typically either require two bit-cells to encode signed weights leading to an area overhead, or require complex peripheral circuitry. In this article, we address this issue by proposing a compact and low power in-memory computing (IMC) of XNOR-based dot products featuring signed weight encoding in a single bit-cell. Our approach utilizes valley-spin Hall (VSH) effect in monolayer tungsten di-selenide to design an XNOR bit-cell (named “XNOR-VSH”) with differential storage and access-transistor-less topology. We co-optimize the proposed VSH device and a memory array to enable robust in-memory dot product computations between signed binary inputs and signed binary weights with sense margin (SM)\\n<inline-formula> <tex-math>$1 ~\\\\mu \\\\text{A}$ </tex-math></inline-formula>\\n. Our results show that the proposed XNOR-VSH array achieves 4.8%–9.0% and 37%–63% lower IMC latency and energy, respectively, with 49%–64% smaller area compared to spin-transfer-torque (STT)-magnetic random access memory (MRAM) and spin-orbit-torque (SOT)-MRAM based XNOR-arrays. 
We also present the impact of hardware non-idealities and process variations in XNOR-VSH on system-level accuracy for the trained ResNet-18 BNNs using the CIFAR-10 dataset.\",\"PeriodicalId\":54149,\"journal\":{\"name\":\"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2023-09-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10268108\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10268108/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10268108/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
XNOR-VSH: A Valley-Spin Hall Effect-Based Compact and Energy-Efficient Synaptic Crossbar Array for Binary Neural Networks
Binary neural networks (BNNs) have shown immense promise for resource-constrained edge artificial intelligence (AI) platforms. However, prior designs typically either require two bit-cells to encode signed weights, incurring an area overhead, or require complex peripheral circuitry. In this article, we address this issue by proposing compact and low-power in-memory computing (IMC) of XNOR-based dot products featuring signed-weight encoding in a single bit-cell. Our approach utilizes the valley-spin Hall (VSH) effect in monolayer tungsten diselenide to design an XNOR bit-cell (named "XNOR-VSH") with differential storage and an access-transistor-less topology. We co-optimize the proposed VSH device and memory array to enable robust in-memory dot-product computations between signed binary inputs and signed binary weights with sense margin (SM) $\geq 1~\mu\text{A}$. Our results show that the proposed XNOR-VSH array achieves 4.8%–9.0% and 37%–63% lower IMC latency and energy, respectively, with 49%–64% smaller area compared to spin-transfer-torque (STT)-magnetic random access memory (MRAM)- and spin-orbit-torque (SOT)-MRAM-based XNOR arrays. We also present the impact of hardware non-idealities and process variations in XNOR-VSH on system-level accuracy for trained ResNet-18 BNNs using the CIFAR-10 dataset.
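For readers unfamiliar with XNOR-based dot products, the following is a minimal software sketch (not taken from the paper, which computes this operation in the analog domain inside the crossbar) of how a dot product between signed binary vectors in {-1, +1} reduces to bitwise XNOR followed by a popcount. The function names `encode` and `xnor_dot` are illustrative only.

```python
def encode(signed_vals):
    """Map signed binary values {-1, +1} to bits {0, 1}."""
    return [(v + 1) // 2 for v in signed_vals]


def xnor_dot(x_bits, w_bits):
    """Dot product of signed binary vectors via XNOR and popcount.

    Each bit-position where input and weight agree (XNOR = 1) contributes
    +1 to the dot product, and each disagreement contributes -1, so
    dot = 2 * popcount(XNOR(x, w)) - N.
    """
    n = len(x_bits)
    matches = sum(1 for xb, wb in zip(x_bits, w_bits) if not (xb ^ wb))
    return 2 * matches - n


# Usage: the XNOR/popcount form agrees with the ordinary signed dot product.
x = [+1, -1, +1, +1]
w = [-1, -1, +1, -1]
assert xnor_dot(encode(x), encode(w)) == sum(xi * wi for xi, wi in zip(x, w))
```

In the proposed array, each XNOR-VSH bit-cell stores one signed weight differentially and produces a current contribution proportional to the XNOR of the stored weight and the applied input, so the column current effectively realizes the popcount term above.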