Enhancing in-situ updates of quantized memristor neural networks: a Siamese network learning approach

IF 3.1 · CAS Zone 3 (Engineering & Technology) · Q2 NEUROSCIENCES
Jinpei Tan, Fengyun Zhang, Jiening Wu, Li Luo, Shukai Duan, Lidan Wang
{"title":"增强量化忆阻器神经网络的现场更新:连体网络学习法","authors":"Jinpei Tan, Fengyun Zhang, Jiening Wu, Li Luo, Shukai Duan, Lidan Wang","doi":"10.1007/s11571-024-10069-1","DOIUrl":null,"url":null,"abstract":"<p>Brain-inspired neuromorphic computing has emerged as a promising solution to overcome the energy and speed limitations of conventional von Neumann architectures. In this context, in-memory computing utilizing memristors has gained attention as a key technology, harnessing their non-volatile characteristics to replicate synaptic behavior akin to the human brain. However, challenges arise from non-linearities, asymmetries, and device variations in memristive devices during synaptic weight updates, leading to inaccurate weight adjustments and diminished recognition accuracy. Moreover, the repetitive weight updates pose endurance challenges for these devices, adversely affecting latency and energy consumption. To address these issues, we propose a Siamese network learning approach to optimize the training of multi-level memristor neural networks. During neural inference, forward propagation takes place within the memristor neural network, enabling error and noise detection in the memristive devices and hardware circuits. Simultaneously, high-precision gradient computation occurs on the software side, initially updating the floating-point weights within the Siamese network with gradients. Subsequently, weight quantization is performed, and the memristor conductance values requiring updates are modified using a sparse update strategy. Additionally, we introduce gradient accumulation and weight quantization error compensation to further enhance network performance. The experimental results of MNIST data recognition, whether based on a MLP or a CNN model, demonstrate the rapid convergence of our network model. Moreover, our method successfully eliminates over 98% of weight updates for memristor conductance weights within a single epoch. This substantial reduction in weight updates leads to a significant decrease in energy consumption and time delay by more than 98% when compared to the basic closed-loop update method. Consequently, this approach effectively addresses the durability requirements of memristive devices.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"12 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing in-situ updates of quantized memristor neural networks: a Siamese network learning approach\",\"authors\":\"Jinpei Tan, Fengyun Zhang, Jiening Wu, Li Luo, Shukai Duan, Lidan Wang\",\"doi\":\"10.1007/s11571-024-10069-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Brain-inspired neuromorphic computing has emerged as a promising solution to overcome the energy and speed limitations of conventional von Neumann architectures. In this context, in-memory computing utilizing memristors has gained attention as a key technology, harnessing their non-volatile characteristics to replicate synaptic behavior akin to the human brain. However, challenges arise from non-linearities, asymmetries, and device variations in memristive devices during synaptic weight updates, leading to inaccurate weight adjustments and diminished recognition accuracy. Moreover, the repetitive weight updates pose endurance challenges for these devices, adversely affecting latency and energy consumption. 
To address these issues, we propose a Siamese network learning approach to optimize the training of multi-level memristor neural networks. During neural inference, forward propagation takes place within the memristor neural network, enabling error and noise detection in the memristive devices and hardware circuits. Simultaneously, high-precision gradient computation occurs on the software side, initially updating the floating-point weights within the Siamese network with gradients. Subsequently, weight quantization is performed, and the memristor conductance values requiring updates are modified using a sparse update strategy. Additionally, we introduce gradient accumulation and weight quantization error compensation to further enhance network performance. The experimental results of MNIST data recognition, whether based on a MLP or a CNN model, demonstrate the rapid convergence of our network model. Moreover, our method successfully eliminates over 98% of weight updates for memristor conductance weights within a single epoch. This substantial reduction in weight updates leads to a significant decrease in energy consumption and time delay by more than 98% when compared to the basic closed-loop update method. Consequently, this approach effectively addresses the durability requirements of memristive devices.</p>\",\"PeriodicalId\":10500,\"journal\":{\"name\":\"Cognitive Neurodynamics\",\"volume\":\"12 1\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-02-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Neurodynamics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s11571-024-10069-1\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Neurodynamics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11571-024-10069-1","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Brain-inspired neuromorphic computing has emerged as a promising solution to overcome the energy and speed limitations of conventional von Neumann architectures. In this context, in-memory computing utilizing memristors has gained attention as a key technology, harnessing their non-volatile characteristics to replicate synaptic behavior akin to the human brain. However, non-linearities, asymmetries, and device-to-device variations in memristive devices pose challenges during synaptic weight updates, leading to inaccurate weight adjustments and diminished recognition accuracy. Moreover, repetitive weight updates strain the endurance of these devices and adversely affect latency and energy consumption. To address these issues, we propose a Siamese network learning approach to optimize the training of multi-level memristor neural networks. During neural inference, forward propagation takes place within the memristor neural network, enabling error and noise detection in the memristive devices and hardware circuits. Simultaneously, high-precision gradient computation occurs on the software side, first updating the floating-point weights within the Siamese network with these gradients. Weight quantization is then performed, and the memristor conductance values requiring updates are modified using a sparse update strategy. Additionally, we introduce gradient accumulation and weight quantization error compensation to further enhance network performance. Experimental results on MNIST recognition, whether based on an MLP or a CNN model, demonstrate the rapid convergence of our network model. Moreover, our method eliminates over 98% of the conductance weight updates within a single epoch. This substantial reduction in weight updates decreases energy consumption and time delay by more than 98% compared to the basic closed-loop update method, effectively addressing the endurance requirements of memristive devices.
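The abstract describes a concrete update loop: compute high-precision gradients in software, quantize the float weights to discrete conductance levels, compensate the quantization error, and write back only the devices whose level actually changed. Below is a minimal Python sketch of such a loop; all names (LEVELS, quantize, sparse_update_step) and the choice of 16 uniform conductance levels are illustrative assumptions, not the authors' implementation, and the paper's gradient accumulation step is omitted for brevity.

import numpy as np

LEVELS = np.linspace(-1.0, 1.0, 16)   # assumed: 16 uniform conductance levels

def quantize(w):
    # Snap each float weight to its nearest discrete conductance level.
    return LEVELS[np.abs(w[..., None] - LEVELS).argmin(axis=-1)]

def sparse_update_step(w_float, g_device, grad, err, lr=0.01):
    # One training step: high-precision update of the software-side float
    # weights, quantization with error compensation, then a sparse write of
    # only the conductances whose quantized level changed.
    w_float -= lr * grad               # software-side float update
    v = w_float + err                  # fold in the carried quantization error
    w_q = quantize(v)
    err[:] = v - w_q                   # error feedback for the next step
    changed = w_q != g_device          # mask of devices that must be rewritten
    g_device[changed] = w_q[changed]   # sparse in-situ conductance update
    return changed.mean()              # fraction of devices rewritten

# Toy usage: random gradients stand in for backpropagation through a crossbar.
rng = np.random.default_rng(0)
w_float = rng.normal(0.0, 0.3, size=64)
g_device = quantize(w_float)
err = np.zeros_like(w_float)
for _ in range(100):
    grad = rng.normal(0.0, 0.05, size=w_float.shape)
    frac = sparse_update_step(w_float, g_device, grad, err)
print(f"devices rewritten in the last step: {frac:.1%}")

In this toy run most steps rewrite only a small fraction of the 64 simulated devices, since a device is touched only when its accumulated float drift crosses a quantization boundary; this is the effect the sparse update strategy exploits to cut write counts, energy, and latency.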

Source journal: Cognitive Neurodynamics (Medicine – Neurosciences)
CiteScore: 6.90
Self-citation rate: 18.90%
Annual article output: 140
Review time: 12 months
Journal description: Cognitive Neurodynamics provides a unique forum of communication and cooperation for scientists and engineers working in the field of cognitive neurodynamics, intelligent science and applications, bridging the gap between theory and application, without any preference for purely theoretical, experimental, or computational models. The emphasis is on publishing original models of cognitive neurodynamics, novel computational theories, and experimental results. In particular, intelligent science inspired by cognitive neuroscience and neurodynamics is very welcome. The scope of Cognitive Neurodynamics covers cognitive neuroscience, neural computation based on dynamics, computer science, and intelligent science, as well as their interdisciplinary applications in the natural and engineering sciences. Papers that are appropriate for non-specialist readers are encouraged.
1. There is no page limit for manuscripts submitted to Cognitive Neurodynamics. Research papers should clearly represent an important advance of especially broad interest to researchers and technologists in neuroscience, biophysics, BCI, neural computing, and intelligent robotics.
2. Cognitive Neurodynamics also welcomes brief communications: short papers reporting results that are of genuinely broad interest but that, for one reason or another, do not make a sufficiently complete story to justify a full article. Brief communications should consist of approximately four manuscript pages.
3. Cognitive Neurodynamics publishes review articles in which a specific field is reviewed through an exhaustive literature survey. There are no restrictions on the number of pages. Review articles are usually invited, but submitted reviews will also be considered.