Circuit Optimization Techniques for Efficient Ex-Situ Training of Robust Memristor Based Liquid State Machine

Alex Henderson, C. Yakopcic, Steven Harbour, Tarek Taha, Cory E. Merkel, Hananel Hazan
DOI: 10.1145/3565478.3572542
Venue: Proceedings of the 17th ACM International Symposium on Nanoscale Architectures
Publication date: 2022-12-07
Source: Semantic Scholar
Citations: 0

Abstract

Spiking neural network (SNN) hardware offers a high-performance, power-efficient, and robust platform for processing complex data. Many such systems require supervised learning, which is challenging for gradient-based algorithms because of the discontinuous activation dynamics of SNNs. Compared with pure CMOS, memristor-based hardware can offer gains in portability, power reduction, and throughput efficiency. This paper proposes a memristor-based spiking liquid state machine (LSM). The inherent dynamics of the LSM permit supervised learning without backpropagation for weight updates. To evaluate the design space of the LSM for optimal hardware performance, several temporal signal classification tasks are performed. Binary neuron activations in the output layer are found to improve testing accuracy by 3.7% and 5% on the classification tasks while reducing training time. A power and energy analysis of the proposed hardware shows an approximately 50% reduction in power consumption and cycle energy.
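The backprop-free training scheme the abstract describes — a fixed random spiking reservoir whose readout is trained in one supervised step, with binary output activations — can be sketched as follows. This is a minimal illustrative sketch, not the paper's circuit: the network sizes, leaky integrate-and-fire parameters, ridge-regression readout, and synthetic two-class temporal task are all assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, N_OUT = 4, 100, 2

# Fixed random weights: neither matrix is ever trained (the "liquid").
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # scale for stable dynamics

def run_reservoir(spikes):
    """Drive a leaky integrate-and-fire reservoir with binary input spike
    trains of shape (T, N_IN); return the mean firing rate per neuron."""
    v = np.zeros(N_RES)       # membrane potentials
    s = np.zeros(N_RES)       # spikes from the previous step
    acc = np.zeros(N_RES)
    for x in spikes:
        v = 0.9 * v + W_in @ x + W_res @ s   # leak + input + recurrence
        s = (v > 1.0).astype(float)          # threshold -> spike
        v = np.where(s > 0, 0.0, v)          # reset spiking neurons
        acc += s
    return acc / len(spikes)

def make_trial(cls, T=50):
    """Synthetic temporal task: each class drives a different input channel."""
    p = np.full(N_IN, 0.05)
    p[cls * (N_IN - 1)] = 0.5   # class 0 -> channel 0, class 1 -> channel 3
    return (rng.random((T, N_IN)) < p).astype(float)

states, labels = [], []
for cls in (0, 1):
    for _ in range(30):
        states.append(run_reservoir(make_trial(cls)))
        labels.append(np.eye(N_OUT)[cls])
X, Y = np.array(states), np.array(labels)

# Readout: a single ridge-regression solve on reservoir states. No gradient
# ever propagates into the reservoir, mirroring backprop-free training.
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N_RES), X.T @ Y)

logits = X @ W_out
pred_binary = (logits > 0.5).astype(int)  # binary output-layer activations
acc = np.mean(np.argmax(logits, axis=1) == np.argmax(Y, axis=1))
print(f"training accuracy: {acc:.2f}")
```

Because only the readout is solved for, "training" is one linear solve over collected reservoir states — which is why this family of models can be trained ex situ and then mapped onto memristor crossbars.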