Circuit Optimization Techniques for Efficient Ex-Situ Training of Robust Memristor Based Liquid State Machine
Alex Henderson, C. Yakopcic, Steven Harbour, Tarek Taha, Cory E. Merkel, Hananel Hazan
Proceedings of the 17th ACM International Symposium on Nanoscale Architectures, 7 December 2022. DOI: 10.1145/3565478.3572542
Spiking neural network (SNN) hardware offers a high-performance, power-efficient, and robust platform for processing complex data. Many of these systems require supervised learning, which is challenging for gradient-based algorithms because the spiking activations of SNNs are discontinuous. Memristor-based hardware can offer gains in portability, power reduction, and throughput efficiency compared to pure CMOS. This paper proposes a memristor-based spiking liquid state machine (LSM). The inherent dynamics of the LSM permit supervised learning without backpropagation for weight updates. To evaluate the LSM design space for optimal hardware performance, several temporal signal classification tasks are performed. Binary neuron activations in the output layer are found to improve testing accuracy by 3.7% and 5% on the classification tasks while reducing training time. A power and energy analysis of the proposed hardware is presented, showing an approximately 50% reduction in power consumption and cycle energy.
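
The abstract's key training claim is that only the LSM readout is learned, without backpropagating through the spiking reservoir. The sketch below is a minimal software-level illustration of that ex-situ flow, not the paper's circuit or its parameters: a fixed random LIF reservoir produces spike-count states, the readout weights are solved in closed form with ridge regression, and the readout activations are thresholded to binary values. All sizes, the LIF model, the regularization constant, and the 0.5 threshold are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Dimensions are illustrative, not taken from the paper.
N_IN, N_RES, N_OUT, T = 16, 200, 3, 100   # input channels, reservoir size, classes, time steps
W_in  = rng.normal(0.0, 0.5, (N_RES, N_IN))    # fixed (untrained) input weights
W_res = rng.normal(0.0, 0.1, (N_RES, N_RES))   # fixed (untrained) recurrent weights

def liquid_state(spike_train, v_th=1.0, leak=0.9):
    """Run a simple LIF reservoir and return normalized spike counts."""
    v = np.zeros(N_RES)
    prev = np.zeros(N_RES)
    counts = np.zeros(N_RES)
    for t in range(spike_train.shape[0]):
        v = leak * v + W_in @ spike_train[t] + W_res @ prev
        prev = (v >= v_th).astype(float)
        v = np.where(prev > 0, 0.0, v)          # reset neurons that fired
        counts += prev
    return counts / spike_train.shape[0]

# Synthetic Poisson-like spike trains standing in for the temporal classification data.
X = (rng.random((300, T, N_IN)) < 0.1).astype(float)
y = rng.integers(0, N_OUT, 300)

# Ex-situ readout training: collect liquid states, then solve for the output
# weights in closed form (ridge regression), with no backpropagation through
# the reservoir.
S = np.stack([liquid_state(x) for x in X])
Y = np.eye(N_OUT)[y]
lam = 1e-2
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N_RES), S.T @ Y)

# Binary (thresholded) output-layer activations, in the spirit of the abstract;
# the 0.5 threshold is an assumption.
logits = S @ W_out
binary_out = (logits >= 0.5).astype(int)
print("readout fit accuracy:", (logits.argmax(axis=1) == y).mean())

In a memristor realization, the fitted W_out would then be quantized and mapped onto crossbar conductances; that mapping step is outside the scope of this sketch.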