Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference: Latest Publications

Easy and efficient spike-based Machine Learning with mlGeNN
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2023-04-11 | DOI: 10.1145/3584954.3585001
Authors: James C. Knight, T. Nowotny
Abstract: Intuitive and easy-to-use application programming interfaces such as Keras have played a large part in the rapid acceleration of machine learning with artificial neural networks. Building on our recent work translating ANNs to SNNs and directly training classifiers with e-prop, we present the mlGeNN interface as an easy way to define, train and test spiking neural networks on our efficient GPU-based GeNN framework. We illustrate the use of mlGeNN by investigating the performance of a number of one- and two-layer recurrent spiking neural networks trained with the e-prop learning rule to recognise hand gestures from the DVS gesture dataset. We find that not only is mlGeNN vastly more convenient to use than the lower-level PyGeNN interface, but the freedom to effortlessly and rapidly prototype different network architectures also gave us an unprecedented overview of how e-prop compares, across architectural details, to other recently published results on the DVS gesture dataset.
Citations: 3
SupportHDC: Hyperdimensional Computing with Scalable Hypervector Sparsity
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2023-04-11 | DOI: 10.1145/3584954.3584961
Authors: A. Safa, I. Ocket, F. Catthoor, G. Gielen
Abstract: Hyperdimensional Computing (HDC) is an emerging brain-inspired machine learning method that has recently gained much attention for performing tasks such as pattern recognition and bio-signal classification with ultra-low energy and area overheads when implemented in hardware. HDC relies on encoding input signals into binary or few-bit hypervectors (HVs) and performs low-complexity manipulations on HVs in order to classify the input signals. In this context, the sparsity of HVs directly impacts energy consumption, since the sparser the HVs, the more zero-valued computations can be skipped. This short paper introduces SupportHDC, a novel HDC design framework that jointly optimizes system accuracy and sparsity in an automated manner, in order to trade off classification performance against hardware implementation overheads. We illustrate the inner workings of the framework on two bio-signal classification tasks: cancer detection and arrhythmia detection. We show that SupportHDC can reach a higher accuracy than the conventional splatter-code architectures used in many works, while enabling the system designer to choose the final design from the accuracy-sparsity trade-off curve produced by the framework. We release the source code for reproducing our experiments in the hope that it will benefit future research.
Citations: 1
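The accuracy-sparsity trade-off described above is easiest to see with a concrete encoding. Below is a minimal, generic hyperdimensional-computing sketch in NumPy (random bipolar item hypervectors, bundling by summation and sign, classification by cosine similarity), with a sparsity knob that zeroes out a fraction of each hypervector's components. It illustrates the general splatter-code-style idea only; the dimensionality, sparsity value and feature sets are assumptions for illustration, and this is not the SupportHDC framework or its optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096            # hypervector dimensionality (assumed for illustration)
SPARSITY = 0.9      # fraction of components forced to zero (assumed for illustration)

def random_hv(sparsity=SPARSITY):
    """Random bipolar hypervector with a given fraction of zero components."""
    hv = rng.choice([-1, 1], size=D)
    hv[rng.random(D) < sparsity] = 0
    return hv

def encode(sample, item_memory):
    """Bundle the hypervectors of all active features into one sample/class HV."""
    bundled = np.zeros(D)
    for feature in sample:
        bundled += item_memory[feature]
    return np.sign(bundled)          # back to {-1, 0, +1}

def classify(query_hv, class_hvs):
    """Nearest class by cosine similarity; zero components contribute nothing."""
    sims = {c: np.dot(query_hv, hv) /
               (np.linalg.norm(query_hv) * np.linalg.norm(hv) + 1e-9)
            for c, hv in class_hvs.items()}
    return max(sims, key=sims.get)

# Toy usage: two "classes" built from disjoint feature sets.
item_memory = {f: random_hv() for f in range(10)}
class_hvs = {"A": encode([0, 1, 2, 3], item_memory),
             "B": encode([6, 7, 8, 9], item_memory)}
print(classify(encode([0, 1, 2], item_memory), class_hvs))   # expected: "A"
```

The sparser the hypervectors, the more of the dot-product terms above are exactly zero, which is the source of the energy savings the abstract refers to.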
Sigma-Delta Networks for Robot Arm Control
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2023-04-11 | DOI: 10.1145/3584954.3584964
Authors: W. Lawson, Anthony M. Harrison, J. Trafton
Abstract: Our autonomous robot, Bight, can be a reliable teammate capable of assisting with routine maintenance tasks on a Naval vessel. In this paper, we consider the task of maintaining the electrical panel. A vital first step is putting the robot into the correct position to view all parts of the panel: the robot can get close, but its arm must then move to a pose from which it can see everything. We propose to solve this with a sigma-delta spiking network trained using deep Q-learning. Our approach successfully solves the problem at varying distances, and while we demonstrate it on this specific task, we believe it is general enough to be applied to any similar problem.
Citations: 0
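For readers unfamiliar with the neuron model named in the title, the sketch below shows the core sigma-delta idea in isolation: a unit transmits signed, quantized changes of its activation rather than the activation itself, so a slowly varying signal produces few events. The threshold and test signal are assumptions for illustration, and this says nothing about the paper's network architecture or its deep Q-learning setup.

```python
import numpy as np

def sigma_delta_encode(signal, threshold=0.1):
    """Emit a signed integer event only when the accumulated change since the
    last emission exceeds the threshold; zeros mean 'nothing transmitted'."""
    spikes, last_sent = [], 0.0
    for x in signal:
        n = int((x - last_sent) / threshold)   # signed number of threshold crossings
        spikes.append(n)
        last_sent += n * threshold
    return np.array(spikes)

def sigma_delta_decode(spikes, threshold=0.1):
    """Receiver reconstructs the signal by integrating the transmitted changes."""
    return np.cumsum(spikes) * threshold

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 2 * t)
s = sigma_delta_encode(x)
print("active timesteps:", np.count_nonzero(s), "of", len(s))
print("max reconstruction error:", np.abs(x - sigma_delta_decode(s)).max())
```

The reconstruction error stays below one threshold, while most timesteps carry no event at all, which is what makes this encoding attractive for spiking control networks.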
Goemans-Williamson MAXCUT approximation algorithm on Loihi
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2023-04-11 | DOI: 10.1145/3584954.3584955
Authors: Bradley H. Theilman, J. Aimone
Abstract: Approximation algorithms for computationally complex problems are of significant importance in computing, as they provide guarantees of obtaining practically useful results for otherwise computationally intractable problems. Demonstrating formal approximation algorithms on spiking neuromorphic hardware is a critical step in establishing that neuromorphic computing can offer cost-effective solutions to significant optimization problems while retaining important guarantees on the quality of solutions. Here, we demonstrate that the Loihi platform is capable of effectively implementing the Goemans-Williamson (GW) approximation algorithm for MAXCUT, an NP-hard problem with applications ranging from VLSI design to network analysis. We show that a Loihi implementation of the approximation step of the GW algorithm obtains maximum cuts equivalent to those found by conventional algorithms, and we describe how different aspects of architecture precision impact the algorithm's performance.
Citations: 1
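For context on the algorithm being mapped to Loihi, the sketch below shows the standard CPU-based Goemans-Williamson procedure: solve the semidefinite relaxation of MAXCUT, factor the solution into unit vectors, and round with random hyperplanes. It uses cvxpy and NumPy and is only a reference illustration of the classical algorithm; it says nothing about the neuromorphic implementation described in the paper.

```python
import numpy as np
import cvxpy as cp

def goemans_williamson_maxcut(W, n_rounds=50, seed=0):
    """Classic GW approximation: SDP relaxation + random-hyperplane rounding."""
    n = W.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]
    # maximize sum_{i<j} w_ij * (1 - X_ij) / 2, written over the full matrix (hence /4)
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    cp.Problem(objective, constraints).solve()

    # Factor X ~= V V^T to recover one unit vector per node (clip tiny negative eigenvalues).
    eigvals, eigvecs = np.linalg.eigh(X.value)
    V = eigvecs * np.sqrt(np.clip(eigvals, 0, None))

    rng = np.random.default_rng(seed)
    best_cut, best_assign = -np.inf, None
    for _ in range(n_rounds):
        r = rng.standard_normal(n)           # random hyperplane normal
        assign = np.sign(V @ r)
        assign[assign == 0] = 1
        cut = np.sum(W * (1 - np.outer(assign, assign))) / 4
        if cut > best_cut:
            best_cut, best_assign = cut, assign
    return best_cut, best_assign

# Toy usage: a 5-cycle graph, whose optimal cut value is 4.
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1
print(goemans_williamson_maxcut(W)[0])
```

The random-hyperplane rounding step is the part the paper maps onto Loihi; the approximation guarantee (roughly 0.878 of the optimum in expectation) comes from this rounding of the SDP solution.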
NEO: Neuron State Dependent Mechanisms for Efficient Continual Learning
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2023-04-11 | DOI: 10.1145/3584954.3584960
Authors: A. Daram, D. Kudithipudi
Abstract: Continual learning (sequential learning of tasks) is challenging for deep neural networks, mainly because of catastrophic forgetting, the tendency for accuracy on previously trained tasks to drop when new tasks are learned. Although several biologically inspired techniques have been proposed for mitigating catastrophic forgetting, they typically require additional memory and/or computational overhead. Here, we propose a novel regularization approach that combines neuronal activation-based importance measurement with neuron state-dependent learning mechanisms to alleviate catastrophic forgetting in both task-aware and task-agnostic scenarios. We introduce a neuronal state-dependent mechanism driven by neuronal activity traces and selective learning rules, with storage requirements for regularization parameters that grow more slowly with network size, compared to schemes that calculate per-weight importance, whose storage grows quadratically. The proposed model, NEO, achieves performance comparable to other state-of-the-art regularization-based approaches to catastrophic forgetting, while operating with a reduced memory overhead.
Citations: 0
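The storage argument in the abstract (per-neuron versus per-weight importance) can be made concrete with a generic quadratic-penalty sketch in PyTorch. The per-neuron importance vector `omega` below is a placeholder that could, for example, be filled from running activation traces; this illustrates the general idea of neuron-level importance regularization, not the actual NEO learning rule.

```python
import torch
import torch.nn as nn

layer = nn.Linear(128, 64)
theta_star = layer.weight.detach().clone()   # weights frozen after the previous task

# Per-neuron importance: one value per output neuron (shape [64]) -- O(n) storage --
# instead of one value per weight (shape [64, 128]) -- O(n^2) storage.
omega = torch.rand(64)                       # placeholder importance values

def continual_penalty(weight, weight_old, omega, lam=1.0):
    """Quadratic penalty on weight drift, weighted per output neuron."""
    per_weight = (weight - weight_old) ** 2          # [64, 128]
    return lam * (omega.unsqueeze(1) * per_weight).sum()

# Usage inside a training step on a new task (stand-in task loss for illustration):
task_loss = layer(torch.randn(32, 128)).pow(2).mean()
loss = task_loss + continual_penalty(layer.weight, theta_star, omega)
loss.backward()
```

Because `omega` broadcasts over the rows of the weight matrix, the extra state stored between tasks scales with the number of neurons rather than the number of weights, which is the memory saving the abstract claims.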
Speech2Spikes: Efficient Audio Encoding Pipeline for Real-time Neuromorphic Systems
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2023-04-11 | DOI: 10.1145/3584954.3584995
Authors: Kenneth Michael Stewart, Timothy M. Shea, Noah Pacik-Nelson, Eric M Gallo, A. Danielescu
Abstract: Despite the maturity and availability of speech recognition systems, there are few spiking speech recognition tasks that can be implemented with current neuromorphic systems. The methods previously used to generate spiking speech data cannot encode speech in real time or encode very large modern speech datasets efficiently for input to neuromorphic processors. The ability to efficiently encode audio data as spikes would make a wider variety of spiking audio datasets available and enable algorithmic development of real-time neuromorphic automatic speech recognition systems. We therefore developed speech2spikes, a simple and efficient audio processing pipeline that encodes recorded audio into spikes and is suitable for real-time operation with low-power neuromorphic processors. To demonstrate the efficacy of our method for audio-to-spike encoding, we show that a small feed-forward spiking neural network trained on data generated with the pipeline achieves accuracy on the Google Speech Commands recognition task exceeding the state of the art set by Spiking Speech Commands, a prior spiking encoding of the Google Speech Commands dataset, by over 10%. We also demonstrate a proof-of-concept real-time neuromorphic automatic speech recognition system using audio encoded with speech2spikes streamed to an Intel Loihi neuromorphic research processor.
Citations: 1
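As a rough illustration of what an audio-to-spike front end of this kind can look like, the sketch below computes a log-power spectrogram with SciPy and converts each frequency channel into events with a simple delta-threshold rule. It is a generic example of this class of encoder; the window sizes, threshold and test signal are assumptions, and the actual speech2spikes pipeline and its parameters are described in the paper.

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_spikes(x, fs=16000, n_fft=512, threshold=0.5):
    """Log-power spectrogram followed by per-channel delta-threshold encoding.
    Returns a list of (time_bin, channel, polarity) events."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=n_fft, noverlap=n_fft // 2)
    log_power = np.log10(Sxx + 1e-10)

    events = []
    last = log_power[:, 0].copy()            # per-channel reference value
    for ti in range(1, log_power.shape[1]):
        delta = log_power[:, ti] - last
        up = np.where(delta > threshold)[0]
        down = np.where(delta < -threshold)[0]
        events.extend((ti, ch, +1) for ch in up)
        events.extend((ti, ch, -1) for ch in down)
        # move the reference only for channels that emitted an event
        last[up] += threshold
        last[down] -= threshold
    return events

# Toy usage: one second of a rising-frequency test tone.
fs = 16000
x = np.sin(2 * np.pi * np.linspace(200, 2000, fs) * np.arange(fs) / fs)
print(len(audio_to_spikes(x, fs)), "events")
```

Because events are only emitted when a channel's log power changes appreciably, quiet or steady portions of the audio produce very few spikes, which is what makes this style of encoding attractive for streaming to a neuromorphic processor.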
hxtorch.snn: Machine-learning-inspired Spiking Neural Network Modeling on BrainScaleS-2
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2022-12-23 | DOI: 10.1145/3584954.3584993
Authors: Philipp Spilger, E. Arnold, Luca Blessing, Christian Mauch, Christian Pehle, Eric Müller, J. Schemmel
Abstract: Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting a machine-learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work improves on previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables hardware-in-the-loop training of spiking neural networks within PyTorch, including support for auto-differentiation in a fully automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset, employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
Citations: 2
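The abstract mentions gradient-based training with surrogate gradients. Because spike emission is a non-differentiable threshold, such frameworks typically substitute a smooth derivative in the backward pass. Below is a minimal, framework-agnostic PyTorch illustration of that trick using a SuperSpike-style surrogate; the weights, threshold and decay constant are arbitrary, and this is not the hxtorch.snn API and says nothing about the BrainScaleS-2 hardware-in-the-loop workflow.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = 10.0                              # surrogate steepness (assumed)
        return grad_output / (1.0 + beta * v.abs()) ** 2

spike = SurrogateSpike.apply

# One step of a leaky integrate-and-fire layer with learnable input weights.
w = torch.randn(100, 20, requires_grad=True)
v = torch.zeros(1, 20)
inputs = torch.rand(1, 100)
v = 0.9 * v + inputs @ w                         # leaky integration of weighted input
out = spike(v - 1.0)                             # threshold at 1.0
out.sum().backward()                             # gradients flow through the surrogate
print(w.grad.abs().sum())
```

The same substitution is what lets spike trains recorded from (or simulated for) neuromorphic hardware be plugged into a standard PyTorch training loop.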
AEStream: Accelerated event-based processing with coroutines
Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference | Pub Date: 2022-12-21 | DOI: 10.1145/3584954.3584997
Authors: Jens Egholm Pedersen, J. Conradt
Abstract: Neuromorphic sensors imitate the sparse, event-based communication seen in biological sensory organs and brains. Today's sensors can emit many millions of asynchronous events per second, which is challenging to process on conventional computers. To avoid bottlenecks, there is a need to apply and improve concurrent and parallel processing of events. We present AEStream: a library to efficiently stream asynchronous events from inputs to outputs on conventional computers. AEStream leverages cooperative multitasking primitives known as coroutines to concurrently process individual events, which dramatically simplifies integration with event-based peripherals such as event-based cameras and (neuromorphic) asynchronous hardware. We explore the effects of coroutines in concurrent settings by benchmarking them against conventional threading mechanisms, and find that AEStream provides at least twice the throughput. We then apply AEStream to a real-time edge detection task on a GPU and demonstrate 1.3 times faster processing with 5 times fewer memory operations.
Citations: 1
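To make the coroutine idea concrete without reproducing the AEStream internals, here is a small asyncio sketch in Python in which a producer coroutine emits batches of address-events and a consumer coroutine processes them as they arrive, with a bounded queue providing backpressure. All names, event formats and sizes are assumptions for illustration; the actual AEStream library, its C++ coroutine implementation and its benchmarks are described in the paper.

```python
import asyncio
import random

async def camera(queue, n_batches=100):
    """Producer: emits batches of (x, y, timestamp, polarity) events."""
    for t in range(n_batches):
        batch = [(random.randrange(640), random.randrange(480), t, random.choice((0, 1)))
                 for _ in range(1000)]
        await queue.put(batch)
        await asyncio.sleep(0)          # yield control so the consumer can run
    await queue.put(None)               # end-of-stream marker

async def edge_filter(queue):
    """Consumer: processes each batch as soon as it becomes available."""
    total = 0
    while (batch := await queue.get()) is not None:
        total += sum(polarity for _, _, _, polarity in batch)
    print("ON events seen:", total)

async def main():
    queue = asyncio.Queue(maxsize=4)    # bounded queue applies backpressure
    await asyncio.gather(camera(queue), edge_filter(queue))

asyncio.run(main())
```

The appeal of coroutines here is that producer and consumer interleave cooperatively on one thread, avoiding per-event locking and thread wake-ups, which is the overhead the paper's benchmarks against conventional threading target.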