Latest Articles in Neuromorphic Computing and Engineering

Hands-on reservoir computing: a tutorial for practical implementation
Neuromorphic Computing and Engineering Pub Date : 2022-07-01 DOI: 10.1088/2634-4386/ac7db7
Matteo Cucchi, Steven Abreu, G. Ciccone, D. Brunner, H. Kleemann
This manuscript serves a specific purpose: to give readers from fields such as materials science, chemistry, or electronics an overview of implementing a reservoir computing (RC) experiment with their own material system. Introductory literature on the topic is rare, and the vast majority of reviews present the basics of RC while taking for granted concepts that may be nontrivial to someone unfamiliar with the machine learning field (see, for example, Lukoševičius 2012 Neural Networks: Tricks of the Trade (Berlin: Springer) pp 659–686). This is unfortunate considering the large pool of material systems that show nonlinear behavior and short-term memory and that may be harnessed to design novel computational paradigms. RC offers a framework for computing with material systems that circumvents typical problems arising when implementing traditional, fully fledged feedforward neural networks on hardware, such as the need for minimal device-to-device variability and for control over each unit/neuron and connection. Instead, one can use a random, untrained reservoir in which only the output layer is optimized, for example with linear regression. In the following, we highlight the potential of RC for hardware-based neural networks, the advantages over more traditional approaches, and the obstacles to overcome for their implementation. Preparing a high-dimensional nonlinear system as a well-performing reservoir for a specific task is not as easy as it seems at first sight. We hope this tutorial will lower the barrier for scientists attempting to exploit their nonlinear systems for computational tasks typically carried out in the fields of machine learning and artificial intelligence. A simulation tool to accompany this paper is available online at https://github.com/stevenabreu7/handson_reservoir.
Citations: 31
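The core RC recipe described above — a fixed random reservoir whose readout alone is trained by linear regression — can be sketched in a few lines. This is a minimal illustrative echo state network, not the paper's accompanying simulation tool; the task, reservoir size, spectral radius, and ridge parameter are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict x(t+1) from a scalar input stream x(t).
T, n_res = 500, 100
u = np.sin(0.1 * np.arange(T + 1))            # input signal
W_in = rng.uniform(-0.5, 0.5, size=n_res)     # fixed random input weights
W = rng.normal(0.0, 1.0, size=(n_res, n_res)) # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Drive the untrained reservoir and collect its nonlinear states.
x = np.zeros(n_res)
states = []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])          # reservoir state update
    states.append(x.copy())
X = np.array(states)

# Only the readout is trained, via ridge (regularized linear) regression.
y = u[1:T + 1]                                # one-step-ahead target
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

In a physical RC experiment the `tanh` reservoir would be replaced by the material system's measured responses, while the readout training stays exactly this simple.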
Advantages of binary stochastic synapses for hardware spiking neural networks with realistic memristors
Neuromorphic Computing and Engineering Pub Date : 2022-06-28 DOI: 10.1088/2634-4386/ac7c89
K. Sulinskas, M. Borg
Hardware implementing spiking neural networks (SNNs) has the potential to provide transformative gains in energy efficiency and throughput for energy-restricted machine-learning tasks. This is enabled by large arrays of memristive synapse devices that can be realized with various emerging memory technologies. In practice, however, the performance of such hardware is limited by non-ideal features of the memristor devices, such as nonlinear and asymmetric state updates, limited bit-resolution, limited cycling endurance, and device noise. Here we investigate how stochastic switching in binary synapses can provide advantages over realistic analog memristors when using unsupervised training of SNNs via spike-timing-dependent plasticity. We find that the performance of binary stochastic SNNs is similar to, or even better than, that of analog deterministic SNNs when one considers memristors with realistic bit-resolution, as well as in situations with considerable cycle-to-cycle noise. Furthermore, binary stochastic SNNs require far fewer weight updates to train, leading to superior utilization of the limited endurance of realistic memristive devices.
Citations: 1
Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario
Neuromorphic Computing and Engineering Pub Date : 2022-06-28 DOI: 10.1088/2634-4386/ac999b
Davide L. Manna, A. Sola, Paul Kirkland, Trevor J. Bihl, G. D. Caterina
Spiking neural networks (SNNs) are largely inspired by biology and neuroscience and leverage ideas and theories to create fast and efficient learning systems. Spiking neuron models are adopted as core processing units in neuromorphic systems because they enable event-based processing. Among many neuron models, the integrate-and-fire (I&F) models are often adopted, with the simple leaky I&F (LIF) being the most used. The reason for adopting such models is their efficiency and/or biological plausibility. Nevertheless, rigorous justification for adopting LIF over other neuron models for use in artificial learning systems has not yet been studied. This work considers various neuron models in the literature and then selects computational neuron models that are single-variable, efficient, and display different types of complexities. From this selection, we make a comparative study of three simple I&F neuron models, namely the LIF, the quadratic I&F (QIF) and the exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system and whether the choice of a neuron model can be directed by the task to be completed. Neuron models are tested within an SNN trained with spike-timing-dependent plasticity (STDP) on a classification task on the N-MNIST and DVS gestures datasets. Experimental results reveal that more complex neurons manifest the same ability as simpler ones to achieve high levels of accuracy on a simple dataset (N-MNIST), albeit requiring comparably more hyper-parameter tuning. However, when the data possess richer spatio-temporal features, the QIF and EIF neuron models steadily achieve better results. This suggests that accurately selecting the model based on the richness of the feature spectrum of the data could improve the whole system's performance. Finally, the code implementing the spiking neurons in the SpykeTorch framework is made publicly available.
Citations: 7
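The three single-variable models compared above differ only in their membrane update term. A forward-Euler sketch of the standard LIF, QIF, and EIF equations makes the distinction concrete; the parameter values here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Forward-Euler updates for the three single-variable I&F models.
# All parameters below are illustrative choices for this sketch.
dt, tau, v_rest, v_thresh, v_reset = 1.0, 20.0, 0.0, 1.0, 0.0

def lif_step(v, i_in):
    # Leaky I&F: linear leak toward the resting potential.
    return v + dt / tau * (-(v - v_rest) + i_in)

def qif_step(v, i_in, v_crit=0.5):
    # Quadratic I&F: quadratic nonlinearity around a critical voltage.
    return v + dt / tau * ((v - v_rest) * (v - v_crit) + i_in)

def eif_step(v, i_in, delta_t=0.2, v_rh=0.8):
    # Exponential I&F: exponential spike-initiation term above rheobase.
    return v + dt / tau * (-(v - v_rest)
                           + delta_t * np.exp((v - v_rh) / delta_t) + i_in)

def run(step, i_in=1.2, t_max=500):
    """Count spikes under constant suprathreshold input, with reset."""
    v, spikes = v_rest, 0
    for _ in range(t_max):
        v = step(v, i_in)
        if v >= v_thresh:
            v, spikes = v_reset, spikes + 1
    return spikes

for name, step in [("LIF", lif_step), ("QIF", qif_step), ("EIF", eif_step)]:
    print(name, run(step))
```

All three cost one state variable per neuron; the QIF and EIF simply add a nonlinear term to the voltage dynamics, which is what gives them richer responses to temporally structured input.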
Fluctuation-driven initialization for spiking neural network training
Neuromorphic Computing and Engineering Pub Date : 2022-06-21 DOI: 10.1088/2634-4386/ac97bb
Julian Rossbroich, Julia Gygax, F T Zenke
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain and could constitute a power-efficient alternative to conventional deep neural networks when implemented on suitable neuromorphic hardware accelerators. However, instantiating SNNs that solve complex computational tasks in-silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, similar to conventional artificial neural networks (ANNs). Yet, unlike in the case of ANNs, it remains elusive what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire neurons. We empirically show that SNNs initialized following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale's law. Thus fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.
Citations: 6
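The idea of the fluctuation-driven regime can be illustrated numerically: pick the weight scale from the input statistics so that the free membrane potential has a subthreshold mean and a sizeable standard deviation, so that spikes are triggered by fluctuations rather than by a suprathreshold mean drive. The derivation and parameters below are a simplified assumption-laden sketch for a single LIF membrane, not the paper's formulas.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target a subthreshold mean and a fluctuation scale below threshold (= 1).
n_in = 800
nu = 0.05                            # per-step presynaptic spike probability
target_sigma = 0.3                   # desired membrane standard deviation
dt, tau = 1.0, 20.0
a = 1.0 - dt / tau                   # membrane decay factor per step

# Discrete LIF membrane is an AR(1) process: Var(v) = (dt/tau)^2 Var(I) / (1 - a^2).
# Solve for the input-current std, then for the weight scale, using
# Var(I) = n_in * nu * (1 - nu) * sigma_w^2 for zero-mean weights.
sigma_I = target_sigma * np.sqrt(1.0 - a**2) / (dt / tau)
sigma_w = sigma_I / np.sqrt(n_in * nu * (1.0 - nu))
W = rng.normal(0.0, sigma_w, size=n_in)
W -= W.mean()                        # zero net drive keeps the mean near rest

# Empirical check on a leaky membrane driven by Bernoulli spike trains.
v, trace = 0.0, []
for _ in range(5000):
    s = (rng.random(n_in) < nu).astype(float)
    v = a * v + (dt / tau) * (W @ s)
    trace.append(v)
trace = np.array(trace)
print("membrane mean:", trace.mean(), "std:", trace.std())
```

The measured statistics land near the targets, placing the neuron in a regime where adding a small bias or recurrent drive produces irregular, fluctuation-triggered firing from the first training step.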
2022 roadmap on neuromorphic devices and applications research in China
Neuromorphic Computing and Engineering Pub Date : 2022-06-20 DOI: 10.1088/2634-4386/ac7a5a
Qing Wan, C. Wan, Huaqiang Wu, Yuchao Yang, Xiaohe Huang, Pengcheng Zhou, Lin Chen, Tian-Yu Wang, Yi Li, Kanhao Xue, Yuhui He, Xiangshui Miao, Xi Li, Chenchen Xie, Houpeng Chen, Z. Song, Hong Wang, Yue Hao, Junyao Zhang, Jia Huang, Zheng Yu Ren, L. Zhu, Jian‐yu Du, Chengqiang Ge, Yang Liu, Guanglong Ding, Ye Zhou, Su‐Ting Han, Guosheng Wang, Xiao Yu, Bing Chen, Zhufei Chu, Lun Wang, Yinshui Xia, Chen Mu, F. Lin, Chixiao Chen, Bo Cheng, Y. Xing, W. Zeng, Hong Chen, Lei Yu, G. Indiveri, Ning Qiao
The data throughput in von Neumann architecture-based computing systems is limited by the separation of processing and memory and by the mismatched speed of the two units. As a result, it is quite difficult to improve the energy efficiency of conventional computing systems, especially when dealing with unstructured data. Meanwhile, artificial intelligence and robotics still perform poorly in autonomy, creativity, and sociality, a shortfall attributed to the enormous computational requirements of sensorimotor skills. These two plights have urged the imitation and replication of biological systems in terms of computing, sensing, and even motor control. Hence, the so-called neuromorphic system has drawn worldwide attention in the recent decade, aimed at addressing the aforementioned needs by mimicking the neural system. The recent developments in emerging memory devices, nanotechnologies, and materials science have provided an unprecedented opportunity for this aim.
Citations: 3
Quantization, training, parasitic resistance correction, and programming techniques of memristor-crossbar neural networks for edge intelligence
Neuromorphic Computing and Engineering Pub Date : 2022-06-13 DOI: 10.1088/2634-4386/ac781a
T. Nguyen, Jiyong An, Seokjin Oh, S. N. Truong, K. Min
In the internet-of-things era, edge intelligence is critical for overcoming the communication and computing energy crisis, which is unavoidable if cloud computing is used exclusively. Memristor crossbars with in-memory computing may be suitable for realizing edge intelligence hardware. They can perform both memory and computing functions, allowing for the development of low-power computing architectures that go beyond the von Neumann computer. For implementing edge-intelligence hardware with memristor crossbars, in this paper we review various techniques such as quantization, training, parasitic resistance correction, and low-power crossbar programming. In particular, memristor crossbars can be used to realize quantized neural networks with binary and ternary synapses. To prevent memristor defects from degrading edge intelligence performance, chip-in-the-loop training can be useful when training memristor crossbars. Another undesirable effect in memristor crossbars is parasitic resistance, such as source, line, and neuron resistance, which worsens as crossbar size increases; various circuit and software techniques can compensate for it. Finally, we discuss an energy-efficient programming method for updating synaptic weights in memristor crossbars, which is needed for learning on edge devices.
Citations: 3
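The binary/ternary synapse mapping mentioned above is commonly realized with a differential pair of conductances per weight. The sketch below shows one such mapping for an ideal crossbar (no parasitic resistances); the on/off conductance values and the quantization threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Map ternary software weights onto a differential conductance pair (G+, G-).
# G_on / G_off values are illustrative device conductances in siemens.
G_on, G_off = 1e-4, 1e-6

def ternarize(W, thresh=0.3):
    """Quantize weights to {-1, 0, +1} with a fixed magnitude threshold."""
    return np.sign(W) * (np.abs(W) > thresh)

def to_conductances(W_t):
    """+1 -> (G_on, G_off), -1 -> (G_off, G_on), 0 -> (G_off, G_off)."""
    G_pos = np.where(W_t > 0, G_on, G_off)
    G_neg = np.where(W_t < 0, G_on, G_off)
    return G_pos, G_neg

W = rng.normal(0.0, 0.5, size=(4, 8))
W_t = ternarize(W)
G_pos, G_neg = to_conductances(W_t)

v_in = rng.uniform(0.0, 0.2, size=8)     # read voltages applied to the columns
# An ideal crossbar computes currents I = G v; the differential current of
# the two arrays realizes the signed vector-matrix product in analog.
i_out = (G_pos - G_neg) @ v_in
expected = (G_on - G_off) * (W_t @ v_in)
print("max deviation:", np.max(np.abs(i_out - expected)))
```

Parasitic source and line resistances perturb exactly this `I = G v` relation as the array grows, which is why the correction techniques surveyed in the paper are needed.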
Self-organized nanoscale networks: are neuromorphic properties conserved in realistic device geometries?
Neuromorphic Computing and Engineering Pub Date : 2022-05-31 DOI: 10.1088/2634-4386/ac74da
Z. Heywood, J. Mallinson, E. Galli, S. Acharya, S. Bose, Matthew Arnold, P. Bones, S. Brown
Self-organised nanoscale networks are currently under investigation because of their potential to be used as novel neuromorphic computing systems. In these systems, electrical input and output signals will necessarily couple to the recurrent electrical signals within the network that provide brain-like functionality. This raises important questions as to whether practical electrode configurations and network geometries might influence the brain-like dynamics. We use the concept of criticality (itself a key characteristic of brain-like processing) to quantify the neuromorphic potential of the devices, and find that in most cases criticality, and therefore optimal information processing capability, is maintained. In particular, we find that devices with multiple electrodes remain critical despite the concentration of current near the electrodes. We find that broad network activity is maintained because current still flows through the entire network. We also develop a formalism to allow a detailed analysis of the number of dominant paths through the network. For rectangular systems we show that the number of pathways decreases as the system size increases, which consequently causes a reduction in network activity.
Citations: 3
Memristive devices based hardware for unlabeled data processing
Neuromorphic Computing and Engineering Pub Date : 2022-05-25 DOI: 10.1088/2634-4386/ac734a
Zhuojian Xiao, Bonan Yan, Teng Zhang, Ru Huang, Yuchao Yang
Unlabeled data processing is of great significance for artificial intelligence (AI), since well-structured labeled data are scarce in the majority of practical applications due to the high cost of human annotation. Autonomous analysis of unlabeled datasets is therefore important, and relevant algorithms for processing unlabeled data, such as k-means clustering, restricted Boltzmann machines, and locally competitive algorithms, play a critical role in the development of AI techniques. Memristive devices offer potential for power- and time-efficient implementation of unlabeled data processing due to their unique properties in neuromorphic and in-memory computing. This review provides an overview of the design principles and applications of memristive devices for various unlabeled data processing and cognitive AI tasks.
Citations: 4
Physics-based compact modelling of the analog dynamics of HfOx resistive memories
Neuromorphic Computing and Engineering Pub Date : 2022-05-25 DOI: 10.1088/2634-4386/ac7327
F. Vaccaro, S. Brivio, S. Perotto, A. G. Mauri, S. Spiga
Resistive random access memories (RRAMs) constitute a class of memristive devices particularly appealing for bio-inspired computing schemes. In particular, the possibility of achieving analog control of the electrical conductivity of RRAM devices can be exploited to mimic the behaviour of biological synapses in neuromorphic systems. With a view to neuromorphic computing applications, it is crucial to guarantee several features: a detailed device characterization, a mathematical model encompassing all the key features of the device in both quasi-static and dynamic conditions, and a description of the variability due to the inherent stochasticity of the processes involved in the switching transitions. In this paper, starting from experimental data, we provide a modelling and simulation framework to reproduce the operative analog behaviour of HfOx-based RRAM devices under trains of programming pulses in both the analog and binary operation modes. To this aim, we have calibrated the model using a single set of parameters for the quasi-static current-voltage characteristics as well as the switching kinetics and device dynamics. The physics-based compact model developed here captures the difference between the SET and RESET processes in the I-V characteristics, as well as the device memory window for both strong and weak programming conditions. Moreover, the model reproduces the correct slopes of the highly non-linear kinetics curves over several orders of magnitude in time, and the dynamic device response including the inherent device variability.
Citations: 3
Computational properties of multi-compartment LIF neurons with passive dendrites
Neuromorphic Computing and Engineering Pub Date : 2022-05-23 DOI: 10.1088/2634-4386/ac724c
Andreas Stöckel, C. Eliasmith
Mixed-signal neuromorphic computers often emulate some variant of the LIF neuron model. While, in theory, two-layer networks of these neurons are universal function approximators, single-layer networks consisting of slightly more complex neurons can, at the cost of universality, be more efficient. In this paper, we discuss a family of LIF neurons with passive dendrites. We provide rules that describe how input channels targeting different dendritic compartments interact, and test to what extent these interactions can be harnessed in a spiking neural network context. We find that a single layer of two-compartment neurons approximates some functions with smaller errors than similarly sized hidden-layer networks. Single-layer networks with three-compartment neurons can approximate functions such as XOR and four-quadrant multiplication well; adding more compartments offers only small improvements in accuracy. From the perspective of mixed-signal neuromorphic systems, our results suggest that only small modifications to the neuron circuit are necessary to construct more computationally powerful and energy-efficient systems that move more computation into the dendritic, analogue domain.
Citations: 6
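The kind of dendritic channel interaction exploited above can be seen in a minimal two-compartment sketch: a passive dendrite coupled to a soma, where conductance-based inhibition shunts (divides) the excitatory drive rather than merely subtracting from it. This toy model and all its parameters are illustrative assumptions, not the paper's neuron circuit.

```python
# Minimal two-compartment LIF sketch: a passive dendritic compartment
# coupled by a fixed conductance g_c to the somatic compartment.
# All parameters are illustrative; voltages are relative to rest.
dt, tau_s, tau_d, g_c = 0.1, 20.0, 10.0, 0.1
E_e, E_i = 1.0, 0.0          # inhibitory reversal at rest: pure shunting

def steady_somatic_drive(g_e, g_i, t_max=2000):
    """Somatic voltage after dendritic E/I conductance input settles."""
    v_s = v_d = 0.0
    for _ in range(t_max):
        # passive dendrite: leak + synaptic conductances + coupling to soma
        dv_d = (-v_d + g_e * (E_e - v_d) + g_i * (E_i - v_d)
                + g_c * (v_s - v_d)) / tau_d
        # soma: leak + coupling to dendrite
        dv_s = (-v_s + g_c * (v_d - v_s)) / tau_s
        v_d += dt * dv_d
        v_s += dt * dv_s
    return v_s

# Excitation alone depolarizes the soma; adding shunting inhibition on the
# same dendrite scales that drive down divisively instead of subtracting
# a fixed amount -- a nonlinear interaction a point LIF neuron cannot express.
v_e = steady_somatic_drive(g_e=1.0, g_i=0.0)
v_ei = steady_somatic_drive(g_e=1.0, g_i=1.0)
print("E only:", v_e, "E + shunting I:", v_ei)
```

This divisive interaction between input channels is the sort of dendritic computation that lets single-layer networks of such neurons approximate functions like multiplication more directly than point-neuron networks.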