{"title":"Hardware friendly deep reservoir computing","authors":"Claudio Gallicchio , Miguel C. Soriano","doi":"10.1016/j.neunet.2025.108079","DOIUrl":null,"url":null,"abstract":"<div><div>Reservoir Computing (RC) is a popular approach for modeling dynamical Recurrent Neural Networks, featured by a fixed (i.e., untrained) recurrent reservoir layer. In this paper, we introduce a novel design strategy for deep RC neural networks that is especially suitable to neuromorphic hardware implementations. From the topological perspective, the introduced model presents a multi-level architecture with ring reservoir topology and one-to-one inter-reservoir connections. The proposed design also considers hardware-friendly nonlinearity and noise modeling in the reservoir update equations. We demonstrate the introduced hardware-friendly deep RC architecture in electronic hardware, showing the promising processing capabilities on learning tasks that require both nonlinear computation and short-term memory. Additionally, we validate the effectiveness of the introduced approach on several time-series classification tasks, showing its competitive performance compared to its shallow counterpart, conventional, as well as more recent RC systems. These results emphasize the advantages of the proposed deep architecture for both practical hardware-friendly environments and broader machine learning applications.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"193 ","pages":"Article 108079"},"PeriodicalIF":6.3000,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025009591","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Reservoir Computing (RC) is a popular approach for modeling dynamical Recurrent Neural Networks, characterized by a fixed (i.e., untrained) recurrent reservoir layer. In this paper, we introduce a novel design strategy for deep RC neural networks that is especially suitable for neuromorphic hardware implementations. From a topological perspective, the introduced model presents a multi-level architecture with ring reservoir topology and one-to-one inter-reservoir connections. The proposed design also considers hardware-friendly nonlinearity and noise modeling in the reservoir update equations. We demonstrate the introduced hardware-friendly deep RC architecture in electronic hardware, showing its promising processing capabilities on learning tasks that require both nonlinear computation and short-term memory. Additionally, we validate the effectiveness of the introduced approach on several time-series classification tasks, showing its competitive performance compared to its shallow counterpart as well as to conventional and more recent RC systems. These results emphasize the advantages of the proposed deep architecture for both practical hardware-friendly environments and broader machine learning applications.
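The abstract outlines the main architectural ingredients: stacked reservoirs with a ring topology, one-to-one inter-reservoir connections, and a noisy reservoir update. The following Python/NumPy sketch is only a minimal illustration of how such a stack could be wired; the class name `DeepRingReservoir`, the `tanh` nonlinearity, the additive Gaussian noise term, and all hyperparameter values are assumptions for illustration and do not reproduce the paper's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_matrix(n, weight):
    """Cyclic-shift (ring) recurrent matrix: unit i feeds unit (i+1) mod n."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = weight
    return W

class DeepRingReservoir:
    """Hypothetical deep reservoir: ring reservoirs coupled layer-to-layer
    through one-to-one (diagonal) connections."""
    def __init__(self, n_layers=3, units=100, ring_w=0.9,
                 in_scale=0.5, inter_scale=0.5, noise_std=1e-3):
        self.units = units
        self.noise_std = noise_std
        self.W = [ring_matrix(units, ring_w) for _ in range(n_layers)]
        # First layer is driven by the external input; deeper layers receive
        # the previous layer's state through one-to-one (diagonal) links.
        self.W_in = in_scale * rng.uniform(-1, 1, size=units)
        self.V = [inter_scale * np.diag(rng.uniform(-1, 1, size=units))
                  for _ in range(n_layers - 1)]
        self.states = [np.zeros(units) for _ in range(n_layers)]

    def step(self, u):
        """Advance all layers by one time step; return concatenated states."""
        drive = self.W_in * u                      # scalar input, layer 1
        for l, (W, x) in enumerate(zip(self.W, self.states)):
            pre = W @ x + drive + self.noise_std * rng.standard_normal(self.units)
            self.states[l] = np.tanh(pre)          # placeholder nonlinearity
            if l < len(self.V):
                drive = self.V[l] @ self.states[l]  # one-to-one coupling
        return np.concatenate(self.states)

# Usage sketch: collect states over an input sequence; a linear readout
# (e.g., ridge regression) would then be trained on X, as is standard in RC.
res = DeepRingReservoir()
X = np.stack([res.step(u) for u in rng.uniform(-1, 1, size=200)])
```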
Journal description:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.