Samip Karki, Diego Chavez Arana, Andrew Sornborger, Francesco Caravelli
arXiv:2407.20547 · arXiv - CS - Neural and Evolutionary Computing · 2024-07-30
Neuromorphic on-chip reservoir computing with spiking neural network architectures
Reservoir computing is a promising approach for harnessing the computational
power of recurrent neural networks while dramatically simplifying training.
This paper investigates the application of integrate-and-fire neurons within
reservoir computing frameworks for two distinct tasks: capturing chaotic
dynamics of the Hénon map and forecasting the Mackey-Glass time series.
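For concreteness, both benchmark systems have standard closed forms. The sketch below generates each series with the commonly used parameters (a = 1.4, b = 0.3 for the Hénon map; β = 0.2, γ = 0.1, n = 10, τ = 17 for Mackey-Glass); the Euler step size and constant initial history are illustrative choices, not taken from the paper:

```python
import numpy as np

def henon(n_steps, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Iterate the Henon map: x' = 1 - a*x^2 + y, y' = b*x."""
    xs = np.empty(n_steps)
    x, y = x0, y0
    for i in range(n_steps):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return xs

def mackey_glass(n_steps, tau=17.0, beta=0.2, gamma=0.1, n=10, dt=0.1):
    """Euler-integrate dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)."""
    delay = int(tau / dt)
    # Constant history x(t) = 1.2 for t <= 0 serves as the initial condition.
    x = np.full(n_steps + delay, 1.2)
    for t in range(delay, n_steps + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[delay:]
```

With these parameters both systems are chaotic, which is what makes them standard tests of a reservoir's ability to capture and forecast nonlinear dynamics.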
Integrate-and-fire neurons can be implemented in low-power neuromorphic
architectures such as Intel Loihi. We explore the impact of network topologies
created through random interactions on the reservoir's performance. Our study
reveals task-specific variations in network effectiveness, highlighting the
importance of tailored architectures for distinct computational tasks. To
identify optimal network configurations, we employ a meta-learning approach
combined with simulated annealing. This method efficiently explores the space
of possible network structures, identifying architectures that excel in
different scenarios. The resulting networks demonstrate a range of behaviors,
showcasing how inherent architectural features influence task-specific
capabilities. We evaluate reservoir computing performance using a custom
integrate-and-fire code, Intel's Lava neuromorphic computing software
framework, and an on-chip implementation on Loihi. We conclude with an
analysis of the energy performance of the Loihi architecture.
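The general scheme the abstract describes — a fixed, randomly connected spiking reservoir whose only trained component is a linear readout — can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' Lava/Loihi implementation; all parameter values, function names, and the ridge-regression readout are assumptions:

```python
import numpy as np

def lif_reservoir_states(inputs, n_neurons=100, leak=0.9, threshold=1.0,
                         spectral_radius=0.9, input_scale=1.0, seed=0):
    """Drive a randomly connected leaky integrate-and-fire reservoir with a
    1-D input signal; collect the per-step spike vectors as feature states."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_neurons, n_neurons))
    # Rescale recurrent weights so the largest eigenvalue magnitude is fixed.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = input_scale * rng.standard_normal(n_neurons)
    v = np.zeros(n_neurons)            # membrane potentials
    spikes = np.zeros(n_neurons)
    states = np.empty((len(inputs), n_neurons))
    for t, u in enumerate(inputs):
        v = leak * v + W @ spikes + w_in * u   # leaky integration
        spikes = (v >= threshold).astype(float)
        v = np.where(spikes > 0, 0.0, v)       # reset neurons that fired
        states[t] = spikes
    return states

def train_readout(states, targets, ridge=1e-4):
    """Ridge-regression linear readout -- the only trained part."""
    X = states
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)
```

Because the recurrent weights stay fixed, "training" reduces to a single linear solve over the collected spike states, which is the simplification reservoir computing trades on; the topology search the paper performs would then operate on the structure of `W` rather than on its learned values.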