Uncontrolled Learning: Codesign of Neuromorphic Hardware Topology for Neuromorphic Algorithms

Impact Factor 6.1 · Q1, Automation & Control Systems
Frank Barrows, Jonathan Lin, Francesco Caravelli, Dante R. Chialvo
Journal: Advanced Intelligent Systems (Weinheim an der Bergstrasse, Germany), Vol. 7, No. 7
DOI: 10.1002/aisy.202400739
Published: 2025-03-14 (Journal Article)
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202400739
Citations: 0

Abstract

Neuromorphic computing has the potential to revolutionize future technologies and our understanding of intelligence, yet it remains challenging to realize in practice. The learning-from-mistakes algorithm, inspired by the brain's simple learning rules of inhibition and pruning, is one of the few brain-like training methods. This algorithm is implemented in neuromorphic memristive hardware through a codesign process that evaluates essential hardware trade-offs. While the algorithm effectively trains small networks as binary classifiers and perceptrons, performance declines significantly with increasing network size unless the hardware is tailored to the algorithm. This work investigates the trade-offs between depth, controllability, and capacity (the number of learnable patterns) in neuromorphic hardware. This analysis highlights the importance of topology and governing equations, providing theoretical tools to evaluate a device's computational capacity based on its measurements and circuit structure. The findings show that breaking neural network symmetry enhances both controllability and capacity. Additionally, by pruning the circuit, neuromorphic algorithms in all-memristive circuits can utilize stochastic resources to create local contrasts in network weights. Through combined experimental and simulation efforts, the parameters that enable networks to exhibit emergent intelligence from simple rules are identified, advancing the potential of neuromorphic computing.
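The learning-from-mistakes algorithm itself is specified in the paper; as a rough illustration of the core idea that training can proceed purely by inhibition and pruning, the following toy Python sketch (not the authors' implementation; the network, the target rule, and the 0.8 decay factor are all invented here for illustration) trains a single-output threshold network by weakening only those conductances that contribute to wrong firings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy all-excitatory "crossbar": 8 binary inputs feed one output node.
n_in = 8
g = np.ones(n_in)      # edge conductances (the network weights)
theta = 0.5            # firing threshold of the output node

# Target rule: the output should fire exactly when input 0 is active.
patterns = rng.integers(0, 2, size=(40, n_in))
labels = patterns[:, 0]

def fires(x, g, theta):
    return x @ g >= theta

# Mistake-driven loop: the only update is inhibition/pruning.
# When the network fires on a pattern it should reject, the
# conductances of the inputs that drove the spurious output shrink.
for epoch in range(50):
    for x, y in zip(patterns, labels):
        if fires(x, g, theta) and y == 0:   # false positive
            g[x == 1] *= 0.8                # prune the contributing edges

accuracy = np.mean([fires(x, g, theta) == bool(y)
                    for x, y in zip(patterns, labels)])
```

Because false positives never involve input 0, its conductance is never pruned, while every edge that supports a wrong firing decays until the spurious patterns fall below threshold: weakening connections alone, with no weight strengthening, suffices to carve the rule into the network.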


Source journal metrics:
CiteScore: 1.30
Self-citation rate: 0.00%
Articles published: 0
Review time: 4 weeks