Uncontrolled learning: co-design of neuromorphic hardware topology for neuromorphic algorithms

Frank Barrows, Jonathan Lin, Francesco Caravelli, Dante R. Chialvo
{"title":"不受控制的学习:为神经形态算法共同设计神经形态硬件拓扑结构","authors":"Frank Barrows, Jonathan Lin, Francesco Caravelli, Dante R. Chialvo","doi":"arxiv-2408.05183","DOIUrl":null,"url":null,"abstract":"Hardware-based neuromorphic computing remains an elusive goal with the\npotential to profoundly impact future technologies and deepen our understanding\nof emergent intelligence. The learning-from-mistakes algorithm is one of the\nfew training algorithms inspired by the brain's simple learning rules,\nutilizing inhibition and pruning to demonstrate self-organized learning. Here\nwe implement this algorithm in purely neuromorphic memristive hardware through\na co-design process. This implementation requires evaluating hardware\ntrade-offs and constraints. It has been shown that learning-from-mistakes\nsuccessfully trains small networks to function as binary classifiers and\nperceptrons. However, without tailoring the hardware to the algorithm,\nperformance decreases exponentially as the network size increases. When\nimplementing neuromorphic algorithms on neuromorphic hardware, we investigate\nthe trade-offs between depth, controllability, and capacity, the latter being\nthe number of learnable patterns. We emphasize the significance of topology and\nthe use of governing equations, demonstrating theoretical tools to aid in the\nco-design of neuromorphic hardware and algorithms. We provide quantitative\ntechniques to evaluate the computational capacity of a neuromorphic device\nbased on the measurements performed and the underlying circuit structure. This\napproach shows that breaking the symmetry of a neural network can increase both\nthe controllability and average network capacity. By pruning the circuit,\nneuromorphic algorithms in all-memristive device circuits leverage stochastic\nresources to drive local contrast in network weights. Our combined experimental\nand simulation efforts explore the parameters that make a network suited for\ndisplaying emergent intelligence from simple rules.","PeriodicalId":501520,"journal":{"name":"arXiv - PHYS - Statistical Mechanics","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Uncontrolled learning: co-design of neuromorphic hardware topology for neuromorphic algorithms\",\"authors\":\"Frank Barrows, Jonathan Lin, Francesco Caravelli, Dante R. Chialvo\",\"doi\":\"arxiv-2408.05183\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hardware-based neuromorphic computing remains an elusive goal with the\\npotential to profoundly impact future technologies and deepen our understanding\\nof emergent intelligence. The learning-from-mistakes algorithm is one of the\\nfew training algorithms inspired by the brain's simple learning rules,\\nutilizing inhibition and pruning to demonstrate self-organized learning. Here\\nwe implement this algorithm in purely neuromorphic memristive hardware through\\na co-design process. This implementation requires evaluating hardware\\ntrade-offs and constraints. It has been shown that learning-from-mistakes\\nsuccessfully trains small networks to function as binary classifiers and\\nperceptrons. However, without tailoring the hardware to the algorithm,\\nperformance decreases exponentially as the network size increases. 
When\\nimplementing neuromorphic algorithms on neuromorphic hardware, we investigate\\nthe trade-offs between depth, controllability, and capacity, the latter being\\nthe number of learnable patterns. We emphasize the significance of topology and\\nthe use of governing equations, demonstrating theoretical tools to aid in the\\nco-design of neuromorphic hardware and algorithms. We provide quantitative\\ntechniques to evaluate the computational capacity of a neuromorphic device\\nbased on the measurements performed and the underlying circuit structure. This\\napproach shows that breaking the symmetry of a neural network can increase both\\nthe controllability and average network capacity. By pruning the circuit,\\nneuromorphic algorithms in all-memristive device circuits leverage stochastic\\nresources to drive local contrast in network weights. Our combined experimental\\nand simulation efforts explore the parameters that make a network suited for\\ndisplaying emergent intelligence from simple rules.\",\"PeriodicalId\":501520,\"journal\":{\"name\":\"arXiv - PHYS - Statistical Mechanics\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Statistical Mechanics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.05183\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Statistical Mechanics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.05183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Hardware-based neuromorphic computing remains an elusive goal with the potential to profoundly impact future technologies and deepen our understanding of emergent intelligence. The learning-from-mistakes algorithm is one of the few training algorithms inspired by the brain's simple learning rules, utilizing inhibition and pruning to demonstrate self-organized learning. Here we implement this algorithm in purely neuromorphic memristive hardware through a co-design process. This implementation requires evaluating hardware trade-offs and constraints. It has been shown that learning-from-mistakes successfully trains small networks to function as binary classifiers and perceptrons. However, without tailoring the hardware to the algorithm, performance decreases exponentially as the network size increases. When implementing neuromorphic algorithms on neuromorphic hardware, we investigate the trade-offs between depth, controllability, and capacity, the latter being the number of learnable patterns. We emphasize the significance of topology and the use of governing equations, demonstrating theoretical tools to aid in the co-design of neuromorphic hardware and algorithms. We provide quantitative techniques to evaluate the computational capacity of a neuromorphic device based on the measurements performed and the underlying circuit structure. This approach shows that breaking the symmetry of a neural network can increase both the controllability and average network capacity. By pruning the circuit, neuromorphic algorithms in all-memristive device circuits leverage stochastic resources to drive local contrast in network weights. Our combined experimental and simulation efforts explore the parameters that make a network suited for displaying emergent intelligence from simple rules.
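To make the idea of learning by inhibition and pruning concrete, the toy script below trains a small simulated resistive network as a binary classifier by inhibiting (pruning) or reinforcing the edge most responsible for each wrong output. This is a minimal sketch under assumptions of our own (random patterns, a simple current threshold, a single-culprit update rule); it is not the memristive circuit model or the exact learning-from-mistakes algorithm implemented in the paper.

```python
# Hypothetical sketch: "learn from mistakes" by pruning or reinforcing the
# edge that contributed most to each wrong answer. All sizes, thresholds,
# and update rules here are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_patterns = 8, 4
weights = rng.uniform(0.5, 1.0, size=n_inputs)   # stand-in for edge conductances
active = np.ones(n_inputs, dtype=bool)           # edges not yet pruned

# Random binary patterns and targets (the "learnable patterns").
patterns = rng.integers(0, 2, size=(n_patterns, n_inputs))
targets = rng.integers(0, 2, size=n_patterns)

def predict(x):
    # Output is the total current through surviving edges, thresholded
    # at half of the maximum possible current.
    current = np.sum(weights * active * x)
    return int(current > 0.5 * np.sum(weights * active))

for epoch in range(50):
    mistakes = 0
    for x, t in zip(patterns, targets):
        y = predict(x)
        if y != t and active.any():
            mistakes += 1
            contribution = weights * active * x
            if contribution.max() > 0:
                culprit = int(np.argmax(contribution))
                if y == 1 and t == 0:
                    active[culprit] = False      # prune an over-contributing edge
                else:
                    weights[culprit] *= 1.2      # reinforce an under-contributing path
    if mistakes == 0:
        break

print(f"epochs used: {epoch + 1}, surviving edges: {int(active.sum())}")
```

In this hypothetical setup, pruning plays the role of inhibition and the surviving edge pattern is the learned state; whether a given set of patterns is learnable at all is one way to read the abstract's notion of capacity as the number of learnable patterns.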