Frank Barrows, Jonathan Lin, Francesco Caravelli, Dante R. Chialvo
{"title":"不受控制的学习:为神经形态算法共同设计神经形态硬件拓扑结构","authors":"Frank Barrows, Jonathan Lin, Francesco Caravelli, Dante R. Chialvo","doi":"arxiv-2408.05183","DOIUrl":null,"url":null,"abstract":"Hardware-based neuromorphic computing remains an elusive goal with the\npotential to profoundly impact future technologies and deepen our understanding\nof emergent intelligence. The learning-from-mistakes algorithm is one of the\nfew training algorithms inspired by the brain's simple learning rules,\nutilizing inhibition and pruning to demonstrate self-organized learning. Here\nwe implement this algorithm in purely neuromorphic memristive hardware through\na co-design process. This implementation requires evaluating hardware\ntrade-offs and constraints. It has been shown that learning-from-mistakes\nsuccessfully trains small networks to function as binary classifiers and\nperceptrons. However, without tailoring the hardware to the algorithm,\nperformance decreases exponentially as the network size increases. When\nimplementing neuromorphic algorithms on neuromorphic hardware, we investigate\nthe trade-offs between depth, controllability, and capacity, the latter being\nthe number of learnable patterns. We emphasize the significance of topology and\nthe use of governing equations, demonstrating theoretical tools to aid in the\nco-design of neuromorphic hardware and algorithms. We provide quantitative\ntechniques to evaluate the computational capacity of a neuromorphic device\nbased on the measurements performed and the underlying circuit structure. This\napproach shows that breaking the symmetry of a neural network can increase both\nthe controllability and average network capacity. By pruning the circuit,\nneuromorphic algorithms in all-memristive device circuits leverage stochastic\nresources to drive local contrast in network weights. Our combined experimental\nand simulation efforts explore the parameters that make a network suited for\ndisplaying emergent intelligence from simple rules.","PeriodicalId":501520,"journal":{"name":"arXiv - PHYS - Statistical Mechanics","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Uncontrolled learning: co-design of neuromorphic hardware topology for neuromorphic algorithms\",\"authors\":\"Frank Barrows, Jonathan Lin, Francesco Caravelli, Dante R. Chialvo\",\"doi\":\"arxiv-2408.05183\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hardware-based neuromorphic computing remains an elusive goal with the\\npotential to profoundly impact future technologies and deepen our understanding\\nof emergent intelligence. The learning-from-mistakes algorithm is one of the\\nfew training algorithms inspired by the brain's simple learning rules,\\nutilizing inhibition and pruning to demonstrate self-organized learning. Here\\nwe implement this algorithm in purely neuromorphic memristive hardware through\\na co-design process. This implementation requires evaluating hardware\\ntrade-offs and constraints. It has been shown that learning-from-mistakes\\nsuccessfully trains small networks to function as binary classifiers and\\nperceptrons. However, without tailoring the hardware to the algorithm,\\nperformance decreases exponentially as the network size increases. 
When\\nimplementing neuromorphic algorithms on neuromorphic hardware, we investigate\\nthe trade-offs between depth, controllability, and capacity, the latter being\\nthe number of learnable patterns. We emphasize the significance of topology and\\nthe use of governing equations, demonstrating theoretical tools to aid in the\\nco-design of neuromorphic hardware and algorithms. We provide quantitative\\ntechniques to evaluate the computational capacity of a neuromorphic device\\nbased on the measurements performed and the underlying circuit structure. This\\napproach shows that breaking the symmetry of a neural network can increase both\\nthe controllability and average network capacity. By pruning the circuit,\\nneuromorphic algorithms in all-memristive device circuits leverage stochastic\\nresources to drive local contrast in network weights. Our combined experimental\\nand simulation efforts explore the parameters that make a network suited for\\ndisplaying emergent intelligence from simple rules.\",\"PeriodicalId\":501520,\"journal\":{\"name\":\"arXiv - PHYS - Statistical Mechanics\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Statistical Mechanics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.05183\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Statistical Mechanics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.05183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Uncontrolled learning: co-design of neuromorphic hardware topology for neuromorphic algorithms
Hardware-based neuromorphic computing remains an elusive goal, with the potential to profoundly impact future technologies and deepen our understanding of emergent intelligence. The learning-from-mistakes algorithm is one of the few training algorithms inspired by the brain's simple learning rules, using inhibition and pruning to demonstrate self-organized learning.
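For intuition, here is a minimal sketch of such a rule in the spirit of the Chialvo-Bak learning-from-mistakes model: activity propagates winner-take-all, and the synapses that just fired are depressed (inhibited) only when the output is wrong. The network size, patterns, and step size below are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy layered network: input -> hidden -> output.
n_in, n_hid, n_out = 4, 8, 2
w1 = rng.random((n_in, n_hid))   # input-to-hidden weights
w2 = rng.random((n_hid, n_out))  # hidden-to-output weights

def forward(i):
    # Winner-take-all: the most strongly connected unit fires at each stage.
    h = int(np.argmax(w1[i]))
    o = int(np.argmax(w2[h]))
    return h, o

def train(patterns, delta=0.1, epochs=500):
    # Learning from mistakes: depress only the synapses on the active path,
    # and only when the output is wrong; correct outputs are left untouched.
    for _ in range(epochs):
        all_correct = True
        for i, target in patterns:
            h, o = forward(i)
            if o != target:
                all_correct = False
                w1[i, h] -= delta
                w2[h, o] -= delta
        if all_correct:
            return True
    return False

patterns = [(0, 1), (1, 0), (2, 1), (3, 0)]  # hypothetical binary targets
print("learned:", train(patterns))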
Here we implement this algorithm in purely neuromorphic memristive hardware through a co-design process. This implementation requires evaluating hardware trade-offs and constraints. Learning-from-mistakes has been shown to successfully train small networks to function as binary classifiers and perceptrons; however, without tailoring the hardware to the algorithm, performance decreases exponentially as the network size increases. In implementing neuromorphic algorithms on neuromorphic hardware, we investigate the trade-offs among depth, controllability, and capacity, where capacity is the number of learnable patterns.
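Continuing the toy sketch above (and reusing its rng, train, and weight matrices), one illustrative proxy for capacity is the largest random pattern set the network learns reliably. The paper's quantitative techniques are instead grounded in the circuit structure itself, so treat this purely as a behavioral stand-in.

def reset_weights():
    # Re-randomize the toy network from the previous sketch.
    global w1, w2
    w1 = rng.random((n_in, n_hid))
    w2 = rng.random((n_hid, n_out))

def estimate_capacity(trials=20):
    # Largest k such that k random input->output patterns are learned
    # in more than 90% of trials.
    capacity = 0
    for k in range(1, n_in + 1):
        successes = 0
        for _ in range(trials):
            reset_weights()
            patterns = [(i, int(rng.integers(n_out))) for i in range(k)]
            successes += train(patterns)
        if successes / trials > 0.9:
            capacity = k
    return capacity

print("estimated capacity:", estimate_capacity())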
We emphasize the significance of topology and the use of governing equations, demonstrating theoretical tools to aid in the co-design of neuromorphic hardware and algorithms. We provide quantitative techniques to evaluate the computational capacity of a neuromorphic device based on the measurements performed and the underlying circuit structure.
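The governing equations themselves are not reproduced in the abstract. A standard starting point for purely memristive circuits, due to Caravelli, Traversa, and Di Ventra, is the network equation (whether the paper uses exactly this form is an assumption on my part):

\frac{d\vec{x}}{dt} = -\alpha \vec{x} + \frac{1}{\beta} \left( I - \chi \Omega X \right)^{-1} \Omega \vec{s},

where \vec{x} collects the internal memory variables of the memristors, X = \mathrm{diag}(\vec{x}), \Omega is the projector onto the cycle space of the circuit graph (this is where the network topology enters), \vec{s} is the vector of applied voltages, and \alpha, \beta, \chi are device parameters, with \chi set by the spread between the off- and on-resistances.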
This approach shows that breaking the symmetry of a neural network can increase both the controllability and the average network capacity. Through circuit pruning, neuromorphic algorithms running on all-memristive device circuits leverage stochastic resources to drive local contrast in the network weights.
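A small self-contained illustration of that claim (the contrast measure and the noise model are my assumptions, not the paper's): pruning a stochastic, initially near-uniform weight matrix pushes it toward a bimodal on/off distribution, i.e., high local contrast.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical all-memristive layer: nominally identical devices with
# stochastic device-to-device spread as the only source of variation.
w = 1.0 + 0.1 * rng.standard_normal((16, 16))
print("weight spread before pruning:", round(w.std(), 3))

# Prune the weakest half of the connections; the stochastic spread
# decides which devices survive, breaking the network's symmetry.
threshold = np.quantile(w, 0.5)
w_pruned = np.where(w > threshold, w, 0.0)
print("weight spread after pruning:", round(w_pruned.std(), 3))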
Our combined experimental and simulation efforts explore the parameters that make a network suited to displaying emergent intelligence from simple rules.