{"title":"大规模模式分类网络中尖峰神经元和无监督学习模块的高效硬件设计","authors":"","doi":"10.1016/j.engappai.2024.109255","DOIUrl":null,"url":null,"abstract":"<div><p>The main interest of high-precision, low-energy computing in machines with superior intelligence capabilities is to improve the performance of biologically spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in a SNN model with multiplier-less neuron modules. Propelled by the request for accurate and fast computations that diminishes costly resources in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs based on linear approximation of non-linear terms in learning module (exponential and fractional power) and neuron blocks (second power) are carefully elaborated to guarantee optimal speedup, low resource consumption, and accuracy. The architectures developed for Exp and Power implementations are illustrated and evaluated, leading to the presentation of digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital modules of learning mechanism and neuron was used to construct large scale event-based spiking neural network comprising of three layers, enabling unsupervised training with variable learning rate utilizing excitatory and inhibitory neural connections. 
As a results, the proposed bio-inspired SNN as a spiking pattern classification network with the proposed Pow-BTDP learning approach, by training on MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets with respectively 6, 2, 2 and 6 training epochs, achieved superior accuracy 97.9%, 97.8%, 94.2%, and 93.3% which indicate higher accuracy and convergence speed compare to previous works.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Efficient hardware design of spiking neurons and unsupervised learning module in large scale pattern classification network\",\"authors\":\"\",\"doi\":\"10.1016/j.engappai.2024.109255\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The main interest of high-precision, low-energy computing in machines with superior intelligence capabilities is to improve the performance of biologically spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in a SNN model with multiplier-less neuron modules. Propelled by the request for accurate and fast computations that diminishes costly resources in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs based on linear approximation of non-linear terms in learning module (exponential and fractional power) and neuron blocks (second power) are carefully elaborated to guarantee optimal speedup, low resource consumption, and accuracy. 
The architectures developed for Exp and Power implementations are illustrated and evaluated, leading to the presentation of digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital modules of learning mechanism and neuron was used to construct large scale event-based spiking neural network comprising of three layers, enabling unsupervised training with variable learning rate utilizing excitatory and inhibitory neural connections. As a results, the proposed bio-inspired SNN as a spiking pattern classification network with the proposed Pow-BTDP learning approach, by training on MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets with respectively 6, 2, 2 and 6 training epochs, achieved superior accuracy 97.9%, 97.8%, 94.2%, and 93.3% which indicate higher accuracy and convergence speed compare to previous works.</p></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197624014131\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial 
Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624014131","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Efficient hardware design of spiking neurons and unsupervised learning module in large scale pattern classification network
High-precision, low-energy computing for machines with advanced intelligence capabilities is chiefly concerned with improving the performance of biologically inspired spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in an SNN model with multiplier-less neuron modules. Driven by the demand for accurate, fast computation that reduces costly resources in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs, based on linear approximation of the non-linear terms in the learning module (exponential and fractional power) and the neuron blocks (second power), are carefully elaborated to guarantee optimal speedup, low resource consumption, and accuracy. The architectures developed for the exponential and power implementations are illustrated and evaluated, leading to a digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital learning and neuron modules were used to construct a large-scale event-based spiking neural network comprising three layers, enabling unsupervised training with a variable learning rate using excitatory and inhibitory neural connections. As a result, the proposed bio-inspired SNN, a spiking pattern classification network trained with the proposed Pow-BTDP learning approach on the MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets for 6, 2, 2, and 6 training epochs respectively, achieved accuracies of 97.9%, 97.8%, 94.2%, and 93.3%, indicating higher accuracy and faster convergence compared to previous works.
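The linear-approximation strategy the abstract describes can be sketched in software. The following Python snippet is an illustrative sketch, not the paper's implementation: the segment counts, input ranges, and function choices are assumptions, and an actual hardware design would replace the floating-point arithmetic with precomputed slope/intercept tables realized with shifts and adders.

```python
import math

def pw_linear(f, x, x_min, x_max, segments=8):
    """Evaluate f(x) via a piecewise-linear approximation on [x_min, x_max].

    Each segment needs only one multiply and one add at run time, the kind
    of operation a multiplier-less or low-resource digital design can
    realize cheaply; the breakpoint values f(x0), f(x0 + step) would be
    precomputed constants in hardware.
    """
    x = min(max(x, x_min), x_max)            # clamp to the approximated range
    step = (x_max - x_min) / segments
    i = min(int((x - x_min) / step), segments - 1)
    x0 = x_min + i * step                    # left breakpoint of segment i
    y0, y1 = f(x0), f(x0 + step)             # table entries for this segment
    return y0 + (y1 - y0) / step * (x - x0)  # one multiply, one add

# Exponential term, as in an exponentially decaying learning window:
approx_exp = pw_linear(math.exp, -0.9, -4.0, 0.0)
# Fractional-power term, e.g. a hypothetical w**0.5 in a power-law update:
approx_pow = pw_linear(lambda w: w ** 0.5, 0.3, 0.0, 1.0)
```

Increasing `segments` shrinks the approximation error at the cost of a larger lookup table, which is the speed/resource/accuracy trade-off the paper's hardware evaluation targets.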
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.