Efficient hardware design of spiking neurons and unsupervised learning module in large scale pattern classification network

Impact Factor 7.5 · CAS Zone 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
{"title":"Efficient hardware design of spiking neurons and unsupervised learning module in large scale pattern classification network","authors":"","doi":"10.1016/j.engappai.2024.109255","DOIUrl":null,"url":null,"abstract":"<div><p>The main interest of high-precision, low-energy computing in machines with superior intelligence capabilities is to improve the performance of biologically spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in a SNN model with multiplier-less neuron modules. Propelled by the request for accurate and fast computations that diminishes costly resources in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs based on linear approximation of non-linear terms in learning module (exponential and fractional power) and neuron blocks (second power) are carefully elaborated to guarantee optimal speedup, low resource consumption, and accuracy. The architectures developed for Exp and Power implementations are illustrated and evaluated, leading to the presentation of digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital modules of learning mechanism and neuron was used to construct large scale event-based spiking neural network comprising of three layers, enabling unsupervised training with variable learning rate utilizing excitatory and inhibitory neural connections. As a results, the proposed bio-inspired SNN as a spiking pattern classification network with the proposed Pow-BTDP learning approach, by training on MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets with respectively 6, 2, 2 and 6 training epochs, achieved superior accuracy 97.9%, 97.8%, 94.2%, and 93.3% which indicate higher accuracy and convergence speed compare to previous works.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624014131","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

High-precision, low-energy computing for machines with advanced intelligence capabilities is largely motivated by the goal of improving the performance of biological spiking neural networks (SNNs). In this paper, we address this by presenting a new power-law update of synaptic weights based on burst time-dependent plasticity (Pow-BTDP) as a digital learning block in an SNN model with multiplier-less neuron modules. Driven by the demand for accurate and fast computation with reduced resource cost in neural network applications, this paper introduces an efficient hardware methodology based on linear approximations. The presented hardware designs, based on linear approximation of the non-linear terms in the learning module (exponential and fractional power) and in the neuron blocks (second power), are carefully elaborated to guarantee high speedup, low resource consumption, and accuracy. The architectures developed for the Exp and Power implementations are illustrated and evaluated, leading to a digital learning module and neuron block that enable efficient and accurate hardware computation. The proposed digital learning and neuron modules were used to construct a large-scale, event-based spiking neural network comprising three layers, enabling unsupervised training with a variable learning rate and both excitatory and inhibitory neural connections. As a result, the proposed bio-inspired SNN, used as a spiking pattern classification network with the Pow-BTDP learning approach and trained on the MNIST, EMNIST digits, EMNIST letters, and CIFAR10 datasets for 6, 2, 2, and 6 training epochs respectively, achieved accuracies of 97.9%, 97.8%, 94.2%, and 93.3%, indicating higher accuracy and faster convergence than previous works.
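To make the two ideas in the abstract concrete, the sketch below illustrates (a) a piecewise-linear, LUT-style approximation of an exponential term of the kind a hardware learning module might use in place of a true exp, and (b) a power-law, burst-timing-dependent weight update built on top of it. This is a minimal illustrative sketch only: the segment count, parameter names (`lr`, `tau`, `mu`), and update form are assumptions for exposition, not the paper's exact Pow-BTDP formulation or hardware architecture.

```python
import numpy as np

def pwl_exp_neg(x, n_segments=8, x_max=4.0):
    """Approximate exp(-x) on [0, x_max] with n_segments linear pieces.

    In hardware, the segment endpoints would live in a small lookup table,
    so no exponential unit (and no general multiplier) is needed at run time.
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, x_max)
    edges = np.linspace(0.0, x_max, n_segments + 1)
    idx = np.minimum((x / x_max * n_segments).astype(int), n_segments - 1)
    x0, x1 = edges[idx], edges[idx + 1]
    y0, y1 = np.exp(-x0), np.exp(-x1)          # precomputed segment endpoints (LUT)
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

def pow_btdp_update(w, dt, lr=0.01, tau=20.0, mu=0.9):
    """Hypothetical power-law, burst-timing-dependent potentiation step.

    dw = lr * f(dt/tau) * w**mu, where f is the piecewise-linear surrogate
    for the exponential timing factor. Names and constants are illustrative.
    """
    trace = pwl_exp_neg(dt / tau)               # timing-dependent factor
    return np.clip(w + lr * trace * np.power(w, mu), 0.0, 1.0)

if __name__ == "__main__":
    w = np.full(5, 0.5)                          # initial synaptic weights
    dt = np.array([1.0, 5.0, 10.0, 20.0, 40.0])  # pre/post burst time differences (ms)
    print(pow_btdp_update(w, dt))
```

The same approximation strategy extends to the fractional-power term (`w**mu`) itself, which could likewise be tabulated or linearized segment by segment; the paper's contribution is choosing these approximations so that accuracy, speed, and resource use stay balanced.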

Source Journal
Engineering Applications of Artificial Intelligence
CAS subject category: Engineering Technology – Electronic & Electrical Engineering
CiteScore: 9.60
Self-citation rate: 10.00%
Articles per year: 505
Review time: 68 days
Journal description: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes.