An Efficient Event-driven Neuromorphic Architecture for Deep Spiking Neural Networks

Duy-Anh Nguyen, Duy-Hieu Bui, F. Iacopi, Xuan-Tu Tran
DOI: 10.1109/SOCC46988.2019.1570548305
Published in: 2019 32nd IEEE International System-on-Chip Conference (SOCC), September 2019
Citations: 2

Abstract

Deep Neural Networks (DNNs) have been successfully applied to various real-world machine learning applications. However, performing large DNN inference tasks in real time remains a challenge due to their substantial computational cost. Recently, Spiking Neural Networks (SNNs) have emerged as an alternative way of processing DNN tasks. Due to their event-based, data-driven computation, SNNs reduce both inference latency and complexity. With efficient conversion methods from traditional DNNs, SNNs exhibit similar accuracy while leveraging many state-of-the-art network models and training methods. In this work, an efficient neuromorphic hardware architecture for image recognition tasks is presented. To preserve accuracy, an analog-to-spiking conversion algorithm is adopted. The system aims to minimize hardware area cost and power consumption, enabling neuromorphic processing in edge devices. Simulation results show that, on the MNIST digit recognition task, the system achieves a 20× reduction in core area cost compared to state-of-the-art works, with an accuracy of 94.4% and a core area of 15 $\mu m^{2}$ at a maximum frequency of 250 MHz.
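The abstract does not detail the paper's specific conversion algorithm or hardware design. As a general illustration of the rate-coding principle that underlies typical ANN-to-SNN conversion (not the authors' implementation), the sketch below simulates one layer of integrate-and-fire (IF) neurons with subtractive reset: inputs are encoded as spike probabilities per timestep, and each neuron's output spike rate over the simulation window approximates the ReLU activation of the equivalent analog layer. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def simulate_if_layer(weights, inputs, timesteps=1000, threshold=1.0):
    """Simulate one integrate-and-fire (IF) layer with rate-coded inputs.

    Each timestep: input spikes are drawn with probability equal to the
    (clipped) input intensity, neurons integrate the weighted spike
    current into their membrane potential, and a neuron that crosses the
    threshold emits an output spike and is reset by subtraction.
    Returns the per-neuron output firing rate, which approximates
    ReLU(weights @ inputs) for inputs in [0, 1].
    """
    rng = np.random.default_rng(0)
    n_out = weights.shape[0]
    v = np.zeros(n_out)                 # membrane potentials
    spike_counts = np.zeros(n_out)      # output spike tally
    rates = np.clip(inputs, 0.0, 1.0)   # rate-coded input intensities
    for _ in range(timesteps):
        in_spikes = (rng.random(inputs.shape) < rates).astype(float)
        v += weights @ in_spikes        # event-driven: integrate weighted input spikes
        fired = v >= threshold
        spike_counts += fired
        v[fired] -= threshold           # reset by subtraction preserves residual charge
    return spike_counts / timesteps

w = np.array([[1.0, -1.0],
              [0.5,  0.5]])
x = np.array([0.8, 0.2])
out_rates = simulate_if_layer(w, x)
# out_rates ≈ ReLU(w @ x) = [0.6, 0.5], up to sampling noise
```

The subtractive (rather than zeroing) reset is what makes the long-run spike rate track the analog activation; this rate/activation equivalence is the usual basis for converting a trained DNN into an SNN without retraining.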