NNEP, Design Pattern for Neural-Network-Based Embedded Systems

H. Esmaeilzadeh, M. Jamali, P. Saeedi, A. Moghimi, C. Lucas, S. M. Fakhraie
DOI: 10.1109/MIXDES.2007.4286248
Published in: 2007 14th International Conference on Mixed Design of Integrated Circuits and Systems
Publication date: 2007-06-21
Citations: 1

Abstract

With time-to-market becoming the most important issue in system design, reusing design experience as well as IP cores is critical. Design patterns, intended to simplify the reuse process, are design experiences that worked well in the past, documented for future reuse. In this paper, a design pattern named NnEP (Neural-network-based Embedded systems design Pattern) is introduced for employing neural networks, common bio-inspired solutions, in SoC-based embedded systems. This pattern is based on the NnSP IP suite: a stream processing core and its tool chain, NnSP Builder and Stream Compiler. NnEP is introduced to enhance and automate reuse in the design of intelligent SoCs requiring high-speed parallel computation, especially those based on neural networks. The NnEP pattern consists of semi-automated steps, extracted from design experience, that a designer takes with the provided software suite to realize an NN application in an intelligent SoC: analyzing and pre-processing the application, building the IP core that best matches the application, and finally compiling the intended NN application onto the target IP core. ASIC 0.18 µm implementation results of the NnSP soft core show that it can reach 51.2 GOPS (25.6 GMAC/s). This throughput is comparable with existing parallel solutions and an order of magnitude higher than common general-purpose-processor-based solutions. This high throughput, in conjunction with the inherently reusable architecture of NnSP, makes NnEP a powerful design pattern for cutting-edge neural-network-based embedded applications such as pattern recognition, which is elaborated as a case study in the proposed design pattern.
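The two throughput figures quoted above are internally consistent under the usual convention that one multiply-accumulate (MAC) counts as two operations (one multiply plus one add). A minimal sanity-check sketch, assuming that convention:

```python
# Sanity-check the reported NnSP throughput figures: under the common
# convention that one MAC = 2 operations (multiply + accumulate),
# the GOPS rate should be exactly twice the GMAC/s rate.
gmac_per_s = 25.6      # reported MAC throughput of the NnSP core
ops_per_mac = 2        # one multiply + one accumulate per MAC
gops = gmac_per_s * ops_per_mac

print(gops)  # 51.2, matching the reported 51.2 GOPS
```

This confirms the 51.2 GOPS figure is the same measurement as 25.6 GMAC/s expressed in generic operations, not an independent result.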