NNEP, Design Pattern for Neural-Network-Based Embedded Systems
H. Esmaeilzadeh, M. Jamali, P. Saeedi, A. Moghimi, C. Lucas, S. M. Fakhraie
2007 14th International Conference on Mixed Design of Integrated Circuits and Systems
DOI: 10.1109/MIXDES.2007.4286248 · Published: 2007-06-21 · Cited by: 1
Abstract
With time-to-market becoming the most important issue in system design, reusing design experience as well as IP cores is critical. Design patterns, intended to simplify reuse, are design experiences that worked well in the past and are documented for reuse in the future. In this paper, a design pattern named NnEP (Neural-network-based Embedded systems design Pattern) is introduced for employing neural networks, a common bio-inspired solution, in SoC-based embedded systems. The pattern is based on the NnSP IP suite: a stream-processing core and its tool chain, NnSP Builder and Stream Compiler. NnEP is introduced to enhance and automate reuse in the design of intelligent SoCs requiring high-speed parallel computation, especially those based on neural networks. The NnEP pattern consists of the semi-automated steps, extracted from design experience, that a designer takes with the provided software suite to realize a neural-network application in an intelligent SoC: analyzing and pre-processing the application, building the IP core that best matches the application, and finally compiling the intended application onto the target core. ASIC implementation results for the NnSP soft core in a 0.18 µm process show that the core achieves 51.2 GOPS (25.6 GMAC/s). This throughput is comparable with existing parallel solutions and an order of magnitude higher than common general-purpose-processor-based solutions. This high throughput, together with the inherently reusable architecture of NnSP, makes NnEP a powerful design pattern for cutting-edge neural-network-based embedded applications such as pattern recognition, which is elaborated as a case study in the proposed design pattern.
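The three semi-automated steps that the abstract attributes to NnEP can be sketched as a simple pipeline. The NnSP Builder and Stream Compiler interfaces are not described in the abstract, so every name, class, and sizing heuristic below is an illustrative placeholder, not the actual tool-chain API:

```python
# Hypothetical sketch of the three NnEP steps named in the abstract:
# (1) application analysis / pre-processing, (2) building a best-match
# IP core, (3) compiling the NN application onto the target core.
# All identifiers here are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class NNApplication:
    name: str
    layers: list  # neuron counts per layer, e.g. [64, 32, 10]


@dataclass
class IPCoreConfig:
    processing_elements: int   # parallel processing elements in the core
    weight_memory_words: int   # total synaptic weights to store


def analyze_application(app: NNApplication) -> dict:
    """Step 1: extract placeholder workload metrics from the network."""
    # One multiply-accumulate per weight in a fully connected pass.
    macs = sum(a * b for a, b in zip(app.layers, app.layers[1:]))
    return {"macs_per_pass": macs, "widest_layer": max(app.layers)}


def build_best_match_core(profile: dict) -> IPCoreConfig:
    """Step 2: size the IP core to match the application profile."""
    return IPCoreConfig(
        processing_elements=profile["widest_layer"],
        weight_memory_words=profile["macs_per_pass"],
    )


def compile_for_core(app: NNApplication, core: IPCoreConfig) -> str:
    """Step 3: stand-in for compiling the application onto the core."""
    return (f"{app.name}: {core.processing_elements} PEs, "
            f"{core.weight_memory_words} weight words")


app = NNApplication("digit-recognizer", [64, 32, 10])
profile = analyze_application(app)
core = build_best_match_core(profile)
print(compile_for_core(app, core))
```

The point of the sketch is the ordering of the steps, not the sizing heuristic: each stage consumes the previous stage's output, mirroring the analyze → build → compile flow the pattern prescribes.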