{"title":"The lattice-ladder neuron and its training circuit implementation in FPGA","authors":"T. Sledevič, D. Navakauskas","doi":"10.1109/AIEEE.2014.7020327","DOIUrl":null,"url":null,"abstract":"FPGA implementation of a lattice-ladder multilayer perceptron structure together with its training algorithm in a full scale seems attractive, however there is a lack of preliminary results on the choice of implementation architecture. The aim of this investigation was to get insights on the selected neuron model fixed-point architecture (necessary to use word length) and its complexity (required number of LUT and DSP slices and BRAM size) by the evaluation of the reproduced by lattice-ladder neuron accuracy of bandwidth and central frequency as also as output signal normalized mean error. Thus the second order fixed-point normalized lattice-ladder neuron with its training algorithm was implemented in Artix-7 FPGA. The experiments were performed using various bandwidths and word length constrains. In general increase of word length yielded smaller mean errors. However the limited size BRAM used for trigonometric function LUTs was a bottleneck to improve the precision while doubling the number of DSP slices.","PeriodicalId":117147,"journal":{"name":"2014 IEEE 2nd Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE 2nd Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIEEE.2014.7020327","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The lattice-ladder neuron and its training circuit implementation in FPGA
A full-scale FPGA implementation of a lattice-ladder multilayer perceptron structure together with its training algorithm seems attractive; however, preliminary results on the choice of implementation architecture are lacking. The aim of this investigation was to gain insight into the fixed-point architecture of the selected neuron model (the necessary word length) and its complexity (the required number of LUT and DSP slices and the BRAM size) by evaluating the accuracy of the bandwidth and central frequency reproduced by the lattice-ladder neuron, as well as the normalized mean error of the output signal. To this end, a second-order fixed-point normalized lattice-ladder neuron, together with its training algorithm, was implemented in an Artix-7 FPGA. The experiments were performed under various bandwidth and word-length constraints. In general, increasing the word length yielded smaller mean errors. However, the limited-size BRAM used for the trigonometric function LUTs became a bottleneck for improving precision while doubling the number of DSP slices.
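To make the abstract's ingredients concrete, the sketch below shows a second-order normalized lattice-ladder section in fixed-point arithmetic, with rotation coefficients sin/cos(theta_j) drawn from a small sine lookup table (standing in for the BRAM trigonometric LUTs the paper mentions). This is a minimal illustration, not the authors' circuit: the Q1.15 word length, the 256-entry table depth, the coefficient values, and all identifiers are assumptions chosen for the example, and saturation logic is omitted for brevity.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Q1.15 fixed-point helpers. The 16-bit word length is an assumption
 * for illustration; the paper sweeps several word lengths. */
typedef int16_t q15_t;

static inline q15_t q15(double x)       { return (q15_t)lrint(x * 32768.0); }
static inline double q15_to_d(q15_t a)  { return a / 32768.0; }
static inline q15_t q15_mul(q15_t a, q15_t b) {
    /* 16x16 -> 32-bit product rescaled to Q1.15; on the FPGA each such
     * multiplication maps to a DSP slice. */
    return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
}

/* Hypothetical BRAM-style sine table: 256 entries over [0, 2*pi). Its
 * depth fixes the angle resolution of sin/cos(theta_j), which is the
 * kind of precision bottleneck the paper reports. */
#define SIN_LUT_LEN 256
static q15_t sin_lut[SIN_LUT_LEN];

static void init_sin_lut(void) {
    const double two_pi = 6.283185307179586;
    for (int i = 0; i < SIN_LUT_LEN; i++)  /* scale slightly below 1.0 */
        sin_lut[i] = q15(0.999969 * sin(two_pi * i / SIN_LUT_LEN));
}

/* Second-order normalized lattice-ladder neuron (linear part). */
typedef struct {
    q15_t sin_t[2], cos_t[2]; /* rotation coefficients sin/cos(theta_1..2) */
    q15_t v[3];               /* ladder weights v_0..v_2 */
    q15_t b_delay[2];         /* delayed backward signals b_0(n-1), b_1(n-1) */
} lln_t;

/* One sample through the normalized lattice-ladder recursion
 * (for j = 2 down to 1):
 *   f_{j-1}(n) = cos(theta_j) f_j(n) - sin(theta_j) b_{j-1}(n-1)
 *   b_j(n)     = sin(theta_j) f_j(n) + cos(theta_j) b_{j-1}(n-1)
 *   b_0(n)     = f_0(n),   y(n) = sum_j v_j b_j(n)
 * Saturation on additions is omitted for brevity. */
static q15_t lln_step(lln_t *s, q15_t x) {
    q15_t f = x, b[3];
    for (int j = 2; j >= 1; j--) {
        q15_t fp = q15_mul(s->cos_t[j-1], f)
                 - q15_mul(s->sin_t[j-1], s->b_delay[j-1]);
        b[j]     = q15_mul(s->sin_t[j-1], f)
                 + q15_mul(s->cos_t[j-1], s->b_delay[j-1]);
        f = fp;
    }
    b[0] = f;
    s->b_delay[0] = b[0];
    s->b_delay[1] = b[1];
    int32_t y = 0;
    for (int j = 0; j < 3; j++)
        y += q15_mul(s->v[j], b[j]);
    return (q15_t)y; /* a nonlinear activation LUT would follow here */
}

int main(void) {
    init_sin_lut();
    /* Coefficients read from the LUT; the theta indices (10, 20) and
     * ladder weights are arbitrary illustration values. The +64 offset
     * (a quarter period) turns the sine table into a cosine lookup. */
    lln_t s = {
        .sin_t   = { sin_lut[10], sin_lut[20] },
        .cos_t   = { sin_lut[(10 + 64) % 256], sin_lut[(20 + 64) % 256] },
        .v       = { q15(0.25), q15(0.25), q15(0.25) },
        .b_delay = { 0, 0 },
    };
    /* Impulse response of the fixed-point section. */
    for (int n = 0; n < 8; n++) {
        q15_t x = (n == 0) ? q15(0.5) : 0;
        printf("y[%d] = %+.6f\n", n, q15_to_d(lln_step(&s, x)));
    }
    return 0;
}
```

The quarter-period offset trick above is one common way to serve both sin and cos from a single table; it halves the BRAM cost per coefficient pair, which matters when, as the abstract notes, the trigonometric LUT size is what limits attainable precision.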