{"title":"Improvement on the learning performance of multiplierless multilayer neural network","authors":"H. Hikawa","doi":"10.1109/ISCAS.1997.608907","DOIUrl":null,"url":null,"abstract":"In this paper, improved multiplierless multilayer neural network (MNN) with on-chip learning is proposed. Using three-state function as the activating function, multipliers are replaced by much simpler circuit. The back-propagation algorithm is modified to have no multiplier and the algorithm is implemented with pulse mode operation. This learning circuit is modified to improve the rate of successful learning. The derivative function of neurons which is used in the learning algorithm is changed for the higher learning rate. The modification is very simple, and the additional circuit for this modification is very small. To verify the feasibility of the proposed method, the modified MNN is implemented on FPGAs and tested by experiment, and the detail of the learning performance is tested by computer simulations. These results show that the learning rate can be greatly improved by using the proposed MNN architecture. Also, the experimental result shows that the proposed MNN has a very fast operation of 17.9/spl times/10/sup 6/ connections per second (CPS) and 11.7/spl times/10/sup 6/ connection updates per second (CUPS).","PeriodicalId":68559,"journal":{"name":"电路与系统学报","volume":"125 1","pages":"641-644 vol.1"},"PeriodicalIF":0.0000,"publicationDate":"1997-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"电路与系统学报","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.1109/ISCAS.1997.608907","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
In this paper, an improved multiplierless multilayer neural network (MNN) with on-chip learning is proposed. By using a three-state function as the activation function, the multipliers are replaced by a much simpler circuit. The back-propagation algorithm is modified so that it requires no multiplier, and the algorithm is implemented with pulse-mode operation. The learning circuit is further modified to improve the rate of successful learning: the neuron derivative function used in the learning algorithm is changed to achieve a higher learning rate. The modification is very simple, and the additional circuitry it requires is very small. To verify the feasibility of the proposed method, the modified MNN is implemented on FPGAs and tested experimentally, and the learning performance is examined in detail by computer simulation. The results show that the rate of successful learning can be greatly improved by the proposed MNN architecture. The experimental results also show that the proposed MNN operates very fast, achieving 17.9×10^6 connections per second (CPS) and 11.7×10^6 connection updates per second (CUPS).
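The abstract does not give the exact formulation, but the following minimal Python sketch illustrates the general idea of a three-state activation and a multiplier-free weight update. The threshold value, the sign/shift-based update rule, and the function names are illustrative assumptions, not the authors' pulse-mode hardware implementation.

```python
import numpy as np

def three_state_activation(x, theta=0.5):
    # Three-state (ternary) activation: outputs -1, 0, or +1 depending on
    # whether the input is below -theta, within [-theta, theta], or above theta.
    # The threshold theta is an illustrative assumption.
    return np.where(x > theta, 1.0, np.where(x < -theta, -1.0, 0.0))

def multiplierless_update(weight, error_sign, input_state, shift=4):
    # Multiplier-free update sketch: because the input state is in {-1, 0, +1},
    # the product error * input reduces to a sign selection, and the learning
    # rate 2**-shift can be realized as a bit shift in hardware.
    # This is an assumed simplification, not the paper's exact pulse-mode rule.
    delta = error_sign * input_state        # in {-1, 0, +1}, no multiplier needed
    return weight + delta / (1 << shift)    # divide by 2**shift (bit shift)

# Example: one forward evaluation and one weight update for a single connection
x = np.array([0.8, -0.2, 0.1])
state = three_state_activation(x)           # -> array([1., 0., 0.])
w_new = multiplierless_update(0.25, error_sign=-1.0, input_state=state[0])
```

Because the activation takes only three values, the back-propagated products degenerate into sign selections, which is consistent with the paper's claim that the multipliers can be replaced by much simpler circuitry.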