{"title":"Restricted learning algorithm and its application to neural network training","authors":"T. Miyamura, I. Yamada, K. Sakaniwa","doi":"10.1109/NNSP.1991.239528","DOIUrl":null,"url":null,"abstract":"The authors propose a new (semi)-optimization algorithm, called the restricted learning algorithm, for a nonnegative evaluating function which is 2 times continuously differentiable on a compact set Omega in R/sup N/. The restricted learning algorithm utilizes the maximal excluding regions which are newly derived, and is shown to converge to the global in -optimum in Omega . A most effective application of the proposed algorithm is the training of multi-layered neural networks. In this case, one can estimate the Lipschitz's constants for the evaluating function and its derivative very efficiently and thereby we can obtain sufficiently large excluding regions. It is confirmed through numerical examples that the proposed restricted learning algorithm provides much better performance than the conventional back propagation algorithm and its modified versions.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1991.239528","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The authors propose a new (semi-)optimization algorithm, called the restricted learning algorithm, for a nonnegative evaluating function that is twice continuously differentiable on a compact set Ω in R^N. The restricted learning algorithm utilizes newly derived maximal excluding regions and is shown to converge to the global ε-optimum in Ω. A most effective application of the proposed algorithm is the training of multilayer neural networks: in this case, the Lipschitz constants of the evaluating function and its derivative can be estimated very efficiently, yielding sufficiently large excluding regions. Numerical examples confirm that the proposed restricted learning algorithm performs substantially better than the conventional backpropagation algorithm and its modified versions.
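The abstract does not give the algorithm's details, but the core idea it names (excluding regions derived from a Lipschitz constant of a nonnegative objective) can be illustrated. The sketch below is a minimal, hypothetical reading of that idea, not the paper's actual method: if E has Lipschitz constant L on Ω and E(x) > ε at a sampled point x, then E(y) ≥ E(x) − L‖y − x‖ > ε for all y within radius (E(x) − ε)/L of x, so that ball can be excluded from the search for an ε-optimum. The function name, sampling scheme, and parameters are all assumptions for illustration.

```python
import numpy as np

def restricted_search(E, lipschitz_L, bounds, eps=0.05, n_samples=5000, seed=0):
    """Hypothetical sketch of Lipschitz-based exclusion-region search.

    E           -- nonnegative objective, assumed Lipschitz on the box Omega
    lipschitz_L -- a Lipschitz constant of E on Omega
    bounds      -- (lo, hi) arrays defining the compact box Omega
    Stops once a point with E(x) <= eps (an eps-optimum) is found.
    """
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, bounds)
    centers, radii = [], []          # accumulated exclusion balls
    best_x, best_val = None, np.inf
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        # Skip candidates already covered by an exclusion ball:
        # every point there provably has E > eps.
        if any(np.linalg.norm(x - c) < r for c, r in zip(centers, radii)):
            continue
        v = E(x)
        if v < best_val:
            best_x, best_val = x, v
            if best_val <= eps:      # eps-optimum reached
                break
        r = (v - eps) / lipschitz_L  # excluding radius around x
        if r > 0:
            centers.append(x)
            radii.append(r)
    return best_x, best_val
```

The exclusion balls never cover any point with E ≤ ε, so the search cannot discard an ε-optimum; a larger Lipschitz constant estimate only shrinks the balls and makes the search more conservative, which is consistent with the abstract's emphasis on obtaining tight Lipschitz constants to get large excluding regions.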