{"title":"The Improved Training Algorithm of Back Propagation Neural Network with Self-adaptive Learning Rate","authors":"Yong Li, Yang Fu, Hui Li, Siqi Zhang","doi":"10.1109/CINC.2009.111","DOIUrl":null,"url":null,"abstract":"This paper addresses the questions of improving convergence performance for back propagation (BP) neural network. For traditional BP neural network algorithm, the learning rate selection is depended on experience and trial. In this paper, based on Taylor formula the function relationship between the total quadratic training error change and connection weights and biases changes is obtained, and combined with weights and biases changes in batch BP learning algorithm, the formula for self-adaptive learning rate is given. Unlike existing algorithm, the self-adaptive learning rate depends on only neural network topology, training samples, average quadratic error and error curve surface gradient but not artificial selection. Simulation results show iteration times is significant less than that of traditional batch BP learning algorithm with constant learning rate.","PeriodicalId":173506,"journal":{"name":"2009 International Conference on Computational Intelligence and Natural Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"68","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 International Conference on Computational Intelligence and Natural Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CINC.2009.111","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 68
Abstract
This paper addresses the question of improving the convergence performance of back propagation (BP) neural networks. In the traditional BP algorithm, the learning rate is selected by experience and trial. In this paper, the functional relationship between the change in total quadratic training error and the changes in connection weights and biases is derived from the Taylor formula; combined with the weight and bias updates of the batch BP learning algorithm, this yields a formula for a self-adaptive learning rate. Unlike existing algorithms, the self-adaptive learning rate depends only on the network topology, the training samples, the average quadratic error, and the gradient of the error surface, rather than on manual selection. Simulation results show that the number of iterations required is significantly smaller than that of the traditional batch BP algorithm with a constant learning rate.
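The idea of deriving a step size from a first-order Taylor expansion of the error can be sketched as follows. A minimal illustrative example, not the paper's exact formula: expanding the total quadratic error gives E(w - η g) ≈ E(w) - η‖g‖², so choosing η = μ·E/‖g‖² targets a fixed fractional error decrease μ per epoch, with no hand-tuned constant learning rate. The network size, the XOR task, the factor μ, and the step cap are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set (batch learning: all samples used per update)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, 2-4-1 topology (illustrative choice)
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

mu = 0.5          # assumed target fraction of error removed per epoch
errors = []
for epoch in range(2000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    E = 0.5 * np.sum((Y - T) ** 2)      # total quadratic error over the batch
    errors.append(E)

    # Batch back propagation: gradients of E w.r.t. weights and biases
    dY = (Y - T) * Y * (1 - Y)
    gW2 = H.T @ dY; gb2 = dY.sum(0)
    dH = (dY @ W2.T) * H * (1 - H)
    gW1 = X.T @ dH; gb1 = dH.sum(0)

    # Self-adaptive learning rate from the first-order Taylor expansion:
    # E(w - eta*g) ≈ E - eta*||g||^2  =>  eta = mu*E/||g||^2.
    # Capped at 1.0 for numerical safety (an assumption of this sketch).
    gnorm2 = (gW1**2).sum() + (gb1**2).sum() + (gW2**2).sum() + (gb2**2).sum()
    eta = min(mu * E / max(gnorm2, 1e-12), 1.0)

    W1 -= eta * gW1; b1 -= eta * gb1
    W2 -= eta * gW2; b2 -= eta * gb2

print(f"error: {errors[0]:.3f} -> best {min(errors):.4f}")
```

The learning rate here is recomputed every epoch from quantities the abstract lists (the current error and the error-surface gradient), so no constant has to be picked in advance; the paper's actual formula additionally accounts for the network topology and training samples.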