{"title":"基于牛顿法的动态误差控制训练算法","authors":"S. J. Huang, S. N. Koh, H. K. Tang","doi":"10.1109/IJCNN.1992.227085","DOIUrl":null,"url":null,"abstract":"The use of Newton's method with dynamic error control as a training algorithm for the backpropagation (BP) neural network is considered. Theoretically, it can be proved that Newton's method is convergent in the second-order while the most widely used steepest-descent method is convergent in the first-order. This suggests that Newton's method might be a faster algorithm for the BP network. The updating equations of the two methods are analyzed in detail to extract some important properties with reference to the error surface characteristics. The common benchmark XOR problem is used to compare the performance of the methods.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Training algorithm based on Newton's method with dynamic error control\",\"authors\":\"S. J. Huang, S. N. Koh, H. K. Tang\",\"doi\":\"10.1109/IJCNN.1992.227085\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The use of Newton's method with dynamic error control as a training algorithm for the backpropagation (BP) neural network is considered. Theoretically, it can be proved that Newton's method is convergent in the second-order while the most widely used steepest-descent method is convergent in the first-order. This suggests that Newton's method might be a faster algorithm for the BP network. The updating equations of the two methods are analyzed in detail to extract some important properties with reference to the error surface characteristics. The common benchmark XOR problem is used to compare the performance of the methods.<<ETX>>\",\"PeriodicalId\":286849,\"journal\":{\"name\":\"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks\",\"volume\":\"120 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1992.227085\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1992.227085","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Training algorithm based on Newton's method with dynamic error control
The use of Newton's method with dynamic error control as a training algorithm for the backpropagation (BP) neural network is considered. Theoretically, Newton's method can be proved to be second-order convergent, whereas the most widely used steepest-descent method is only first-order convergent; this suggests that Newton's method may be a faster training algorithm for the BP network. The updating equations of the two methods are analyzed in detail to extract some important properties relating to the characteristics of the error surface. The common XOR benchmark problem is used to compare the performance of the two methods.
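To make the two updating equations concrete, the sketch below contrasts the steepest-descent step, w <- w - eta * grad E(w), with a damped Newton step, w <- w - (H + mu*I)^(-1) * grad E(w), on a 2-2-1 sigmoid network trained on XOR. This is a minimal illustrative sketch, not the authors' implementation: the finite-difference derivatives, the damping term mu, and the simple step-acceptance test standing in for the paper's dynamic error control are all assumptions made here for brevity.

# Minimal sketch (assumed details, not the paper's code): damped Newton
# training of a 2-2-1 sigmoid network on the XOR benchmark.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # 2x2 hidden weights, 2 hidden biases, 2 output weights, 1 output bias
    W1 = w[0:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];               b2 = w[8]
    return W1, b1, W2, b2

def loss(w):
    # sum-of-squares output error E(w)
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1.T + b1)          # hidden activations, shape (4, 2)
    y = sigmoid(h @ W2 + b2)            # network outputs,    shape (4,)
    return 0.5 * np.sum((y - T) ** 2)

def grad(w, eps=1e-6):
    # central-difference gradient keeps the sketch short and dependency-free
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

def hessian(w, eps=1e-5):
    # Hessian from finite differences of the gradient
    H = np.zeros((w.size, w.size))
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        H[:, i] = (grad(w + d) - grad(w - d)) / (2 * eps)
    return 0.5 * (H + H.T)              # symmetrize away numerical noise

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=9)
mu = 1e-3                               # damping, adapted each step
for step in range(50):
    g, H = grad(w), hessian(w)
    # Newton step: solve (H + mu*I) dw = -g rather than inverting H
    dw = np.linalg.solve(H + mu * np.eye(w.size), -g)
    if loss(w + dw) < loss(w):          # crude acceptance test (assumption)
        w, mu = w + dw, max(mu / 10, 1e-8)
    else:
        mu *= 10                        # back off toward a gradient-like step
print("final loss:", loss(w))

Solving the damped linear system instead of inverting H directly keeps the Newton step well defined when the error surface makes H indefinite or ill-conditioned, which is exactly the regime where a pure Newton update can fail while steepest descent still makes progress.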