{"title":"反向传播神经网络的终端吸引子学习算法","authors":"S.-D. Wang, Chia-Hung Hsu","doi":"10.1109/IJCNN.1991.170401","DOIUrl":null,"url":null,"abstract":"Novel learning algorithms called terminal attractor backpropagation (TABP) and heuristic terminal attractor backpropagation (HTABP) for multilayer networks are proposed. The algorithms are based on the concepts of terminal attractors, which are fixed points in the dynamic system violating Lipschitz conditions. The key concept in the proposed algorithms is the introduction of time-varying gains in the weight update law. The proposed algorithms preserve the parallel and distributed features of neurocomputing, guarantee that the learning process can converge in finite time, and find the set of weights minimizing the error function in global, provided such a set of weights exists. Simulations are carried out to demonstrate the global optimization properties and the superiority of the proposed algorithms over the standard backpropagation algorithm.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"25","resultStr":"{\"title\":\"Terminal attractor learning algorithms for back propagation neural networks\",\"authors\":\"S.-D. Wang, Chia-Hung Hsu\",\"doi\":\"10.1109/IJCNN.1991.170401\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Novel learning algorithms called terminal attractor backpropagation (TABP) and heuristic terminal attractor backpropagation (HTABP) for multilayer networks are proposed. The algorithms are based on the concepts of terminal attractors, which are fixed points in the dynamic system violating Lipschitz conditions. The key concept in the proposed algorithms is the introduction of time-varying gains in the weight update law. The proposed algorithms preserve the parallel and distributed features of neurocomputing, guarantee that the learning process can converge in finite time, and find the set of weights minimizing the error function in global, provided such a set of weights exists. 
Simulations are carried out to demonstrate the global optimization properties and the superiority of the proposed algorithms over the standard backpropagation algorithm.<<ETX>>\",\"PeriodicalId\":211135,\"journal\":{\"name\":\"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1991-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"25\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1991.170401\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1991.170401","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Terminal attractor learning algorithms for back propagation neural networks
Novel learning algorithms for multilayer networks, called terminal attractor backpropagation (TABP) and heuristic terminal attractor backpropagation (HTABP), are proposed. The algorithms are based on the concept of terminal attractors: fixed points of a dynamical system at which the Lipschitz condition is violated. The key idea is the introduction of time-varying gains into the weight update law. The proposed algorithms preserve the parallel and distributed features of neurocomputing, guarantee that the learning process converges in finite time, and find a set of weights that globally minimizes the error function, provided such a set exists. Simulations demonstrate the global optimization properties of the proposed algorithms and their superiority over the standard backpropagation algorithm.
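The finite-time convergence claim rests on the terminal attractor property. As a rough illustration, the sketch below (a minimal Python example assuming the standard terminal-attractor error dynamics dE/dt = -η·E^β with 0 < β < 1, not the paper's exact TABP/HTABP update laws) compares a regular attractor, whose error decays only asymptotically, with a terminal attractor, whose right-hand side is non-Lipschitz at E = 0 and therefore reaches zero error in the finite time t* = E(0)^(1-β) / (η(1-β)). In TABP-style schemes, the time-varying gain in the weight update is what drives the induced error dynamics into this terminal form.

```python
# Sketch: regular vs. terminal attractor error dynamics (assumed canonical
# form, not the authors' exact update law).
#   regular attractor:  dE/dt = -eta * E        -> E(t) = E0 * exp(-eta*t) > 0 forever
#   terminal attractor: dE/dt = -eta * E**beta  -> reaches E = 0 at finite time
#                       t* = E0**(1-beta) / (eta * (1-beta)), since the
#                       right-hand side violates the Lipschitz condition at E = 0.

eta, beta, dt = 1.0, 0.5, 1e-3   # gain, terminal exponent, Euler step
E0 = 2.0                          # common initial error
E_regular, E_terminal = E0, E0
t_hit = None                      # time at which the terminal error hits zero

for step in range(1, 20_001):
    E_regular -= dt * eta * E_regular                              # asymptotic decay
    E_terminal = max(E_terminal - dt * eta * E_terminal**beta, 0.0)  # finite-time decay
    if t_hit is None and E_terminal == 0.0:
        t_hit = step * dt

t_star = E0 ** (1 - beta) / (eta * (1 - beta))
print(f"predicted finite convergence time t* = {t_star:.3f}")      # ~2.828
print(f"simulated terminal attractor hit E = 0 at t = {t_hit:.3f}")
print(f"regular attractor at t = 20.0: E = {E_regular:.3e} (still nonzero)")
```

Running this shows the terminal trajectory reaching exactly zero near the predicted t* ≈ 2.83, while the exponentially decaying trajectory remains positive at any finite time, which is the distinction the abstract draws between finite-time and merely asymptotic convergence of the learning process.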