{"title":"Enhanced Gradient Descent Algorithms for Complex-Valued Neural Networks","authors":"Călin-Adrian Popa","doi":"10.1109/SYNASC.2014.44","DOIUrl":null,"url":null,"abstract":"In this paper, enhanced gradient descent learning algorithms for complex-valued feed forward neural networks are proposed. The most known such enhanced algorithms for real-valued neural networks are: quick prop, resilient back propagation, delta-bar-delta, and Super SAB, and so it is natural to extend these learning methods to complex-valued neural networks, also. The complex variants of these four algorithms are presented, which are then exemplified on various function approximation problems, as well as on channel equalization and time series prediction applications. Experimental results show an important improvement in training and testing error over classical gradient descent and gradient descent with momentum algorithms.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SYNASC.2014.44","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
In this paper, enhanced gradient descent learning algorithms for complex-valued feedforward neural networks are proposed. The best-known such enhanced algorithms for real-valued neural networks are quickprop, resilient backpropagation, delta-bar-delta, and SuperSAB, so it is natural to extend these learning methods to complex-valued neural networks as well. The complex variants of these four algorithms are presented and then evaluated on various function approximation problems, as well as on channel equalization and time series prediction applications. Experimental results show a significant improvement in training and testing error over the classical gradient descent and gradient descent with momentum algorithms.
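The paper itself derives the exact complex update rules; as a rough illustration of the general idea, the sketch below applies a SuperSAB-style per-parameter learning-rate adaptation separately to the real and imaginary parts of the Wirtinger gradient of a one-parameter complex least-squares problem. This is a minimal sketch under stated assumptions, not the paper's formulation; the function name fit_linear and the constants eta0, eta_up, eta_down, and eta_max are hypothetical.

```python
import numpy as np

# Minimal sketch, NOT the paper's formulation: a SuperSAB-style adaptive
# learning rate applied separately to the real and imaginary parts of the
# Wirtinger gradient of a one-parameter complex least-squares problem.
# fit_linear, eta0, eta_up, eta_down, and eta_max are illustrative choices.

def fit_linear(x, y, steps=200, eta0=0.1, eta_up=1.05, eta_down=0.5, eta_max=0.5):
    """Fit y ~ w * x for a complex scalar w with sign-based rate adaptation."""
    rng = np.random.default_rng(0)
    w = complex(rng.standard_normal(), rng.standard_normal())
    eta = np.array([eta0, eta0])        # separate rates for Re and Im parts
    prev = np.zeros(2)                  # gradient components from last step
    for _ in range(steps):
        err = w * x - y
        # Wirtinger gradient of mean |w*x - y|^2 with respect to conj(w);
        # plain complex gradient descent would simply do w -= eta * g.
        g = np.mean(err * np.conj(x))
        comps = np.array([g.real, g.imag])
        # SuperSAB rule, per component: grow the rate while the gradient
        # keeps its sign, shrink it when the sign flips.
        eta = np.where(comps * prev > 0, np.minimum(eta * eta_up, eta_max), eta)
        eta = np.where(comps * prev < 0, eta * eta_down, eta)
        w -= complex(eta[0] * comps[0], eta[1] * comps[1])
        prev = comps
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
    w_true = 2.0 - 0.5j
    y = w_true * x
    print(fit_linear(x, y))  # converges toward (2-0.5j)
```

Splitting the adaptation over the real and imaginary parts is one plausible way to carry a sign-based real-valued rule into the complex domain; the paper's own complex variants of quickprop, resilient backpropagation, delta-bar-delta, and SuperSAB may differ in detail.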