{"title":"递归神经网络训练的新进展","authors":"Suwat Pattamavorakun, Suwarin Pattamavorakun","doi":"10.1109/SERA.2007.102","DOIUrl":null,"url":null,"abstract":"A new algorithm is proposed for improving the convergence of recurrent neural networks. This algorithm is obtained by combining the methods of weight update of Atiya-Parlos algorithm (the algorithm find the direction of weight change by approximation), and Y-N algorithm technique (the algorithm estimate fictitious target signals of hidden nodes to update hidden weight separately from output weights), and then by adding the error self-recurrent (ESR) network to improve the error functions (to speed up the convergence and not sensitive to initial weight by calculating the errors from output unit and then these errors are fed back for determining weight updates of output unit nodes). The results showed that both fully RNN and partially RNNs on some selected and the proposed algorithm could forecast the daily flow data quite satisfactorily.","PeriodicalId":181543,"journal":{"name":"5th ACIS International Conference on Software Engineering Research, Management & Applications (SERA 2007)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"New Developments on Recurrent Neural Networks Training\",\"authors\":\"Suwat Pattamavorakun, Suwarin Pattamavorakun\",\"doi\":\"10.1109/SERA.2007.102\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A new algorithm is proposed for improving the convergence of recurrent neural networks. 
This algorithm is obtained by combining the methods of weight update of Atiya-Parlos algorithm (the algorithm find the direction of weight change by approximation), and Y-N algorithm technique (the algorithm estimate fictitious target signals of hidden nodes to update hidden weight separately from output weights), and then by adding the error self-recurrent (ESR) network to improve the error functions (to speed up the convergence and not sensitive to initial weight by calculating the errors from output unit and then these errors are fed back for determining weight updates of output unit nodes). The results showed that both fully RNN and partially RNNs on some selected and the proposed algorithm could forecast the daily flow data quite satisfactorily.\",\"PeriodicalId\":181543,\"journal\":{\"name\":\"5th ACIS International Conference on Software Engineering Research, Management & Applications (SERA 2007)\",\"volume\":\"60 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"5th ACIS International Conference on Software Engineering Research, Management & Applications (SERA 2007)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SERA.2007.102\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"5th ACIS International Conference on Software Engineering Research, Management & Applications (SERA 
2007)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SERA.2007.102","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
New Developments on Recurrent Neural Networks Training
A new algorithm is proposed for improving the convergence of recurrent neural network training. The algorithm combines the weight-update method of the Atiya-Parlos algorithm (which finds the direction of weight change by approximation) with the Y-N algorithm technique (which estimates fictitious target signals for the hidden nodes so that hidden weights can be updated separately from the output weights), and then adds an error self-recurrent (ESR) network to improve the error function: the errors computed at the output units are fed back to determine the weight updates of the output-unit nodes, which speeds up convergence and reduces sensitivity to the initial weights. The results showed that both fully and partially recurrent neural networks trained on the selected data with the proposed algorithm could forecast daily flow data quite satisfactorily.
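The abstract's central idea, feeding output-unit errors back as an input so they can drive the next weight update, can be illustrated with a minimal sketch. Everything below is an illustrative assumption: a single linear output unit trained by a delta rule, with the previous step's error as an extra fed-back input. It is not the authors' exact ESR formulation, only a toy demonstration of error feedback.

```python
# Hypothetical sketch of the error self-recurrent (ESR) idea: the previous
# output error is fed back as an additional input, and the output weights
# are updated from the current error (delta rule). The model, names, and
# learning rule here are illustrative assumptions, not the paper's method.

def train_esr(series, lr=0.05, epochs=500):
    # Weights: one for the current input, one for the fed-back error, plus bias.
    w_x, w_e, b = 0.0, 0.0, 0.0
    sse = 0.0
    for _ in range(epochs):
        prev_err = 0.0  # error fed back from the previous time step
        sse = 0.0
        for x, target in series:
            y = w_x * x + w_e * prev_err + b   # output with error feedback
            err = target - y                   # error at the output unit
            # Delta-rule update of the output weights; the fed-back error
            # acts as just another input feature.
            w_x += lr * err * x
            w_e += lr * err * prev_err
            b += lr * err
            prev_err = err
            sse += err * err
    return (w_x, w_e, b), sse  # final weights and last-epoch squared error

# Toy "daily flow"-style sequence: target = 2*x + 1 (synthetic, for illustration).
data = [(x, 2 * x + 1) for x in [0.1, 0.4, 0.7, 1.0]]
params, sse = train_esr(data)
```

At convergence the fed-back error shrinks toward zero, so the feedback term mainly shapes the early, transient phase of training, which matches the abstract's claim that the feedback speeds convergence rather than changing the final model.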