{"title":"非线性时间序列预测的递归神经网络比较研究","authors":"S.S. Rao, S. Sethuraman, V. Ramamurti","doi":"10.1109/NNSP.1992.253659","DOIUrl":null,"url":null,"abstract":"The performance of recurrent neural networks (RNNs) is compared with those of conventional nonlinear prediction schemes, such as a Kalman predictor (KP) based on a state-dependent model and a second-order Volterra filter. Simulation results on some typical nonlinear time series data indicate that the neural network can predict with accuracies on a par with the KP. It is noted that a higher-order extended Kalman filter or a Volterra model might provide a better performance than the ones considered. The network requires very few sweeps through the training data, though this will be computationally much more intensive than that required by conventional schemes. The authors discuss the advantages and drawbacks of each of the predictors considered.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"A recurrent neural network for nonlinear time series prediction-a comparative study\",\"authors\":\"S.S. Rao, S. Sethuraman, V. Ramamurti\",\"doi\":\"10.1109/NNSP.1992.253659\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The performance of recurrent neural networks (RNNs) is compared with those of conventional nonlinear prediction schemes, such as a Kalman predictor (KP) based on a state-dependent model and a second-order Volterra filter. Simulation results on some typical nonlinear time series data indicate that the neural network can predict with accuracies on a par with the KP. It is noted that a higher-order extended Kalman filter or a Volterra model might provide a better performance than the ones considered. 
The network requires very few sweeps through the training data, though this will be computationally much more intensive than that required by conventional schemes. The authors discuss the advantages and drawbacks of each of the predictors considered.<<ETX>>\",\"PeriodicalId\":438250,\"journal\":{\"name\":\"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-08-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NNSP.1992.253659\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1992.253659","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A recurrent neural network for nonlinear time series prediction-a comparative study
The performance of recurrent neural networks (RNNs) is compared with that of conventional nonlinear prediction schemes: a Kalman predictor (KP) based on a state-dependent model, and a second-order Volterra filter. Simulation results on several typical nonlinear time series indicate that the neural network predicts with accuracy on a par with the KP. It is noted that a higher-order extended Kalman filter or Volterra model might outperform the ones considered. The network requires very few sweeps through the training data, though each sweep is computationally much more intensive than what the conventional schemes require. The authors discuss the advantages and drawbacks of each of the predictors considered.
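As a rough illustration of one of the baselines mentioned in the abstract, the sketch below fits a second-order Volterra predictor by batch least squares on a synthetic nonlinear series. The series, the memory length `p`, and the noise level are all invented for this example and do not come from the paper; the authors' actual data, filter order, and estimation procedure are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonlinear AR series (invented for illustration):
# x[t] = 0.5*x[t-1] - 0.3*x[t-1]*x[t-2] + small Gaussian noise
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 1] * x[t - 2] + 0.05 * rng.standard_normal()

p = 2  # memory length of the Volterra filter (assumed for the example)
rows, targets = [], []
for t in range(p, n):
    u = x[t - p:t][::-1]                        # [x[t-1], x[t-2]]
    lin = u                                     # first-order (linear) kernel terms
    quad = np.outer(u, u)[np.triu_indices(p)]   # second-order cross-product terms
    rows.append(np.concatenate(([1.0], lin, quad)))
    targets.append(x[t])

Phi = np.array(rows)
y = np.array(targets)

# Least-squares estimate of the Volterra kernel coefficients
h, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ h
mse = np.mean((y - pred) ** 2)
print(f"one-step prediction MSE: {mse:.5f}")
```

Because the synthetic series is itself generated by first- and second-order terms, the fitted predictor recovers it up to the noise floor; on series with higher-order dynamics, a second-order Volterra filter would leave a larger residual, which is one motivation for comparing it against an RNN.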