{"title":"混沌信号的递归时延神经网络仿真","authors":"M. Davenport, S. P. Day","doi":"10.1109/NNSP.1992.253667","DOIUrl":null,"url":null,"abstract":"The authors describe a method for training a dispersive neural network to imitate a chaotic signal without using any knowledge of how the signal was generated. In a dispersive network, each connection has both an adaptable time delay and an adaptable weight. The network was first trained as a feedforward signal predictor and then connected recurrently for signal synthesis. The authors evaluate the performance of a network with twenty hidden nodes, using the Mackey-Glass (1977) chaotic time series as a training signal, and then compare it to a similar network without internal time delays. The fidelity of the synthesized signal is investigated for progressively longer training times, and for networks trained with and without momentum.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Chaotic signal emulation using a recurrent time delay neural network\",\"authors\":\"M. Davenport, S. P. Day\",\"doi\":\"10.1109/NNSP.1992.253667\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The authors describe a method for training a dispersive neural network to imitate a chaotic signal without using any knowledge of how the signal was generated. In a dispersive network, each connection has both an adaptable time delay and an adaptable weight. The network was first trained as a feedforward signal predictor and then connected recurrently for signal synthesis. The authors evaluate the performance of a network with twenty hidden nodes, using the Mackey-Glass (1977) chaotic time series as a training signal, and then compare it to a similar network without internal time delays. The fidelity of the synthesized signal is investigated for progressively longer training times, and for networks trained with and without momentum.<<ETX>>\",\"PeriodicalId\":438250,\"journal\":{\"name\":\"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-08-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NNSP.1992.253667\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1992.253667","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Chaotic signal emulation using a recurrent time delay neural network
The authors describe a method for training a dispersive neural network to imitate a chaotic signal without using any knowledge of how the signal was generated. In a dispersive network, each connection has both an adaptable time delay and an adaptable weight. The network was first trained as a feedforward signal predictor and then connected recurrently for signal synthesis. The authors evaluate the performance of a network with twenty hidden nodes, using the Mackey-Glass (1977) chaotic time series as a training signal, and then compare it to a similar network without internal time delays. The fidelity of the synthesized signal is investigated for progressively longer training times, and for networks trained with and without momentum.
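The following is a minimal sketch, not the authors' implementation: it generates the Mackey-Glass series, trains a small one-hidden-layer predictor with momentum, and then closes the loop so the network's own output drives synthesis. The paper's dispersive network adapts each connection's time delay as well as its weight; this sketch substitutes a fixed tapped delay line for simplicity, and the tap count, learning rate, and epoch count are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): Mackey-Glass generation,
# one-step predictor training with momentum, then closed-loop synthesis.
# Per-connection adaptable delays from the dispersive network are replaced
# here by a fixed tapped delay line of `taps` past samples.
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1+x(t-tau)^n) - gamma*x(t)."""
    hist = int(tau / dt)
    x = np.full(n_steps + hist, x0)
    for t in range(hist, n_steps + hist - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[hist:]

rng = np.random.default_rng(0)
series = mackey_glass(3000)
taps, hidden = 8, 20          # 20 hidden nodes, as in the paper's experiment

# Build (input window -> next sample) training pairs for one-step prediction.
X = np.array([series[i:i + taps] for i in range(len(series) - taps)])
y = series[taps:]

# One hidden layer of tanh units; plain gradient descent with momentum.
W1 = rng.normal(0, 0.3, (taps, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.3, hidden);         b2 = 0.0
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = 0.0
lr, mom = 0.01, 0.9

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagate the mean-squared one-step prediction error.
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    vW1 = mom * vW1 - lr * gW1; vb1 = mom * vb1 - lr * gb1
    vW2 = mom * vW2 - lr * gW2; vb2 = mom * vb2 - lr * gb2
    W1 += vW1; b1 += vb1; W2 += vW2; b2 += vb2

# Closed-loop (recurrent) synthesis: seed with real samples, then feed
# each prediction back into the input window.
window = list(series[:taps])
synth = []
for _ in range(500):
    h = np.tanh(np.array(window) @ W1 + b1)
    nxt = float(h @ W2 + b2)
    synth.append(nxt)
    window = window[1:] + [nxt]
print("first synthesized samples:", np.round(synth[:5], 4))
```

The two-phase structure mirrors the abstract's procedure: supervised training as a feedforward predictor, followed by recurrent connection of the output back to the input for free-running signal synthesis.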