{"title":"A neural architecture for nonlinear adaptive filtering of time series","authors":"Nils Hoffmann, J. Larsen","doi":"10.1109/NNSP.1991.239488","DOIUrl":null,"url":null,"abstract":"A neural architecture for adaptive filtering which incorporates a modularization principle is proposed. It facilitates a sparse parameterization, i.e. fewer parameters have to be estimated in a supervised training procedure. The main idea is to use a preprocessor which determines the dimension of the input space and can be designed independently of the subsequent nonlinearity. Two suggestions for the preprocessor are presented: the derivative preprocessor and the principal component analysis. A novel implementation of fixed Volterra nonlinearities is given. It forces the boundedness of the polynominals by scaling and limiting the inputs signals. The nonlinearity is constructed from Chebychev polynominals. The authors apply a second-order algorithm for updating the weights for adaptive nonlinearities. Finally the simulations indicate that the two kinds of preprocessing tend to complement each other while there is no obvious difference between the performance of the ANL and FNL.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1991.239488","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
A neural architecture for adaptive filtering which incorporates a modularization principle is proposed. It facilitates a sparse parameterization, i.e., fewer parameters have to be estimated in a supervised training procedure. The main idea is to use a preprocessor which determines the dimension of the input space and can be designed independently of the subsequent nonlinearity. Two choices of preprocessor are presented: a derivative preprocessor and principal component analysis (PCA). A novel implementation of fixed Volterra nonlinearities is given; it forces the boundedness of the polynomials by scaling and limiting the input signals. The nonlinearity is constructed from Chebyshev polynomials. The authors apply a second-order algorithm for updating the weights of the adaptive nonlinearities. Finally, the simulations indicate that the two kinds of preprocessing tend to complement each other, while there is no obvious difference between the performance of the adaptive nonlinearity (ANL) and the fixed nonlinearity (FNL).
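The abstract describes the PCA preprocessor only at a high level. As an illustration, the sketch below shows one plausible reading for a scalar time series: a tapped delay line is compressed to a fixed number of principal components, which fixes the input dimension of the subsequent nonlinearity without reference to the nonlinearity's design. The function name and the parameters `lag` and `m` are hypothetical, not taken from the paper.

```python
import numpy as np

def pca_preprocessor(x, lag, m):
    """Hypothetical sketch of a PCA preprocessor for a scalar series x.

    A tapped delay line of length `lag` is compressed to `m` principal
    components, so the dimension of the space fed to the nonlinearity
    is set here, independently of the nonlinearity itself.
    """
    # Delay-embedding matrix: each row is one length-`lag` window of
    # the series, most recent sample first.
    N = len(x) - lag + 1
    X = np.stack([x[t:t + lag][::-1] for t in range(N)])
    X = X - X.mean(axis=0)
    # Principal directions from the SVD of the centered data matrix
    # (equivalently, eigenvectors of the sample covariance).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:m].T  # N x m matrix of principal-component scores

# e.g.: scores = pca_preprocessor(np.sin(0.1 * np.arange(500)), lag=10, m=3)
```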
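The boundedness claim can likewise be made concrete. Below is a minimal sketch, for a scalar input and an assumed scaling constant `z_max`, of a fixed nonlinearity expanded in Chebyshev polynomials with the input scaled and limited as the abstract describes; the paper's Volterra nonlinearity is presumably multivariate, so this univariate version shows only the scaling-and-limiting idea.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def bounded_cheb_nonlinearity(z, weights, z_max=1.0):
    """Sketch of a bounded fixed polynomial nonlinearity.

    The input is scaled by 1/z_max and hard-limited to [-1, 1], where
    every Chebyshev polynomial satisfies |T_k(u)| <= 1, so the
    expansion stays bounded for any bounded weight vector.
    `z_max` is an assumed scaling constant, not a value from the paper.
    """
    u = np.clip(z / z_max, -1.0, 1.0)  # scaling and limiting of the input
    return chebval(u, weights)         # sum_k weights[k] * T_k(u)
```

Because the output is linear in the weights, the coefficients of such a fixed expansion can be fitted in closed form by least squares; the second-order weight update mentioned in the abstract concerns the adaptive nonlinearities.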