{"title":"Classification of simulated radar imagery using lateral inhibition neural networks","authors":"C. Bachmann, S. Musman, A. Schultz","doi":"10.1109/NNSP.1992.253685","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253685","url":null,"abstract":"The use of neural networks for the classification of simulated inverse synthetic aperture radar imagery is investigated. Symmetries of the artificial imagery make the use of localized moments a convenient preprocessing tool for the inputs to a neural network. A database of simulated targets was obtained by warping dynamical models to representative angles and generating images with differing target motions. Ordinary backward propagation (BP) and some variants of BP which incorporate lateral inhibition (LIBP) obtain a generalization rate of up to approximately 77% for novel data not used during training, a rate which is comparable to the mean level of classification accuracy that trained human observers obtained from the unprocessed simulated imagery. The authors also describe preliminary results for an unsupervised lateral inhibition network based on the BCM neuron. The feature vectors found by BCM are qualitatively different from those of BP and LIBP.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116842378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimal classification error optimization for a speaker mapping neural network","authors":"M. Sugiyama, K. Kurinami","doi":"10.1109/NNSP.1992.253689","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253689","url":null,"abstract":"The authors propose a novel optimization technique for training a speaker mapping neural network under the minimal classification error criterion; conventional speaker mapping neural networks were trained under minimal distortion criteria. The authors describe the speaker mapping neural network, formulate and derive the minimal classification error optimization technique for it, and present a novel backpropagation algorithm. Vowel classification experiments demonstrate the effectiveness of the proposed algorithm: speaker mapping experiments with five vowels achieved a classification accuracy of 99.6% for training data and 97.4% for test data.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127275472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive template method for speech recognition","authors":"Y. Liu, Y. Lee, H. Chen, G. Sun","doi":"10.1109/NNSP.1992.253703","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253703","url":null,"abstract":"An adaptive template method for pattern recognition is proposed. The template adaptation algorithm is derived based on minimizing the classification error of the classifier. The authors have applied this method to a multispeaker English E-set recognition experiment and achieved a 90.38% average recognition rate with only one template for each letter. This indicates that the derived templates are able to capture the speaker-invariant features of speech signals.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123771174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient model for systems with complex responses (neural network architecture for nonlinear filtering)","authors":"Volker Tresp, I. Leuthausser, M. Schlang, R. Neuneier, K. Abraham-Fuchs, W. Harer","doi":"10.1109/NNSP.1992.253663","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253663","url":null,"abstract":"Presents a neural network architecture for a restricted class of nonlinear filtering applications. The filter architecture is particularly suited for biomedical and technical applications that require long and complex system responses. The filter architecture was successfully used in a biomedical application for the removal of the cardiac interference from magnetoencephalographic (MEG) data and performed better than standard linear filters and the time-delay neural network.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115063840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural networks for segmentation and clustering of biomagnetical signals","authors":"M. Schlang, Volker Tresp, K. Abraham-Fuchs, W. Harer, P. Weismuller","doi":"10.1109/NNSP.1992.253678","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253678","url":null,"abstract":"When measuring biomagnetic signals, the amount of data acquired is very large due to modern multichannel sensor arrays. Using the example of the magnetocardiogram (MCG), the authors show how these data can be automatically segmented and clustered with the help of neural algorithms. Self-organizing maps are not suitable for this application due to the character of the measured data. The data are compressed with the help of a special neural network. A very fast learning algorithm is used in the training phase, requiring substantially less computing power than conventional methods. Combined with a hierarchical cluster algorithm, a recognition rate of 100% of extrasystoles in MCG data was achieved.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122456549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction with recurrent networks","authors":"N. H. Wulff, J. Hertz","doi":"10.1109/NNSP.1992.253666","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253666","url":null,"abstract":"The authors study extrapolation of time series using recurrent neural networks. They use the real-time recurrent learning algorithm introduced by R. J. Williams and D. Zipser (1989), both in the original form for first order nets and in a form for second order nets. It is shown that both the first order and the second order nets are able to learn to simulate the Mackey-Glass series. The prediction quality of the results is comparable to that from feedforward nets.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129786454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A generalization error estimate for nonlinear systems","authors":"Jan Larsen","doi":"10.1109/NNSP.1992.253710","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253710","url":null,"abstract":"A new estimate (GEN) of the generalization error is presented. The estimator is valid for both incomplete and nonlinear models. An incomplete model is characterized in that it does not model the actual nonlinear relationship perfectly. The GEN estimator has been evaluated by simulating incomplete models of linear and simple neural network systems. Within the linear system GEN is compared to the final prediction error criterion and the leave-one-out cross-validation technique. It was found that the GEN estimate of the true generalization error is less biased on the average. It is concluded that GEN is an applicable alternative in estimating the generalization at the expense of an increased complexity.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129206825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive segmentation of textured images using linear prediction and neural networks","authors":"S. Kollias, L. Sukissian","doi":"10.1109/NNSP.1992.253672","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253672","url":null,"abstract":"An adaptive technique for classifying and segmenting textured images is presented. This technique uses an efficient least squares algorithm for recursive estimation of two-dimensional autoregressive texture models and neural networks for recursive classification of the models. A network with fixed, but space-varying, interconnection weights is used to optimally select a small representative set of these models, while a network with adaptive weights is appropriately trained and used to recursively classify and segment the image. An online modification of the latter network architecture is proposed for segmenting images that comprise textures for which no prior information exists. Experimental results are given which illustrate the ability of the method to classify and segment textured images in an effective way.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121479584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A simple genetic algorithm applied to discontinuous regularization","authors":"J. B. Jensen, M. Nielsen","doi":"10.1109/NNSP.1992.253706","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253706","url":null,"abstract":"A simple genetic algorithm without mutation has been applied to discontinuous regularization. The relative slope of the energy-to-fitness function has been introduced as a measure of the rate of convergence. The intuitively better rate of convergence (slow in the beginning, faster in the end) has been shown to be superior to an exponential transformation-function in the present case. A probabilistic model of the performance of the algorithm has been introduced. From this model it has been found that a division into subpopulations decreases the performance, unless more than one computer is available.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134301737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of chaotic time series using recurrent neural networks","authors":"J. Kuo, J. C. Principe, B. de Vries","doi":"10.1109/NNSP.1992.253669","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253669","url":null,"abstract":"The authors propose to train and use a recurrent artificial neural network (ANN) to predict a chaotic time series. Instead of training the network with the next sample in the time series as is normally done, a sequence of samples that follows the present sample is utilized. Dynamical parameters extracted from the time series provide the information to set the length of these training sequences. The proposed method has been applied to predict both periodic and chaotic time series, and is superior to the conventional ANN approach.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115891332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}