{"title":"Feature-locked loop and its application to image databases","authors":"A. Sherstinsky, Rosalind W. Picard","doi":"10.1109/NNSP.1995.514916","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514916","url":null,"abstract":"We present a new dynamical system called the \"feature-locked loop\". The inputs to this feedback neural network are a set of feature vectors and a one-parameter function that characterizes the data. We show that the feature-locked loop is locally stable for one example of the characteristic function and determines the value of its unknown parameter. We apply this property of the feature-locked loop to the problem of sorting textures by their similarity. We use the feature-locked loop and a priori information to quantify the degree of similarity between the input image and the reported set of images as a whole. The prior knowledge is encoded in the form of the one-parameter function and a general assumption about the number of perceptual outliers in the reported set. The unknown parameter, computed by the feature-locked loop, is then related to the entire set of image features produced by the retrieval.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132608407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new learning scheme for the recognition of dynamical handwritten characters","authors":"F. Andrianasy, M. Milgram","doi":"10.1109/NNSP.1995.514911","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514911","url":null,"abstract":"Vector comparison is essential in pattern recognition. Numerous methods based on distance computation are available to carry out such comparison. Unfortunately, most of them are applicable only if the vectors are of the same length, or they do not take component misalignment into account. This paper presents a new distance between two representations, called the elastic distance, based on the dynamic programming technique. Its properties are studied. We show that it leads to a variant of the learning vector quantisation technique that learns the best representatives of a group of prototypes. A new centroid computation algorithm is proposed. Finally, the learning scheme has been successfully applied to an online numerical handwritten character recognition problem using a previously computed centroid of a set of prototypes.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133902522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recursive nonlinear identification using multiple model algorithm","authors":"V. Kadirkamanathan","doi":"10.1109/NNSP.1995.514891","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514891","url":null,"abstract":"In this paper, the multiple model algorithm is used in deriving recursive algorithms for the identification of nonlinear systems. Radial basis function (RBF) networks, in which only the linear weights require estimation, combined with the Kalman filter algorithm form the essence of the identification algorithm. Multiple networks are used to identify the multiple modes of the system under a Markovian assumption, with model estimation and selection carried out on-line. Both 'hard' and 'soft' competition-based estimation schemes are developed: in the former, only the most probable network is adapted by the Kalman filter, while in the latter all networks are adapted by appropriate weighting of the observation.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117149183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning a distribution-based face model for human face detection","authors":"K. Sung, S. Poggio","doi":"10.1109/NNSP.1995.514914","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514914","url":null,"abstract":"We present a distribution-based modeling cum example-based learning approach for detecting human faces in cluttered scenes. The distribution-based model captures complex variations in human face patterns that cannot be adequately described by classical pictorial template-based matching techniques or geometric model-based pattern recognition schemes. We also show how explicitly modeling the distribution of certain \"facelike\" nonface patterns can help improve classification results.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114303950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A numerical approach for estimating higher order spectra using neural network autoregressive model","authors":"N. Toda, S. Usui","doi":"10.1109/NNSP.1995.514888","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514888","url":null,"abstract":"A method for parametric estimation of higher order spectra of time series using a nonlinear autoregressive model based on multi-layered neural networks (NNAR model) is presented. In real world problems there exist signals that cannot be described sufficiently by linear time series models such as AR or ARMA models. In order to characterize such signals, several nonlinear time series models have been investigated in recent years. However, in contrast with the case of linear models, there are few parametric approaches that estimate the higher order statistical characteristics of observed time series using such nonlinear time series models. It is very difficult to derive analytically explicit formulations of higher order spectra from the expressions of such nonlinear time series models. In this study, employing numerical techniques, the authors construct a parametric estimator of higher order spectra. It consists of the following steps: 1. training an NNAR model on the given time series, 2. iteration of numerical integrals for solving the joint probability density function, 3. calculation of higher order cumulant functions by renewal equations based on the joint probability density function solved in 2., and 4. multidimensional discrete Fourier transforms of the higher order cumulant functions calculated in 3. The authors also show that any NNAR model with finite valued weights satisfies a sufficient condition of convergence.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127529290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semiautomated extraction of decision relevant features from a raw data based artificial neural network demonstrated by the problem of saccade detection in EOG recordings of smooth pursuit eye movements","authors":"P.K. Tigges, N. Kathmann, R. R. Engel","doi":"10.1109/NNSP.1995.514921","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514921","url":null,"abstract":"Visual identification of saccades in electrooculographic (EOG) recordings of smooth pursuit eye movements (SPEM) is a very time-consuming process for individual experts. Automated algorithmic approaches to this problem produce high rates of false-positive errors. Artificial neural networks (ANN) are excellent tools for pattern recognition problems when the signal-to-noise ratio is low. An automated decision process based on modified raw-data inputs to a backpropagation ANN achieved an overall performance of 87% correct classifications on previously unseen data. Investigating the specific influences of prototypical input patterns on a specially designed ANN led to a sparse and efficient data coding, based on a combination of expert knowledge and the internal representation structures of the ANN. Data coding obtained by this semiautomated procedure yielded a list of feature vectors, each representing the relevant information for saccade identification. The feature-based ANN reduced the error rate by nearly 40% and reached an overall correct classification rate of 92% on unseen data. The proposed method of extracting internal ANN knowledge is not restricted to EOG recordings, and could be used in various fields of signal analysis.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125205259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multiple scale neural system for boundary and surface representation of SAR data","authors":"S. Grossberg, E. Mingolla, J. Williamson","doi":"10.1109/NNSP.1995.514905","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514905","url":null,"abstract":"A neural network model of boundary segmentation and surface representation is developed to process images containing range data gathered by a synthetic aperture radar (SAR) sensor. SAR sensors can produce range imagery of high spatial resolution under difficult weather conditions, but the data presents some interpretation difficulties. These include the large dynamic range of the sensor signal, which requires some type of nonlinear compression. Another problem is image speckle, which is generated by coherent processing of radar signals and has the characteristics of random multiplicative noise. Our approach uses the form-sensitive operations of a neural network model in order to detect and enhance structure based on information over large, variably sized and variably shaped regions of the image. In particular, the multiscale implementation of the neural model reported here is capable of exploiting and combining information from several nested neighborhoods of a given image location to determine the final intensity value to be displayed for that pixel. By \"neighborhood\" we mean a region whose form varies as a function of nearby image data, rather than some fixed (weighted) radial function applied at all pixel locations.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132714303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural networks for function approximation","authors":"H. Mhaskar, L. Khachikyan","doi":"10.1109/NNSP.1995.514875","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514875","url":null,"abstract":"We describe certain results of Mhaskar concerning the approximation capabilities of neural networks with one hidden layer. In particular, these results demonstrate the construction of neural networks evaluating a squashing function or a radial basis function for optimal approximation of the Sobolev spaces. We also report on the application of some of these ideas in the construction of general-purpose networks for the prediction of time series, when the number of independent variables is known in advance, such as the Mackey-Glass series or the flour data.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134498664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active learning the weights of a RBF network","authors":"K. Sung, P. Niyogi","doi":"10.1109/NNSP.1995.514877","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514877","url":null,"abstract":"We describe a principled strategy to sample functions optimally for function approximation tasks. The strategy works within a Bayesian framework and uses ideas from optimal experiment design to evaluate the potential utility of new data points. We consider an application of this general framework to actively learning the weight coefficients of a Gaussian radial basis function (RBF) network. We also derive some sufficiency conditions on the learning problem under which there is an analytical solution to the data sampling procedure.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"27 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133037971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimum lag and subset selection for a radial basis function equaliser","authors":"E. Chng, B. Mulgrew, Sheng Chen, Garth A. Gibson","doi":"10.1109/NNSP.1995.514934","DOIUrl":"https://doi.org/10.1109/NNSP.1995.514934","url":null,"abstract":"This paper examines the application of the radial basis function (RBF) network to the modelling of the Bayesian equaliser. In particular, the authors study the effects of delay order d on decision boundary and attainable bit error rate (BER) performance. To determine the optimum delay parameter for minimum BER performance, a simple BER estimator is proposed. The implementation complexity of the RBF network grows exponentially with respect to the number of input nodes. As such, the full implementation of the RBF network to realise the Bayesian solution may not be feasible. To reduce some of the implementation complexity, the authors propose an algorithm to perform subset model selection. The authors' results indicate that it is possible to reduce model size without significant degradation in BER performance.","PeriodicalId":403144,"journal":{"name":"Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116633673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}