{"title":"Improving neural classifiers for ATR using a kernel method for generating synthetic training sets","authors":"R. Gil-Pita, P. J. Amores, M. Rosa-Zurera, F. López-Ferreras","doi":"10.1109/NNSP.2002.1030054","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030054","url":null,"abstract":"An important problem with the use of neural networks in HRR radar target classification is the difficulty in obtaining training data. Training sets are small because of this, making generalization to new data difficult. In order to improve generalization capability, synthetic radar targets are obtained using a novel kernel method for estimating the probability density function of each class of radar targets. Multivariate Gaussians whose parameters are a function of position and data distribution are used as kernels. In order to assess the accuracy of the estimate, the maximum a posteriori criterion has been used in radar target classification, and compared with the k-nearest-neighbour classifier. The proposed method performs better than the k-nearest-neighbour classifier, demonstrating the accuracy of the estimate. After that, the estimated probability density functions are used to classify the synthetic data in order to use a supervised training algorithm for neural networks. The obtained results show that neural networks perform better if this strategy is used to increase the number of training data. 
Furthermore, computational complexity is dramatically reduced compared with that of the k-nearest neighbour classifier.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"15 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128254244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A robust canonical correlation neural network","authors":"Zhenkun Gou, C. Fyfe","doi":"10.1109/NNSP.2002.1030035","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030035","url":null,"abstract":"We review a neural implementation of canonical correlation analysis and show, using ideas suggested by ridge regression, how to make the algorithm robust. The network is shown to operate on data sets which exhibit multicollinearity. We develop a second model which not only performs as well on multicollinear data but also on general data sets. This model allows us to vary a single parameter so that the network is capable of performing partial least squares regression (at one extreme) to canonical correlation analysis (at the other) and every intermediate operation between the two. On multicollinear data, the parameter setting is shown to be important but on more general data no particular parameter setting is required. Finally, the algorithm acts on such data as a smoother in that the resulting weight vectors are much smoother and more interpretable than the weights without the robustification term.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130606849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Input-output mapping performance of linear and nonlinear models for estimating hand trajectories from cortical neuronal firing patterns","authors":"Justin C. Sanchez, Sung-Phil Kim, Deniz Erdoğmuş, Y. Rao, J. Príncipe, J. Wessberg, M. Nicolelis","doi":"10.1109/NNSP.2002.1030025","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030025","url":null,"abstract":"Linear and nonlinear (TDNN) models have been shown to estimate hand position using populations of action potentials collected in the pre-motor and motor cortical areas of a primate's brain. One of the applications of this discovery is to restore movement in patients suffering from paralysis. For real-time implementation of this technology, reliable and accurate signal processing models that produce small error variance in the estimated positions are required. In this paper, we compare the mapping performance of the FIR filter, gamma filter and recurrent neural network (RNN) in the peaks of reaching movements. Each approach has strengths and weaknesses that are compared experimentally. The RNN approach shows very accurate peak position estimations with small error variance.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122109258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modified Kalman filter based method for training state-recurrent multilayer perceptrons","authors":"Deniz Erdoğmuş, Justin C. Sanchez, J. Príncipe","doi":"10.1109/NNSP.2002.1030033","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030033","url":null,"abstract":"Kalman filter based training algorithms for recurrent neural networks provide a clever alternative to the standard backpropagation in time. However, these algorithms do not take into account the optimization of the hidden state variables of the recurrent network. In addition, their formulation requires Jacobian evaluations over the entire network, adding to their computational complexity. We propose a spatial-temporal extended Kalman filter algorithm for training recurrent neural network weights and internal states. This new formulation also reduces the computational complexity of Jacobian evaluations drastically by decoupling the gradients of each layer. Monte Carlo comparisons with backpropagation through time point out the robust and fast convergence of the algorithm.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124053156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient total least squares method for system modeling using minor component analysis","authors":"Y. Rao, J. Príncipe","doi":"10.1109/NNSP.2002.1030037","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030037","url":null,"abstract":"We present two algorithms to solve the total least-squares (TLS) problem. The algorithms are on-line with O(N/sup 2/) and O(N) complexity. The convergence of the algorithms is significantly faster than the traditional methods. A mathematical analysis of convergence is also provided along with simulations to substantiate the claims. We also apply the TLS algorithms for FIR system identification with known model order in the presence of noise.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123548728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analog implementation for networks of integrate-and-fire neurons with adaptive local connectivity","authors":"J. Schreiter, U. Ramacher, A. Heittmann, D. Matolin, R. Schüffny","doi":"10.1109/NNSP.2002.1030077","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030077","url":null,"abstract":"An analog VLSI implementation for pulse coupled neural networks of leakage free integrate-and-fire neurons with adaptive connections is presented. Weight adaptation is based on existing adaptation rules for image segmentation. Although both integrate-and-fire neurons and adaptive weights can be implementation only approximately, simulations have shown, that synchronization properties of the original adaptation rules are preserved.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114342370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering of Sun exposure measurements","authors":"Anna Szymkowiak-Have, J. Larsen, L. K. Hansen, P. Philipsen, E. Thieden, H. Wulf","doi":"10.1109/NNSP.2002.1030090","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030090","url":null,"abstract":"In a medically motivated Sun-exposure study, questionnaires concerning Sun-habits were collected from a number of subjects together with UV radiation measurements. This paper focuses on identifying clusters in the heterogeneous set of data for the purpose of understanding possible relations between Sun-habits exposure and eventually assessing the risk of skin cancer. A general probabilistic framework originally developed for text and Web mining is demonstrated to be useful for clustering of behavioral data. The framework combines principal component subspace projection with probabilistic clustering based on the generalizable Gaussian mixture model.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124421215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metric-based model selection for time-series forecasting","authors":"Yoshua Bengio, Nicolas Chapados","doi":"10.1109/NNSP.2002.1030013","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030013","url":null,"abstract":"Metric-based methods, which use unlabeled data to detect gross differences in behavior away from the training points, have recently been introduced for model selection, often yielding very significant improvements over alternatives (including cross-validation). We introduce extensions that take advantage of the particular case of time-series data in which the task involves prediction with a horizon h. The ideas are: (i) to use at t the h unlabeled examples that precede t for model selection, and (ii) take advantage of the different error distributions of cross-validation and the metric methods. Experimental results establish the effectiveness of these extensions in the context of feature subset selection.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132528488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling of a length scale for regression and prediction","authors":"T. Aida","doi":"10.1109/NNSP.2002.1030029","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030029","url":null,"abstract":"We analyze the prediction from noised data, based on a regression formulation of the problem. For the regression, we construct a model with a length scale to smooth the data, which is determined by the variance of noise and the speed of the variation of original signals. The model is found to be effective also for prediction. This is because it decreases an uncertain region near a boundary as the speed of the variation of original signals increases, which is a crucial property for accurate prediction.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"312 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133847061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised reduction of the dimensionality followed by supervised learning with a perceptron improves the classification of conditions in DNA microarray gene expression data","authors":"L. Conde, Á. Mateos, Javier Herrero, J. Dopazo","doi":"10.1109/NNSP.2002.1030019","DOIUrl":"https://doi.org/10.1109/NNSP.2002.1030019","url":null,"abstract":"This manuscript describes a combined approach of unsupervised clustering followed by supervised learning that provides an efficient classification of conditions in DNA array gene expression experiments (different cell lines including some cancer types, in the cases shown). Firstly the dimensionality of the dataset of gene expression profiles is reduced to a number of non-redundant clusters of co-expressing genes using an unsupervised clustering algorithm, the Self Organizing Tree Algorithm (SOTA), a hierarchical version of Self Organizing Maps (SOM). Then, the average values of these clusters are used for the training of a perception that produces a very efficient classification of the conditions. This way of reducing the dimensionality of the data set seems to perform better than other ones previously proposed such as PCA. In addition, the weights that connect the gene clusters to the different experimental conditions can be used to assess the relative importance of the genes in the definition of these classes. 
Finally, Gene Ontology (GO) terms are used to infer a possible biological role for these groups of genes and to asses the validity of the classification from a biological point of view.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"4 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120819729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}