{"title":"Negative reinforcement and backtrack-points for recurrent neural networks for cost-based abduction","authors":"A. M. Abdelbar, M. A. El-Hemaly, Emad Andrews, D. Wunsch","doi":"10.1109/IJCNN.2005.1555959","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1555959","url":null,"abstract":"Abduction is the process of proceeding from data describing a set of observations or events to a set of hypotheses which best explains or accounts for the data. Cost-based abduction (CBA) is an AI formalism in which the evidence to be explained is treated as a goal to be proven, proofs have costs based on how much needs to be assumed to complete the proof, and the set of assumptions needed to complete the least-cost proof is taken as the best explanation for the given evidence. In this paper, we introduce two techniques for improving the performance of high order recurrent networks (HORN) applied to cost-based abduction. In the backtrack-points technique, we use heuristics to recognize early that the network trajectory is moving in the wrong direction; we then restore the network state to a previously stored point and apply heuristic perturbations to nudge the network trajectory in a different direction. In the negative reinforcement technique, we add hyperedges to the network to reduce the attractiveness of local minima. We apply these techniques to a particularly difficult 300-hypothesis, 900-rule instance of CBA.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123715246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
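A minimal toy sketch of the backtrack-point idea, assuming a generic greedy bit-flip descent over a binary hypothesis vector rather than the authors' high order recurrent network; the `energy` callable, the patience threshold, and the three-bit perturbation are illustrative placeholders, not the paper's heuristics.

```python
import numpy as np

def minimize_with_backtrack(energy, x0, steps=2000, patience=50, seed=0):
    """Greedy descent over a 0/1 hypothesis vector with stored backtrack points.

    energy   : callable mapping a 0/1 numpy vector to a scalar cost
    x0       : initial 0/1 integer numpy vector
    patience : steps without improvement before restoring a backtrack point
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    best_x, best_e = x.copy(), energy(x)
    backtrack = (x.copy(), best_e)          # last promising state
    stall = 0
    for _ in range(steps):
        i = rng.integers(len(x))            # propose flipping one hypothesis
        cand = x.copy()
        cand[i] ^= 1
        if energy(cand) < energy(x):
            x = cand
            e = energy(x)
            if e < best_e:                  # store a new backtrack point
                best_x, best_e = x.copy(), e
                backtrack = (x.copy(), e)
                stall = 0
        else:
            stall += 1
        if stall > patience:                # trajectory heading the wrong way:
            x = backtrack[0].copy()         # restore the stored state and
            flip = rng.integers(len(x), size=3)
            x[flip] ^= 1                    # nudge it in a different direction
            stall = 0
    return best_x, best_e
```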
{"title":"Harmonic envelope prediction for realistic speech synthesis using kernel interpolation","authors":"P.-A. Fournier, Jean-Jules Brault","doi":"10.1109/IJCNN.2005.1556217","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556217","url":null,"abstract":"Harmonic and noise diphone concatenation is a proven method for obtaining high-quality speech synthesis, but it cannot be used when the basis corpus does not contain all the diphones needed. We propose a method to complete an individual's corpus using examples from other corpora. Parametrisation of five vowels from different speakers is done with a harmonic and noise model (HNM). We use multi-frame analysis (MFA) and smoothing kernels to estimate the harmonic power spectrum envelopes. Different kernels are compared for predicting the harmonic envelopes of vowels using training data. We use the Euclidean distance to measure similarity between the real envelopes and the predicted ones. Synthesis of the interpolated vowels is then performed using the learned optimal parameters. Our results show that Gaussian kernels can achieve a 1.8 dB (34.4%) reduction in harmonic distortion compared to the mean harmonic envelope estimator. As far as we know, there is no other literature on phoneme prediction for realistic speech synthesis.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"19 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132173195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
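A minimal sketch of Gaussian-kernel interpolation of spectral envelopes, assuming a plain Nadaraya-Watson style weighted average over example envelopes from other speakers; the feature vectors, bandwidth value, and function names are stand-ins, not the paper's HNM/MFA pipeline.

```python
import numpy as np

def gaussian_kernel_predict(X_train, Y_train, x_query, bandwidth=1.0):
    """Predict a harmonic envelope for x_query by kernel-weighted averaging.

    X_train : (n, d) feature vectors describing the training vowels
    Y_train : (n, m) harmonic power-spectrum envelopes (dB) for those vowels
    x_query : (d,)   features of the vowel to synthesize
    """
    d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances to examples
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
    w /= w.sum()
    return w @ Y_train                              # weighted mean envelope

def envelope_error(pred, target):
    """Euclidean distance between predicted and measured envelopes."""
    return np.linalg.norm(pred - target)
```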
{"title":"Knowing your place: subfield specific involvement in hippocampal spatial processing","authors":"M. Hartley, Neil Taylor, John Taylor","doi":"10.1109/IJCNN.2005.1556382","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556382","url":null,"abstract":"Spatial navigation is a critical part of animal behavior. Experimental data show that some cells in the hippocampus of animals engaged in exploration respond preferentially to particular physical locations. These place cells give us an important indication of hippocampal participation in spatial processing. Recent work has examined differences in place field representations between hippocampal subfields. We discuss these findings and show, using a computational model, how known aspects of hippocampal physiology can explain these differences.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130215767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Random projections for assessing gene expression cluster stability","authors":"A. Bertoni, G. Valentini","doi":"10.1109/IJCNN.2005.1555821","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1555821","url":null,"abstract":"Clustering analysis of gene expression is characterized by the very high dimensionality and low cardinality of the data, and two important related topics are the validation of the obtained clusters and the estimation of their number. In this paper we focus on estimating the stability of the clusters. Our approach to this problem is based on random projections obeying the Johnson-Lindenstrauss lemma, by which gene expression data may be projected into randomly selected low-dimensional subspaces, approximately preserving pairwise distances between examples. We experiment with different types of random projections, comparing empirical and theoretical distortions induced by randomized embeddings between Euclidean metric spaces, and we present cluster-stability measures that may be used to validate and to quantitatively assess the reliability of the clusters obtained by a large class of clustering algorithms. Experimental results with high dimensional synthetic and DNA microarray data show the effectiveness of the proposed approach.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130494005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
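A minimal sketch of the random-projection step, assuming a plain Gaussian projection matrix of the Johnson-Lindenstrauss type; the target dimension and the empirical distortion check are illustrative, not the specific families of projections compared in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

def gaussian_random_projection(X, k, seed=0):
    """Project n points in R^d onto a random k-dimensional subspace.

    Pairwise Euclidean distances are approximately preserved when k is on
    the order of log(n) / eps^2 (Johnson-Lindenstrauss lemma).
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(d, k)) / np.sqrt(k)   # scaled Gaussian projection matrix
    return X @ R

def empirical_distortion(X, X_proj):
    """Min/max ratio of projected to original pairwise distances
    (assumes no duplicate rows in X)."""
    ratio = pdist(X_proj) / pdist(X)
    return ratio.min(), ratio.max()
```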
{"title":"A model of Baldwin effect in populations of self-learning agents","authors":"V. Red'ko, O. P. Mosalov, D. Prokhorov","doi":"10.1109/IJCNN.2005.1556071","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556071","url":null,"abstract":"We study an evolution model of adaptive self-learning agents. The control system of the agents is based on a neural network adaptive critic design. Each agent is a broker that predicts stock price changes and uses its predictions for action selection. The agent tries to get rich by buying and selling stocks. We demonstrate that the Baldwin effect takes place in our model, viz., the originally acquired adaptive policy of an agent-broker becomes inherited in the course of evolution. In addition, we compare agent behavioral tactics with the searching behavior of simple animals.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127984585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are ARIMA neural network hybrids better than single models?","authors":"T. Taşkaya-Temizel, Khurshid Ahmad","doi":"10.1109/IJCNN.2005.1556438","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556438","url":null,"abstract":"Hybrid methods comprising autoregressive integrated moving average (ARIMA) and neural network models are generally favored against single neural network and single ARIMA models in the literature. The benefits of such methods appear to be substantial especially when dealing with non-stationary series: the non-stationary linear component can be modeled using ARIMA and the nonlinear component using neural networks. Our studies suggest that the use of a nonlinear component may degrade the performance of such hybrids, and that a simpler hybrid comprising a linear AR model with a TDNN outperforms the more complex hybrid in tests on benchmark economic and financial time series.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131355194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
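A minimal sketch of a linear-AR-plus-network hybrid of the kind the paper favours, assuming an ordinary least-squares AR fit and a small feed-forward network on the AR residuals; the lag order, network size, and the use of sklearn's MLPRegressor (rather than a TDNN proper) are illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def embed(series, p):
    """Lagged design matrix: row for time t holds series[t-1], ..., series[t-p]."""
    X = np.column_stack([series[p - i: len(series) - i] for i in range(1, p + 1)])
    return X, series[p:]

def fit_ar_nn_hybrid(series, p=4):
    """Fit a linear AR(p) model by OLS, then an MLP on its residuals."""
    X, y = embed(np.asarray(series, dtype=float), p)
    Xb = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)       # linear AR part
    resid = y - Xb @ coef
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    nn.fit(X, resid)                                     # nonlinear correction on residuals
    return coef, nn

def predict_hybrid(coef, nn, lags):
    """One-step-ahead forecast from the last p observations (most recent first)."""
    x = np.asarray(lags, dtype=float)
    return float(np.r_[1.0, x] @ coef + nn.predict(x.reshape(1, -1))[0])
```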
{"title":"An experimental study of several decision issues for feature selection with multi-layer perceptrons","authors":"E. Romero, J. Sopena","doi":"10.1109/IJCNN.2005.1556181","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556181","url":null,"abstract":"An experimental study of several decision issues for wrapper feature selection with multi-layer perceptrons is presented, namely the stopping criterion, the data set where the saliency is measured, and whether the network is retrained before computing the saliency. Experimental results with the sequential backward selection procedure indicate that the increase in computational cost associated with retraining the network with every feature temporarily removed before computing the saliency is rewarded with a significant performance improvement. Despite being quite intuitive, this idea has hardly been used in practice. Regarding the stopping criterion and the data set where the saliency is measured, the procedure benefits from measuring the saliency on a validation set, as reasonably expected. A somewhat non-intuitive conclusion can be drawn from the stopping criterion: the results suggest that forcing overtraining may be as useful as early stopping.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129190336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
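A minimal sketch of wrapper-style sequential backward selection with full retraining before each saliency measurement, assuming sklearn's MLPClassifier and validation-set accuracy as the saliency; the stopping rule used here (remove features while validation accuracy does not drop) is only one of the criteria the paper examines.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def sbs_with_retraining(Xtr, ytr, Xval, yval, min_features=1):
    """Sequential backward selection: retrain the MLP with each candidate
    feature temporarily removed, drop the feature whose removal hurts
    validation accuracy the least, stop when accuracy starts to degrade."""
    feats = list(range(Xtr.shape[1]))

    def val_acc(cols):
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(Xtr[:, cols], ytr)
        return clf.score(Xval[:, cols], yval)

    best_acc = val_acc(feats)
    while len(feats) > min_features:
        # one full retraining per temporarily-removed feature (the costly step)
        scores = [(val_acc([f for f in feats if f != r]), r) for r in feats]
        acc, worst = max(scores)
        if acc < best_acc:          # stopping criterion: validation accuracy drops
            break
        best_acc = acc
        feats.remove(worst)
    return feats, best_acc
```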
{"title":"Atmospheric correction and oceanic constituents retrieval, with a neuro-variational method","authors":"J. Brajard, S. Thiria, C. Jamet, C. Moulin","doi":"10.1109/IJCNN.2005.1556121","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556121","url":null,"abstract":"Ocean color sensors on board satellites measure the solar radiation reflected by the ocean and the atmosphere. Of this information, denoted reflectance, 90% is due to air molecules and aerosols in the atmosphere and only 10% to water molecules and phytoplankton cells in the ocean. Our method focuses on the retrieval of the chlorophyll-a concentration (chl-a), which is commonly used as a proxy for phytoplankton concentration. Our algorithm, denoted NeuroVaria, computes relevant atmospheric parameters (Angstrom coefficient, optical thickness, single-scattering albedo) and oceanic parameters (chl-a, oceanic particulate scattering) by minimizing the difference over the whole spectrum (visible + near infrared) between the observed reflectance and the reflectance computed from artificial neural networks trained with a radiative transfer model. NeuroVaria has been applied to SeaWiFS (sea-viewing wide field-of-view sensor) imagery in the Mediterranean Sea. A comparison with in-situ measurements of the water-leaving reflectance shows that NeuroVaria reconstructs this component at 443 nm better than the standard SeaWiFS processing. This leads to improved chl-a retrieval for the oligotrophic sea. The result is generalized to the entire Mediterranean Sea through weekly maps of chl-a.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126791375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
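A minimal sketch of the variational-inversion idea, assuming a generic pre-trained forward model mapping atmospheric/oceanic parameters to spectral reflectance and a least-squares spectral misfit minimized with scipy; the `forward_model` callable is a hypothetical placeholder, not the paper's radiative-transfer-trained networks.

```python
import numpy as np
from scipy.optimize import minimize

def invert_reflectance(observed_rho, forward_model, x0, bounds):
    """Retrieve parameters (e.g. Angstrom coefficient, optical thickness,
    chl-a) by minimizing the spectral misfit between observed reflectance
    and the reflectance produced by a learned forward model.

    observed_rho  : (n_bands,) measured reflectance over visible + NIR bands
    forward_model : callable params -> (n_bands,) simulated reflectance
    x0, bounds    : initial guess and physical bounds on the parameters
    """
    def cost(x):
        return float(np.sum((forward_model(x) - observed_rho) ** 2))

    res = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)
    return res.x, res.fun
```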
{"title":"A research on combination methods for ensembles of multilayer feedforward","authors":"J. Torres-Sospedra, M. Fernández-Redondo, C. Hernández-Espinosa","doi":"10.1109/IJCNN.2005.1556011","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556011","url":null,"abstract":"As shown in the literature, training an ensemble of networks is an interesting way to improve performance with respect to a single network. The two key factors in designing an ensemble are how to train the individual networks and how to combine the different outputs of the networks to give a single output class. In this paper, we focus on the combination methods. We study the performance of fourteen different combination methods for ensembles of the type \"simple ensemble\" and \"decorrelated\". In the case of the \"simple ensemble\" with a low number of networks, the Zimmermann method gives the best performance. When the number of networks is in the range of 9 to 20, the weighted average is the best alternative. Finally, in the case of the \"decorrelated\" ensemble, simple averaging is the best performing method over a wide range of ensemble sizes.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"4695 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126369984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
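A minimal sketch of two of the combination rules studied (simple averaging and weighted averaging of per-network class outputs), assuming each network returns a vector of class scores; weighting by normalized validation accuracy is one reasonable choice, not necessarily the exact weighting used in the paper.

```python
import numpy as np

def simple_average(outputs):
    """outputs: (n_nets, n_samples, n_classes) array of network class scores."""
    return np.asarray(outputs).mean(axis=0).argmax(axis=1)

def weighted_average(outputs, val_accuracies):
    """Weight each network's output by its normalized validation accuracy."""
    w = np.asarray(val_accuracies, dtype=float)
    w /= w.sum()
    combined = np.tensordot(w, np.asarray(outputs), axes=1)   # (n_samples, n_classes)
    return combined.argmax(axis=1)
```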
{"title":"A digital LSI architecture of elastic graph matching and its FPGA implementation","authors":"T. Nakano, T. Morie","doi":"10.1109/IJCNN.2005.1555935","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1555935","url":null,"abstract":"Elastic graph matching (EGM) is known as an excellent algorithm for human face recognition applications. This paper proposes a digital LSI architecture for EGM and a face/object recognition system using its FPGA implementation. In EGM, the evaluation-point graph is distorted to find the best trade-off between better matching in the feature space and less distortion of the graph. In the proposed architecture, a cache memory stores calculation results at the evaluation points and at their neighboring pixels to reduce the amount of computation. In the FPGA implementation with a system clock of 48 MHz, EGM between the input and one memorized image can be performed in about 1 ms.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126417794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
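A minimal sketch of the elastic graph matching trade-off as a cost function, assuming a graph of evaluation points with a feature vector stored at each node; the trade-off weight, rest edge length, and `image_feats_at` callable are illustrative assumptions about a generic software formulation, not the proposed LSI architecture.

```python
import numpy as np

def egm_cost(model_feats, image_feats_at, positions, edges, lam=0.1, rest_len=1.0):
    """Elastic graph matching cost: feature mismatch at the evaluation points
    plus a penalty on distortion of the evaluation-point graph.

    model_feats    : (n_nodes, d) stored feature vectors of the memorized image
    image_feats_at : callable (x, y) -> (d,) feature vector of the input image
    positions      : (n_nodes, 2) current node coordinates in the input image
    edges          : list of (i, j) index pairs defining the graph
    """
    feat_term = sum(np.linalg.norm(model_feats[k] - image_feats_at(*positions[k]))
                    for k in range(len(positions)))
    dist_term = sum((np.linalg.norm(positions[i] - positions[j]) - rest_len) ** 2
                    for i, j in edges)
    return feat_term + lam * dist_term
```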