{"title":"A novel facial feature extraction method based on ICM network for affective recognition","authors":"F. Mokhayeri, M. Akbarzadeh-T.","doi":"10.1109/IJCNN.2011.6033469","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033469","url":null,"abstract":"This paper presents a facial expression recognition approach for recognizing affective states. Feature extraction is a vital step in the recognition of facial expressions. In this work, a novel facial feature extraction method based on the Intersecting Cortical Model (ICM) is proposed. The ICM network, a simplified version of the Pulse-Coupled Neural Network (PCNN) model, has great potential for pixel grouping. In the proposed method, the normalized face image is segmented into two regions, the mouth and the eyes, using fuzzy c-means clustering (FCM). The segmented face images are fed into an ICM network with 300 iterations, and the pulse image produced by the ICM network is chosen as the face code; a support vector machine (SVM) is then trained to discriminate between expressions and thereby distinguish the different affective states. To evaluate the performance of the proposed algorithm, a face image dataset is constructed and the algorithm is used to classify seven basic expressions, including happiness, sadness, fear, anger, surprise, and hate. The experimental results confirm that the ICM network has great potential for facial feature extraction and that the proposed method for human affective recognition is promising. Fast feature extraction is the main advantage of this method, making it useful for real-world applications.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"34 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125615036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Organizing Neural Population Coding for improving robotic visuomotor coordination","authors":"Tao Zhou, P. Dudek, Bertram E. Shi","doi":"10.1109/IJCNN.2011.6033393","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033393","url":null,"abstract":"We present an extension of Kohonen's Self-Organizing Map (SOM) algorithm called the Self Organizing Neural Population Coding (SONPC) algorithm. The algorithm adapts online the neural population encoding of the sensory and motor coordinates of a robot according to the underlying data distribution. By allocating more neurons to areas of the sensory or motor space that are visited more frequently, this representation improves the accuracy of a robot system on a visually guided reaching task. We also suggest a Mean Reflection method to solve the notorious border effect problem encountered with SOMs, for the special case where the latent space and the data space have the same dimension.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125618842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural language generation using automatically constructed lexical resources","authors":"Naho Ito, M. Hagiwara","doi":"10.1109/IJCNN.2011.6033329","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033329","url":null,"abstract":"One of the practical targets of neural network research is to enable conversation with humans. This paper proposes a novel natural language generation method using automatically constructed lexical resources. In the proposed method, two lexical resources are employed: Kyoto University's case frame data and the Google N-gram data. The word frequencies in the case frames can be regarded as obtained through Hebb's learning rule, and the co-occurrence frequencies in the Google N-gram data can be considered the product of an associative memory. The proposed method takes words as input and generates a sentence from case frames, using the Google N-gram data to account for co-occurrence frequencies between words. Because only automatically constructed lexical resources are used, the proposed method has higher coverage than methods that rely on manually constructed templates. We carried out experiments to examine the quality of the generated sentences and obtained satisfactory results.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128043219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new algorithm for graph mining","authors":"B. Chandra, Shalini Bhaskar","doi":"10.1109/IJCNN.2011.6033330","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033330","url":null,"abstract":"Mining frequent substructures has gained importance in the recent past. A number of algorithms have been presented for mining undirected graphs. The focus of this paper is on mining frequent substructures in directed labeled graphs, since this has a variety of applications in areas such as biology and web mining. A novel approach using the equivalence class principle is proposed for reducing the size of the graph database to be processed when finding frequent substructures. To generate candidate substructures, a combination of the L-R join operation and serial and mixed extensions is carried out. This avoids missing any candidate substructures while ensuring that candidate substructures with a high probability of becoming frequent are generated.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132749789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive self-protective motion based on reflex control","authors":"T. Shimizu, R. Saegusa, Shuhei Ikemoto, H. Ishiguro, G. Metta","doi":"10.1109/IJCNN.2011.6033596","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033596","url":null,"abstract":"This paper describes a self-protective whole-body control method for humanoid robots. A set of postural reactions is used to create whole-body movements; the reactions are merged to cope with an arbitrary falling direction while allowing the upper limbs to make safe contact with obstacles. Collision detection is achieved by force sensing. We verified in simulation that our method generates the self-protective motion in real time and reduces the impact energy in multiple situations. We also verified that our system works adequately on a real robot.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133122791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finding dependent and independent components from two related data sets","authors":"J. Karhunen, T. Hao","doi":"10.1109/IJCNN.2011.6033257","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033257","url":null,"abstract":"Independent component analysis (ICA) and blind source separation (BSS) are usually applied to a single data set. Both techniques are nowadays well understood, and several good methods based on somewhat varying assumptions about the data are available. In this paper, we consider an extension of ICA and BSS for separating mutually dependent and independent components from two different but related data sets. This problem is important in practice, because such data sets are common in real-world applications. We propose a new method which first uses canonical correlation analysis (CCA) to detect subspaces of independent and dependent components; standard ICA and BSS methods can then be used for the final separation of these components. The proposed method performs excellently on synthetic data sets for which the assumed data model holds exactly, and provides meaningful results for real-world robot grasping data. The method has a sound theoretical basis, is straightforward to implement, and is computationally not too demanding. Moreover, the proposed method has a very important by-product: it clearly improves the separation results provided by the FastICA and UniBSS methods used in our experiments. Not only are the signal-to-noise ratios of the separated sources often clearly higher, but CCA preprocessing also helps FastICA to separate sources that it alone cannot separate.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133211857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditional multi-output regression","authors":"Chao Yuan","doi":"10.1109/IJCNN.2011.6033220","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033220","url":null,"abstract":"In multi-output regression, the goal is to establish a mapping from inputs to multivariate outputs that are often assumed unknown. However, in practice, some outputs may become available. How can we use this extra information to improve our prediction on the remaining outputs? For example, can we use the job data released today to better predict the house sales data to be released tomorrow? Most previous approaches use a single generative model to model the joint predictive distribution of all outputs, based on which unknown outputs are inferred conditionally from the known outputs. However, learning such a joint distribution for all outputs is very challenging and also unnecessary if our goal is just to predict each of the unknown outputs. We propose a conditional model to directly model the conditional probability of a target output on both inputs and all other outputs. A simple generative model is used to infer other outputs if they are unknown. Both models only consist of standard regression predictors, for example, Gaussian process, which can be easily learned.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133634968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sparse analog associative memory via L1-regularization and thresholding","authors":"R. Chalasani, J. Príncipe","doi":"10.1109/IJCNN.2011.6033470","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033470","url":null,"abstract":"The CA3 region of the hippocampus acts as an auto-associative memory and is responsible for the consolidation of episodic memory. Two important characteristics of such a network are the sparsity of the stored patterns and the nonsaturating firing-rate dynamics. To construct such a network, we use a maximum a posteriori cost function, regularized with the L1-norm, to change the internal state of the neurons. A linear thresholding function is then used to obtain the desired output firing rate. We show how such a model leads to a more biologically plausible dynamic model which produces a sparse output and recalls with good accuracy when the network is presented with a corrupted input.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132198252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Jordan network for nonlinear time series prediction","authors":"Q. Song","doi":"10.1109/IJCNN.2011.6033550","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033550","url":null,"abstract":"We propose a robust initialization of the Jordan network with recurrent constrained learning (RIJNRCL) algorithm for multilayered recurrent neural networks (RNNs). This novel algorithm is based on the constrained learning concept of the Jordan network, with recurrent sensitivity and weight convergence analysis used to obtain a tradeoff between training and testing errors. In addition to using the classical techniques of an adaptive learning rate and an adaptive dead zone, RIJNRCL uses a recurrent constrained parameter matrix to switch off excessive contributions of the hidden-layer neurons, based on the weight convergence and stability conditions of the multilayered RNNs.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115056407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forecasting time series with a logarithmic model for the Polynomial Artificial Neural Networks","authors":"J. C. Luna-Sanchez, E. Gómez-Ramírez, K. Najim, E. Ikonen","doi":"10.1109/IJCNN.2011.6033576","DOIUrl":"https://doi.org/10.1109/IJCNN.2011.6033576","url":null,"abstract":"The adaptation of Polynomial Artificial Neural Networks (PANN) to use not only integer but also fractional exponents has shown evidence of better performance, especially on nonlinear and chaotic time series. In this paper we compare the improved fractional-exponent PANN model with a new logarithmic model, and show that the new model performs even better than the previous improved PANN model.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115231898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}