The 2013 International Joint Conference on Neural Networks (IJCNN): Latest Publications

SOMMA: Cortically inspired paradigms for multimodal processing
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6706959
Mathieu Lefort, Y. Boniface, B. Girau
{"title":"SOMMA: Cortically inspired paradigms for multimodal processing","authors":"Mathieu Lefort, Y. Boniface, B. Girau","doi":"10.1109/IJCNN.2013.6706959","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706959","url":null,"abstract":"SOMMA (Self Organizing Maps for Multimodal Association) consists on cortically inspired paradigms for multimodal data processing. SOMMA defines generic cortical maps - one for each modality - composed of 3-layers cortical columns. Each column learns a discrimination to a stimulus of the input flow with the BCMu learning rule [26]. These discriminations are self-organized in each map thanks to the coupling with neural fields used as a neighborhood function [25]. Learning and computation in each map is influenced by other modalities thanks to bidirectional topographic connections between all maps. This multimodal influence drives a joint self-organization of maps and multimodal perceptions of stimuli. This work takes place after the design of a self-organizing map [25] and of a modulation mechanism for influencing its self-organization [26] oriented towards a multimodal purpose. In this paper, we introduce a way to connect these self-organizing maps to obtain a multimap multimodal processing, completing our previous work. We also give an overview of the architectural and functional properties of the resulting paradigm SOMMA.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124787771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
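The coupling described above (per-modality self-organizing maps whose learning is biased by the other modalities through topographic connections) can be illustrated with a deliberately simplified sketch: two toy 1-D SOMs trained on paired stimuli, where each map's winner is biased toward the position favoured by the other map. This is only an illustrative Python sketch under those assumptions; it does not reproduce the paper's BCMu learning rule, neural-field neighborhood, or three-layer columns.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, lr, sigma, coupling = 20, 0.1, 2.0, 0.3
grid = np.arange(n_units)

# One toy 1-D map per modality; each unit holds a 2-D prototype.
maps = [rng.random((n_units, 2)) for _ in range(2)]

def bmu(weights, x, bias_pos=None):
    """Best-matching unit; optionally biased toward the position favoured by the other map."""
    score = np.linalg.norm(weights - x, axis=1)
    if bias_pos is not None:
        score = score + coupling * np.abs(grid - bias_pos) / n_units
    return int(np.argmin(score))

for _ in range(5000):
    # Paired multimodal stimulus: both modalities are driven by the same latent cause.
    latent = rng.random()
    stimuli = [np.array([latent, 0.1 * rng.random()]) for _ in range(2)]

    # First pass gives each map's unimodal winner; the second pass biases each map's
    # winner toward the position proposed by the other modality (the multimodal influence).
    uni = [bmu(maps[m], stimuli[m]) for m in range(2)]
    win = [bmu(maps[m], stimuli[m], bias_pos=uni[1 - m]) for m in range(2)]

    # Standard SOM update around each (multimodally biased) winner.
    for m in range(2):
        h = np.exp(-((grid - win[m]) ** 2) / (2.0 * sigma**2))[:, None]
        maps[m] += lr * h * (stimuli[m] - maps[m])

# After training, the two maps should place the same latent value at similar grid positions.
test = np.linspace(0.0, 1.0, 11)
for m in range(2):
    print(f"map {m} winners:", [bmu(maps[m], np.array([v, 0.05])) for v in test])
```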
Overcoming the local-minimum problem in training multilayer perceptrons by gradual deconvexification
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6706796
J. Lo, Yichuan Gui, Yun Peng
{"title":"Overcoming the local-minimum problem in training multilayer perceptrons by gradual deconvexification","authors":"J. Lo, Yichuan Gui, Yun Peng","doi":"10.1109/IJCNN.2013.6706796","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706796","url":null,"abstract":"A method of training neural networks using the risk-averting error (RAE) criterion Jλ (w), which was presented in IJCNN 2001, has the capability to avoid nonglobal local minima, but suffers from a severe limitation on the magnitude of the risk-sensitivity index λ. To eliminating the limitation, an improved method using the normalized RAE (NRAE) Cλ (w) was proposed in ISNN 2012, but it requires a selection of a proper λ, whose range may be dependent on the application. A new training method called the gradual deconvexification (GDC) is proposed in this paper. It starts with a very large λ and gradually decreases it in the training process until a global minimum of Cλ (w) or a good generalization capability is achieved. GDC training method was tested on a large number of numerical examples and produced a very good result in each test.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125006965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
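A minimal sketch of the gradual-deconvexification idea, assuming an NRAE-style criterion of the form Cλ(w) = (1/λ) ln(mean_k exp(λ e_k²)); the exact normalization, the network, the data, and the λ schedule below are illustrative assumptions, not the authors' setup. The log-sum-exp form keeps very large λ numerically manageable, which is what makes the "start large, shrink gradually" schedule practical.

```python
import math
import torch
import torch.nn as nn

# NRAE-style criterion, assumed here as C_lambda(w) = (1/lambda) * ln(mean_k exp(lambda * e_k^2)).
# logsumexp keeps very large lambdas numerically stable; as lambda shrinks this approaches the MSE.
def nrae_loss(residuals, lam):
    e2 = residuals.pow(2).flatten()
    return (torch.logsumexp(lam * e2, dim=0) - math.log(e2.numel())) / lam

# Toy regression problem and a small MLP (illustrative, not the paper's test cases).
torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 200).unsqueeze(1)
y = torch.sin(4.0 * x) + 0.1 * torch.randn_like(x)
net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Gradual deconvexification: begin with a very large lambda (heavily convexified landscape)
# and shrink it step by step while training continues.
for lam in [1000.0, 300.0, 100.0, 30.0, 10.0, 3.0, 1.0]:
    for _ in range(500):
        opt.zero_grad()
        loss = nrae_loss(net(x) - y, lam)
        loss.backward()
        opt.step()
    print(f"lambda = {lam:7.1f}   NRAE = {loss.item():.4f}")
```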
Impact of variability in data on accuracy and diversity of neural network based ensemble classifiers
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6706986
Chien-Yuan Chiu, B. Verma, Michael M. Li
{"title":"Impact of variability in data on accuracy and diversity of neural network based ensemble classifiers","authors":"Chien-Yuan Chiu, B. Verma, Michael M. Li","doi":"10.1109/IJCNN.2013.6706986","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706986","url":null,"abstract":"Ensemble classifiers are very useful tools which can be applied for classification and prediction tasks in many real-world applications. There are many popular ensemble classifier generation techniques including neural network based techniques. However, there are many problems with ensemble classifiers when we apply them to real-world data of different size. This paper presents and investigates an approach for finding the impact of various parameters such as attributes, instances, classes on clusters, accuracy and diversity. The primary aim of this research is to see whether there is any link between these parameters and accuracy and diversity. The secondary aim is to see whether we can find any relationship between number of clusters in ensemble classifier and data variables. A series of experiments has been conducted by using different size of UCI machine learning benchmark datasets and neural network ensemble classifiers.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129638189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
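A minimal sketch of measuring ensemble accuracy together with a simple diversity measure (mean pairwise disagreement) for a neural-network-based ensemble, assuming scikit-learn, a bagged MLP ensemble, and a built-in dataset as a stand-in for the UCI benchmarks; none of this reproduces the paper's exact protocol or its cluster-based analysis.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Benchmark-style data (stand-in for the UCI datasets used in the paper).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Neural-network-based ensemble: bagged MLP base classifiers.
base = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
ensemble = BaggingClassifier(base, n_estimators=10, random_state=0)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))

# Diversity as mean pairwise disagreement between the base classifiers' test predictions.
preds = np.array([est.predict(X_te[:, feats])
                  for est, feats in zip(ensemble.estimators_, ensemble.estimators_features_)])
disagreement = np.mean([np.mean(preds[i] != preds[j])
                        for i, j in combinations(range(len(preds)), 2)])
print("mean pairwise disagreement (diversity):", disagreement)
```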
Active testing for SVM parameter selection
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6706910
P. Miranda, R. Prudêncio
{"title":"Active testing for SVM parameter selection","authors":"P. Miranda, R. Prudêncio","doi":"10.1109/IJCNN.2013.6706910","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706910","url":null,"abstract":"The Support Vector Machine algorithm is sensitive to the choice of parameter settings. If these are not set correctly, the algorithm may have a substandard performance. It has been shown that meta-learning can be used to support the selection of SVM parameters. However, it is very dependent on the quality of the dataset and the meta-features used to characterize the dataset. As alternative for this problem, a recent technique called Active Testing characterized a dataset based on the pairwise performance differences between possible solutions. This approach selects the most useful cross-validation tests. Each new cross-validation test will contribute information to a better estimate of dataset similarity, and thus better predict which algorithms are most promising on the new dataset. In this paper we propose the application of Active Testing for the SVM parameter problem. We test it on the problem of setting the RBF kernel parameters for classification problems and we compare its similarity strategy with based on data characteristics. The results showed the variants of Active Testing that rely on cross-validation tests to estimate dataset similarity provides better solutions than those that rely on data characteristics.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127168336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
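A rough sketch of the core idea: run a few cross-validation tests on the new dataset to decide which previously seen dataset it behaves like, then borrow that dataset's best SVM configuration. The configuration grid, the datasets, and the use of Spearman correlation over CV scores as the similarity measure are illustrative assumptions, not the authors' Active Testing procedure.

```python
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer, load_digits, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Candidate RBF-SVM configurations (the "possible solutions").
configs = [{"C": C, "gamma": g} for C in (0.1, 1, 10) for g in (1e-3, 1e-2, 1e-1)]

def cv_profile(X, y, subset=None):
    """Mean 3-fold CV accuracy of each candidate configuration (optionally only a few tests)."""
    idx = range(len(configs)) if subset is None else subset
    return {i: cross_val_score(make_pipeline(StandardScaler(), SVC(kernel="rbf", **configs[i])),
                               X, y, cv=3).mean()
            for i in idx}

# "Known" datasets with full profiles (offline meta-knowledge).
known = {name: cv_profile(*loader(return_X_y=True))
         for name, loader in [("wine", load_wine), ("digits", load_digits)]}

# New dataset: run only a few cross-validation tests...
X_new, y_new = load_breast_cancer(return_X_y=True)
probe = cv_profile(X_new, y_new, subset=[0, 4, 8])

# ...and rank the known datasets by how similarly the probed configurations behave on them.
def similarity(profile):
    common = sorted(probe)
    rho, _ = spearmanr([probe[i] for i in common], [profile[i] for i in common])
    return rho

best_neighbour = max(known, key=lambda name: similarity(known[name]))
recommended = configs[max(known[best_neighbour], key=known[best_neighbour].get)]
print("most similar dataset:", best_neighbour, "-> recommended config:", recommended)
```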
Geometrical facial modeling for emotion recognition
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6707085
Giampaolo L. Libralon, R. Romero
{"title":"Geometrical facial modeling for emotion recognition","authors":"Giampaolo L. Libralon, R. Romero","doi":"10.1109/IJCNN.2013.6707085","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707085","url":null,"abstract":"Facial expressions are the facial changes in response to a person's internal emotional states, intentions, or social communications. Facial expression analysis has been an active research topic for behavioral scientists since the work of Darwin in 1872. It includes both measurement of facial motion and recognition of expression. There are two different ways to analyze facial expressions: one considers facial affect (emotion) and the other facial muscular movements. Many researchers argue that there is a set of basic emotions which were preserved during evolutive process because they allow the adaption of the organisms behavior to distinct daily situations. This paper discusses emotion recognition based on analysis of facial elements. Different feature sets are proposed to represent the characteristics of the human face and their performance is evaluated using Machine Learning techniques. The results indicate that the selected facial features represent a valid approach for emotion identification.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127319874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
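A minimal sketch of the general pipeline (geometric features computed from facial landmarks, fed to a machine-learning classifier), assuming a hypothetical six-landmark layout and synthetic data; the feature sets and classifiers evaluated in the paper are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def geometric_features(landmarks):
    """Turn (x, y) facial landmarks into scale-invariant geometric features.

    Assumed (hypothetical) landmark order: left/right eye centres, left/right
    mouth corners, upper/lower lip midpoints.
    """
    l_eye, r_eye, l_mouth, r_mouth, lip_top, lip_bot = landmarks
    eye_dist = np.linalg.norm(r_eye - l_eye)              # normalisation reference
    mouth_width = np.linalg.norm(r_mouth - l_mouth) / eye_dist
    mouth_open = np.linalg.norm(lip_bot - lip_top) / eye_dist
    return np.array([mouth_width, mouth_open])

# Synthetic stand-in data: "happy" faces get wider mouths, "neutral" ones do not.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                                      # 0 = neutral, 1 = happy
    for _ in range(100):
        pts = rng.normal([[30, 40], [70, 40], [35, 70], [65, 70], [50, 68], [50, 74]], 1.5)
        if label:                                          # widen the mouth for "happy"
            pts[2, 0] -= 6.0
            pts[3, 0] += 6.0
        X.append(geometric_features(pts))
        y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```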
The race to new mathematics of brains and consciousness — A tribute to John G. Taylor
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6706712
R. Kozma
{"title":"The race to new mathematics of brains and consciousness — A tribute to John G. Taylor","authors":"R. Kozma","doi":"10.1109/IJCNN.2013.6706712","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706712","url":null,"abstract":"This contribution presents a review of mathematical approaches to modeling brains and higher cognitive activity, including consciousness. We dedicate this paper to John Gerard Taylor on the somber occasion of remembering his lifelong contribution to science, with a focus on his pioneering work on neural networks and brain studies.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131039281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Collaborative multi-view clustering
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6707037
Mohamad Ghassany, Nistor Grozavu, Younès Bennani
{"title":"Collaborative multi-view clustering","authors":"Mohamad Ghassany, Nistor Grozavu, Younès Bennani","doi":"10.1109/IJCNN.2013.6707037","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707037","url":null,"abstract":"The purpose of this article is to introduce a new collaborative multi-view clustering approach based on a probabilistic model. The aim of collaborative clustering is to reveal the common underlying structure of data spread across multiple data sites by applying clustering techniques. The strength of the collaboration between each pair of data repositories is determined by a fixed parameter. Previous works considered deterministic techniques such as Fuzzy C-Means (FCM) and Self-Organizing Maps (SOM). In this paper, we present a new approach for the collaborative clustering using a generative model, which is the Generative Topographic Mappings (GTM). Maps representing different sites could collaborate without recourse to the original data, preserving their privacy. We present the approach for multi-view collaboration using GTM, where data sets have the same observations but presented in different feature space; i.e. different dimensions. The proposed approach has been validated on several data sets, and experimental results have shown very promising performance.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130858490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 23
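The paper's collaboration mechanism is built on GTM; as a loose illustration of the collaboration idea only, the sketch below uses a much simpler hard, k-means-style variant in which each view's assignments are penalized for disagreeing with the other view's current partition, with the coupling weight beta standing in for the fixed collaboration-strength parameter. Everything here is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

def collaborative_kmeans(views, k=3, beta=1.0, n_iter=20, seed=0):
    """Hard k-means-style collaborative clustering of two views of the same observations.

    Each view keeps its own centroids; a point's assignment in one view is penalized
    (with weight beta) for disagreeing with its current assignment in the other view.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    init = rng.integers(k, size=n)
    labels = [init.copy() for _ in views]
    for _ in range(n_iter):
        # Per-view centroids from the current partitions (re-seed empty clusters randomly).
        centroids = [np.array([X[lab == j].mean(axis=0) if np.any(lab == j) else X[rng.integers(n)]
                               for j in range(k)])
                     for X, lab in zip(views, labels)]
        for v, X in enumerate(views):
            dist = ((X[:, None, :] - centroids[v][None]) ** 2).sum(axis=-1)
            scale = dist.mean()                      # put the penalty on the distance scale
            disagree = (np.arange(k)[None, :] != labels[1 - v][:, None])
            labels[v] = np.argmin(dist + beta * scale * disagree, axis=1)
    return labels

# Two synthetic "views" (different feature spaces) of the same 300 observations.
rng = np.random.default_rng(1)
truth = np.repeat([0, 1, 2], 100)
view1 = truth[:, None] * 4.0 + rng.normal(size=(300, 2))
view2 = truth[:, None] * np.array([2.0, -2.0, 1.0, 0.0, 3.0]) + rng.normal(size=(300, 5))

lab1, lab2 = collaborative_kmeans([view1, view2], k=3, beta=1.0)
print("cross-view agreement:", np.mean(lab1 == lab2))
```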
Real-time discrete neural identifier for a linear induction motor using a dSPACE DS1104 board
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6707109
Jorge D. Rios, A. Alanis, J. Rivera, M. Hernández-González
{"title":"Real-time discrete neural identifier for a linear induction motor using a dSPACE DS1104 board","authors":"Jorge D. Rios, A. Alanis, J. Rivera, M. Hernández-González","doi":"10.1109/IJCNN.2013.6707109","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707109","url":null,"abstract":"This paper presents a real-time discrete nonlinear neural identifier for a Linear Induction Motor (LIM). This identifier is based on a discrete-time recurrent high order neural network (RHONN) trained on-line with an extended Kalman filter (EKF)-based algorithm. A reduced order observer is used to estimate the secondary fluxes. The real-time implementation of the neural identifier is implemented by using dSPACE DS1104 controller board on MATLAB/Simulink with dSPACE RTI library and its performance is shown by graphs.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130292938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
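A minimal sketch of on-line EKF-based training of a recurrent high-order neural network identifier for a scalar discrete-time system; since the model is linear in the weights, the EKF "Jacobian" is just the regressor vector. The plant, the regressor terms, and the noise covariances are illustrative assumptions, not the paper's LIM model or its dSPACE implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Unknown plant (to be identified); purely illustrative discrete-time nonlinearity.
def plant(x, u):
    return 0.8 * np.sin(x) + 0.3 * x + u

# RHONN-style regressor: high-order products of sigmoidal state terms and the input.
def regressor(x, u):
    s = sigmoid(x)
    return np.array([s, s * s, s * u, u, 1.0])

# EKF-based on-line training of the weight vector w.
n = 5
w = np.zeros(n)
P = np.eye(n) * 100.0          # weight-error covariance
Q = np.eye(n) * 1e-4           # process-noise covariance (assumed)
R = 0.1                        # measurement-noise covariance (assumed)

rng = np.random.default_rng(0)
x = 0.0
for k in range(2000):
    u = np.sin(0.05 * k) + 0.2 * rng.standard_normal()   # persistently exciting input
    z = regressor(x, u)
    x_next = plant(x, u)
    x_hat = w @ z                                         # one-step-ahead prediction
    e = x_next - x_hat                                    # identification error
    K = P @ z / (R + z @ P @ z)                           # Kalman gain (H = z, linear in w)
    w = w + K * e
    P = P - np.outer(K, z @ P) + Q
    x = x_next

print("final weights:", np.round(w, 3))
print("final one-step prediction error:", abs(e))
```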
About analysis and robust classification of searchlight fMRI-data using machine learning classifiers
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6706990
M. Lange, M. Kaden, T. Villmann
{"title":"About analysis and robust classification of searchlight fMRI-data using machine learning classifiers","authors":"M. Lange, M. Kaden, T. Villmann","doi":"10.1109/IJCNN.2013.6706990","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706990","url":null,"abstract":"In the present paper we investigate the analysis of functional magnetic resonance image (fMRI) data based on voxel response analysis. All voxels in local spatial area (volume) of a considered voxel form its so-called searchlight. The searchlight for a presented task is taken as a complex pattern. Task dependent discriminant analysis of voxel is then performed by assessment of the discrimination behavior of the respective searchlight pattern for a given task. Classification analysis of these patterns is usually done using linear support vector machines (linSVMs) as a machine learning approach or another statistical classifier like linear discriminant classifier. The test classification accuracy determining the task sensitivity is interpreted as the discrimination ability of the related voxel. However, frequently, the number of voxels contributing to a searchlight is much larger than the number of available pattern samples in classification learning, i.e. the dimensionality of patterns is higher than the number of samples. Therefore, the respective underlying mathematical classification problem has not an unique solution such that a certain solution obtained by the machine learning classifier contains arbitrary (random) components. For this situation, the generalization ability of the classifier may drop down. We propose in this paper another data processing approach to reduce this problem. In particular, we reformulate the classification problem within the searchlight. Doing so, we avoid the dimensionality problem: We obtain a mathematically well-defined classification problem, such that generalization ability of a trained classifier is kept high. Hence, a better stability of the task discrimination is obtained. Additionally, we propose the utilization of generalized learning vector quantizers as an alternative machine learning classifier system compared to SVMs, to improve further the stability of the classifier model due to decreased model complexity.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129214407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
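A minimal sketch of baseline searchlight decoding (per-voxel cross-validated accuracy of a linear SVM on the voxel's local neighborhood), using a synthetic 1-D "volume"; it illustrates the setting described above, not the paper's reformulated problem or the GLVQ classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic "volume": 60 voxels along one axis, 40 trials, two experimental conditions.
rng = np.random.default_rng(0)
n_trials, n_voxels, radius = 40, 60, 3
y = np.repeat([0, 1], n_trials // 2)
data = rng.standard_normal((n_trials, n_voxels))
data[y == 1, 25:32] += 1.0            # voxels 25..31 carry task information

# Searchlight analysis: for each centre voxel, classify using its local neighbourhood
# and store the cross-validated accuracy as that voxel's discrimination score.
scores = np.zeros(n_voxels)
for v in range(n_voxels):
    lo, hi = max(0, v - radius), min(n_voxels, v + radius + 1)
    searchlight = data[:, lo:hi]
    scores[v] = cross_val_score(LinearSVC(), searchlight, y, cv=5).mean()

print("most informative voxels:", np.argsort(scores)[-5:])
```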
Adaptive control of discrete-time nonlinear systems by recurrent neural networks in a Quasi Sliding mode regime
The 2013 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2013-08-01 DOI: 10.1109/IJCNN.2013.6706995
I. Salgado, O. C. Nieto, I. Chairez, C. Yáñez-Márquez
{"title":"Adaptive control of discrete-time nonlinear systems by recurrent neural networks in a Quasi Sliding mode regime","authors":"I. Salgado, O. C. Nieto, I. Chairez, C. Yáñez-Márquez","doi":"10.1109/IJCNN.2013.6706995","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706995","url":null,"abstract":"The control problem of nonlinear systems affected by external perturbations and parametric uncertainties has attracted the attention for many researches. Artificial Neural Networks (ANN) constitutes an option for systems whose mathematical description is uncertain or partially unknown. In this paper, a Recurrent Neural Network (RNN) is designed to address the problems of identification and control of discrete-time nonlinear systems given by a gray box. The learning laws for the RNN are designed in terms of discrete-time Lyapunov stability. The control input is developed fulfilling the existence condition to establish a Quasi Sliding Regime. In means of Lyapunov stability, the identification and tracking errors are ultimately bounded in a neighborhood around zero. Numerical examples are presented to show the behavior of the RNN in the identification and control processes of a highly nonlinear discrete-time system, a Lorentz chaotic oscillator.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126409598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
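A minimal sketch of only the quasi-sliding-mode part of the story: a discrete (Gao-style) reaching law applied to a known scalar plant that stands in for the RNN-identified model. The plant, gains, and reference are illustrative assumptions; the paper's RNN identifier and Lyapunov-based learning laws are not reproduced.

```python
import numpy as np

# Known scalar plant standing in for the identified model (illustrative assumption).
a, b = 1.1, 0.5
q, eps = 0.5, 0.02        # reaching-law gains: 0 < q < 1; eps sets the quasi-sliding band width

x, band = 0.0, []
for k in range(300):
    r_now, r_next = np.sin(0.05 * k), np.sin(0.05 * (k + 1))
    s = x - r_now                                     # sliding variable = tracking error
    # Discrete reaching law s_{k+1} = (1 - q) * s_k - eps * sign(s_k): it drives s into a
    # bounded band around zero (a quasi-sliding regime) rather than an ideal sliding mode.
    u = (r_next + (1.0 - q) * s - eps * np.sign(s) - a * x) / b
    x = a * x + b * u                                 # plant update
    band.append(abs(s))

print("max |s| over the last 100 steps:", max(band[-100:]))
```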