{"title":"Interval data clustering using self-organizing maps based on adaptive Mahalanobis distances","authors":"Chantal Hajjar, H. Hamdan","doi":"10.1109/IJCNN.2013.6706852","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706852","url":null,"abstract":"The self-organizing map is a kind of artificial neural network used to map high dimensional data into a low dimensional space. This paper presents a self-organizing map for interval-valued data based on adaptive Mahalanobis distances in order to do clustering of interval data with topology preservation. Two methods based on the batch training algorithm for the self-organizing maps are proposed. The first method uses a common Mahalanobis distance for all clusters. In the second method, the algorithm starts with a common Mahalanobis distance per cluster and then switches to use a different distance per cluster. This process allows a more adapted clustering for the given data set. The performances of the proposed methods are compared and discussed.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"318 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124501866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neurodynamic optimization approaches to robust pole assignment based on alternative robustness measures","authors":"Xinyi Le, Jun Wang","doi":"10.1109/IJCNN.2013.6706834","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706834","url":null,"abstract":"This paper presents new results on neurodynamic optimization approaches to robust pole assignment based on four alternative robustness measures. One or two recurrent neural networks are utilized to optimize these measures while making exact pole assignment. Compared with existing approaches, the present neurodynamic approaches can result in optimal robustness in most cases with one of the robustness measures. Simulation results of the proposed approaches for many benchmark problems are reported to demonstrate their performances.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124529491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The convergence rate of linearly separable SMO","authors":"J. Lázaro, José R. Dorronsoro","doi":"10.1109/IJCNN.2013.6707034","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707034","url":null,"abstract":"It is well known that the dual function value sequence generated by SMO has a linear convergence rate when the kernel matrix is positive definite and sublinear convergence is also known to hold for a general matrix. In this paper we will prove that, when applied to hard-margin, i.e., linearly separable SVM problems, a linear convergence rate holds for the SMO algorithm without any condition on the kernel matrix. Moreover, we will also show linear convergence for the multiplier sequence generated by SMO, the corresponding weight vectors and the KKT gap usually applied to control the number of SMO iterations. This gives a fairly complete picture of the convergence of the various sequences SMO generates. While linear SMO convergence for the general SVM L1 soft margin problem is still open, the approach followed here may lead to such a general result.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114439962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autism spectrum disorder detection using projection based learning meta-cognitive RBF network","authors":"Vigneshwaran Senthilvel, B. S. Mahanand, S. Sundaram, R. Savitha","doi":"10.1109/IJCNN.2013.6706777","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706777","url":null,"abstract":"In this paper, we present an approach for the diagnosis of Autism Spectrum Disorder (ASD) from Magnetic Resonance Imaging (MRI) scans with Voxel-Based Morphometry (VBM) detected features using Projection Based Learning (PBL) algorithm for a Meta-cognitive Radial Basis Function Network (McRBFN) classifier. McRBFN emulates human-like meta-cognitive learning principles. As each sample is presented to the network, the McRBFN uses the estimated class label, the maximum hinge error and class-wise significance to address the self-regulating principles of what-to-learn, when-to-learn and how-to-learn in a meta-cognitive framework. Initially, McRBFN begins with zero hidden neurons and adds required number of neurons to approximate the decision surface. When a neuron is added, its parameters are initialized based on the sample overlapping conditions. The output weights are updated using a PBL algorithm such that the network finds the minimum point of an energy function defined by the hinge-loss error. Moreover, as samples with similar information are deleted, over-training is avoided. The PBL algorithm helps to reduce the computational effort used in training. For simulation studies, we have used MR images from the Autism Brain Imaging Data Exchange (ABIDE) data set. The performance of the PBL-McRBFN classifier is evaluated on complete morphometric features set obtained from the VBM analysis. The performance evaluation study clearly indicates the superior performance of PBL-McRBFN classifier over other classification algorithms.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114586738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classificability-regulated self-organizing map using restricted RBF","authors":"P. Hartono, T. Trappenberg","doi":"10.1109/IJCNN.2013.6706732","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706732","url":null,"abstract":"In this paper, we propose a hierarchical neural network similar to the Radial Basis Function (RBF) Network. The proposed Restricted RBF (rRBF) executes a neighborhood-restricted activation function for its hidden neurons and consequently generates a unique topological map, which differs from the conventional Self-Organizing Map, in its internal layer. The primary objective of this study is to visualize and study the emergence of order in the structure and investigate the relation between the order and the learning performance of a hierarchical neural network.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117193431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exogenous control and dynamical reduction of echo state networks","authors":"Patrick Stinson, Keith A. Bush","doi":"10.1109/IJCNN.2013.6706898","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706898","url":null,"abstract":"In this paper, we demonstrate that a Q-Learning control policy with a Growing Neural Gas state space approximation is sufficient to control echo state neural networks of arbitrary dynamical complexity in a discrete time model, given sufficient input gain. We control through a single input unit fully connected to an echo state reservoir; our influence of the system is constrained to the input only - no weights are modified after the network is initialized. Our methodology is successful for both temporal and spatial control goals. However, control of increasingly complex systems requires increasing saturation of units' activation function non-linearities, which we achieve by increasing the input gain. We find that when subjected to the minimal gain needed for control goals, systems of varying levels of dynamical complexity are reduced to very similar levels. However, even in such reduced circumstances, our control framework is still advantageous or necessary to achieve performance above chance levels.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117258627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust human action recognition via long short-term memory","authors":"Alexander Grushin, Derek Monner, J. Reggia, A. Mishra","doi":"10.1109/IJCNN.2013.6706797","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706797","url":null,"abstract":"The long short-term memory (LSTM) neural network utilizes specialized modulation mechanisms to store information for extended periods of time. It is thus potentially well-suited for complex visual processing, where the current video frame must be considered in the context of past frames. Recent studies have indeed shown that LSTM can effectively recognize and classify human actions (e.g., running, hand waving) in video data; however, these results were achieved under somewhat restricted settings. In this effort, we seek to demonstrate that LSTM's performance remains robust even as experimental conditions deteriorate. Specifically, we show that classification accuracy exhibits graceful degradation when the LSTM network is faced with (a) lower quantities of available training data, (b) tighter deadlines for decision making (i.e., shorter available input data sequences) and (c) poorer video quality (resulting from noise, dropped frames or reduced resolution). We also clearly demonstrate the benefits of memory for video processing, particularly, under high noise or frame drop rates. Our study is thus an initial step towards demonstrating LSTM's potential for robust action recognition in real-world scenarios.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116276676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memristor-based synapse design and a case study in reconfigurable systems","authors":"Feng Ji, Hai Helen Li, B. Wysocki, C. Thiem, N. McDonald","doi":"10.1109/IJCNN.2013.6706776","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706776","url":null,"abstract":"Scientists have dreamed of an information system with cognitive human-like skills for years. However, constrained by the device characteristics and rapidly increasing design complexity under the traditional processing technology, little progress has been made in hardware implementation. The recently popularized memristor offers a potential breakthrough for neuromorphic computing because of its unique properties including nonvolatily, extremely high fabrication density, and sensitivity to historic voltage/current behavior. In this work, we first investigate the memristor-based synapse design and the corresponding training scheme. Then, a case study of an 8-bit arithmetic logic unit (ALU) design is used to demonstrate the hardware implementation of reconfigurable system built based on memristor synapses.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"692 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123682698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spiking thalamus model for form and motion processing of images","authors":"Suhas E. Chelian, N. Srinivasa","doi":"10.1109/IJCNN.2013.6706790","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706790","url":null,"abstract":"The thalamus, far from being a simple relay, supports several functions including attention and awareness. Recent spiking models of the thalamus tend to focus on abstract thalamocortical features such as rhythms and synchrony. Here a new spiking retino-thalamic model is presented that reproduces several aspects in visual processing including distinct form and motion processing pathways. Using test and natural image sequences, differences between parvocellular and magnocellular relay neurons are studied. In line with several experimental results, parvocellular neurons are found to be more sensitive to changes in color (necessary for form processing) than temporal frequency (necessary for motion processing) and conversely for magnocellular neurons. This model can in turn be used as input into subsequent cortical models or as a tool to aid in experimentation. Future extensions could include modeling brainstem or cortical influence on thalamic processing, as well as the control of virtual agents.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125944636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kernel-based distance metric learning in the output space","authors":"Cong Li, M. Georgiopoulos, G. Anagnostopoulos","doi":"10.1109/IJCNN.2013.6706862","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706862","url":null,"abstract":"In this paper we present two related, kernel-based Distance Metric Learning (DML) methods. Their respective models non-linearly map data from their original space to an output space, and subsequent distance measurements are performed in the output space via a Mahalanobis metric. The dimensionality of the output space can be directly controlled to facilitate the learning of a low-rank metric. Both methods allow for simultaneous inference of the associated metric and the mapping to the output space, which can be used to visualize the data, when the output space is 2-or 3-dimensional. Experimental results for a collection of classification tasks illustrate the advantages of the proposed methods over other traditional and kernel-based DML approaches.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124696766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}