{"title":"A dictionary based survival error compensation for robust adaptive filtering","authors":"Lei Sun, Badong Chen, Jie Yang, Ronghua Zhou, Qing Nie, Aihua Wang","doi":"10.1109/IJCNN.2016.7727363","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727363","url":null,"abstract":"Survival information potential (SIP) is defined by the survival distribution function instead of the probability density function (PDF) of a random variable. SIP can be used as a risk function with a learning error compensation ability, and this SIP-based risk function does not involve PDF estimation, which is desirable for robust learning applications. The learning error compensation scheme provided by SIP requires rank information of the learning errors. Accurate error compensation requires a large amount of input data and is therefore computationally expensive. It is shown that the error compensation can be approximated by an error-related distribution. Based on this approximation, a dictionary based error compensation scheme is proposed to obtain a fixed-budget recursive online learning method. The proposed method is compared with several well-known online learning methods, including the least-mean-square method, the least absolute deviation method, the affine projection algorithm, the recursive least-mean-square method, and the sliding-window based SIP method. Simulation results validate the outstandingly smooth and consistent convergence of the proposed method, particularly in α-stable noise environments.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114492621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Modal Local Receptive Field Extreme Learning Machine for object recognition","authors":"Fengxue Li, Huaping Liu, Xinying Xu, F. Sun","doi":"10.1109/IJCNN.2016.7727402","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727402","url":null,"abstract":"Learning rich representations efficiently plays an important role in multi-modal recognition tasks and is crucial to achieving high generalization performance. To address this problem, in this paper we propose an effective Multi-Modal Local Receptive Field Extreme Learning Machine (MM-ELM-LRF) structure that maintains ELM's advantage of training efficiency. In this structure, ELM-LRF is first used to extract features from each modality separately. Then, a shared layer is developed by combining the features from each modality. Finally, an Extreme Learning Machine (ELM) is used as the supervised classifier for the final decision. Experimental validation on the Washington RGB-D Object Dataset shows that the proposed multi-modal fusion method achieves better recognition performance.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"410 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129010872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ant colony optimization for the design of Modular Neural Networks in pattern recognition","authors":"F. Valdez, O. Castillo, P. Melin","doi":"10.1109/IJCNN.2016.7727194","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727194","url":null,"abstract":"We describe in this paper the architecture of a modular neural network (MNN) for pattern recognition. Recently, the theory of modular neural network techniques has been receiving significant attention, and the design of a recognition system also requires careful attention. This paper aims to use the Ant Colony paradigm to optimize the architecture of this modular neural network for pattern recognition, in order to obtain a good percentage of image identification in the shortest time possible.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129323346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neuro-fuzzy learning of locust's marching in a Swarm","authors":"G. Segal, A. Moshaiov, Guy Amichay, A. Ayali","doi":"10.1109/IJCNN.2016.7727335","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727335","url":null,"abstract":"This study deals with the identification of the behavior of an individual in a group of marching locusts, as observed under laboratory conditions. In particular, the study focuses on the intermittent motion (walking initiation and pausing) of the locusts using an Adaptive Neuro-Fuzzy Inference System (ANFIS). Several possible fuzzy rules were examined in a trial-and-error approach before establishing a reliable set of rules. Analysis of this set led to a reduced fuzzy controller. The results of this study serve as a first step towards the long-term goal of understanding how the behavior of an individual locust translates into collective swarm movement. As part of achieving this goal, we plan to build a locust-like robot and investigate its behavior within a living swarm of locusts. On a more general level, this study demonstrates, for the first time, that ANFIS can be used to support the understanding of biological systems by translating experimental data into meaningful control laws.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126890392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Meta-cognitive Regression Neural Network for function approximation: Application to Remaining Useful Life estimation","authors":"G. S. Babu, Xiaoli Li, S. Suresh","doi":"10.1109/IJCNN.2016.7727831","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727831","url":null,"abstract":"In this paper, we present a novel approach to the Remaining Useful Life (RUL) estimation problem in prognostics using a proposed sequential learning Meta-cognitive Regression Neural Network (McRNN) algorithm for function approximation. The McRNN has two components, namely a cognitive component and a meta-cognitive component. The cognitive component is an evolving single-hidden-layer Radial Basis Function (RBF) network with Gaussian activation functions. The meta-cognitive component helps the cognitive component select proper samples to learn based on its current knowledge, and evolves the architecture automatically. The McRNN employs an extended Kalman filter (EKF) to find optimal network parameters during training. First, the performance of the proposed sequential learning McRNN algorithm is evaluated on a set of benchmark function approximation problems and compared with existing sequential learning algorithms; the results show that McRNN outperforms the other algorithms. Next, the proposed McRNN algorithm is applied to the RUL estimation problem based on sensor data. For the simulation studies, we use the Prognostics Health Management (PHM) 2008 Data Challenge data set and compare against existing approaches based on state-of-the-art regression algorithms. The experimental results show that the proposed McRNN-based approach can accurately estimate the RUL of the system.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123820057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Short-term load forecasting using wavenet ensemble approaches","authors":"G. Ribeiro, Marcos Cesar Gritti, H. V. Ayala, V. Mariani, L. Coelho","doi":"10.1109/IJCNN.2016.7727272","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727272","url":null,"abstract":"Time series forecasting plays a key role in many areas of science, finance and engineering, mainly for estimating the trend or seasonality of a variable under observation, serving as a basis for future purchase decisions, choice of design parameters or maintenance schedules. Artificial Neural Networks (ANNs) have proven suitable for mapping linear or nonlinear functions. However, ANNs implemented in their most simplistic form tend to lose overall performance. This work aims to obtain a prediction model for a short-term load problem through the use of a wavenet ensemble, an ANN approach capable of combining the best characteristics of each ensemble component in order to achieve higher overall performance. We adopted bootstrapping, cross-validation and input decimation approaches for the ensemble construction. For component selection, `constructive' and `no selection' methods were applied. Finally, the combination is performed through simple average, mode or stacked generalization. The results show that it is possible to improve generalization ability through effective committees, depending on the methods used to construct the ensemble. The total relative improvement achieved with respect to the naive model was over 95%, regardless of the number of sub-wavenets, and for the best component the relative improvement was 93.91% using five wavenets. We conclude that the most frequent and effective configuration, though not always the one with the lowest MSE (mean squared error), was constructive bagging with simple average.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121414037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Margined Winner-Take-All: New learning rule for pattern recognition","authors":"K. Fukushima","doi":"10.1109/IJCNN.2016.7727304","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727304","url":null,"abstract":"The neocognitron is a deep (multi-layered) convolutional neural network that can be trained to recognize visual patterns robustly. In the intermediate layers of the neocognitron, local features are extracted from input patterns. In the deepest layer, input patterns are classified into classes based on the features extracted in the intermediate layers; a method called IntVec (interpolating-vector) is used for this purpose. This paper proposes a new learning rule called margined Winner-Take-All (mWTA) for training the deepest layer. Every time a training pattern is presented during learning, if the result of recognition by WTA (Winner-Take-All) is an error, a new cell is generated in the deepest layer. Here we add a certain amount of margin to the WTA: only during learning, a certain amount of handicap is given to cells of classes other than that of the training vector, and the winner is chosen under this handicap. By introducing the margin to the WTA, we can generate a compact set of cells with which a high recognition rate can be obtained at a small computational cost. The ability of this mWTA is demonstrated by computer simulation.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114316961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In the quest of efficient hardware implementations of dynamic neural fields: An experimental study on the influence of the kernel shape","authors":"Benoît Chappet de Vangel, J. Fix","doi":"10.1109/IJCNN.2016.7727446","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727446","url":null,"abstract":"The dynamic neural field (DNF) is a popular mesoscopic model of cortical column interactions. It is widely studied analytically and successfully applied to physiological modelling, bio-inspired computation and robotics. DNF behavior emerges from distributed and decentralized interactions between computing units, which makes it an interesting candidate as a cellular building block for unconventional computation. That is why we study the hardware implementation of DNFs on digital substrates (e.g. FPGAs). As shown in previous papers, this implementation requires several modifications to the equations in order to obtain decent hardware surface utilisation and clock speed. Here we show that modifying the lateral weight kernel function is possible as long as certain conditions, enumerated in Amari's seminal work, are respected. Thanks to metaheuristic optimisation, it is possible to find the right parameters for two behavioral scenarios of bio-inspired computational interest. We show that the two most hardware-friendly kernels (difference of linear functions and piecewise function) are as easy to tune as the traditional Mexican-hat kernel; however, the difference-of-exponentials kernel is more difficult to tune.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116185573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convolutional Neural Network based sentiment analysis using Adaboost combination","authors":"Yazhi Gao, Wenge Rong, Yikang Shen, Z. Xiong","doi":"10.1109/IJCNN.2016.7727352","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727352","url":null,"abstract":"Sentiment polarity detection has long been a hot task in natural language processing, since its applications range from product feedback analysis to user statement understanding. Recently, many machine learning approaches have been proposed in the literature, e.g., SVM, Naive Bayes, recursive neural networks and auto-encoders. Among these models, the Convolutional Neural Network (CNN) architecture has also demonstrated profound efficiency in NLP tasks, including sentiment classification. In a CNN, the width of a convolutional filter functions like the number N in an N-gram model; thus, different filter lengths may influence the performance of a CNN classifier. In this paper, we study the possibility of leveraging the contributions of different filter lengths and grasping their potential for the final polarity of the sentence. We then use Adaboost to combine classifiers with different filter sizes. The experimental study on commonly used datasets shows the method's potential in identifying the different roles of specific N-grams in a sentence and merging their contributions into a weighted classifier.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114715759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A simplified swarm optimization for object tracking","authors":"Guang Liu, Yuk Ying Chung, W. Yeh","doi":"10.1109/IJCNN.2016.7727195","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727195","url":null,"abstract":"Moving object tracking in video sequences is an important task in the field of computer vision. In this paper, we propose a new population-based algorithm, namely simplified swarm optimization (SSO), for tracking arbitrary objects. In SSO, the object model is first projected into a high-dimensional feature space; the particles then fly over image pixels to find an optimal match for the target. While searching for the optimum, SSO progressively analyzes the occlusion situation. If any occlusion or disappearance of the target object is detected, the movement rules for the searching particles are adaptively adjusted to recapture the target object. Experimental results show that SSO can robustly track an arbitrary target in various challenging conditions. Furthermore, SSO is 40% faster and achieves a 36% higher accuracy rate than traditional PSO in varied environments.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"343 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114818559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}