The 2011 International Joint Conference on Neural Networks: Latest Publications

Reinforcement active learning hierarchical loops
Goren Gordon, E. Ahissar
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033617
Abstract: A curious agent, be it a robot, an animal, or a human, acts so as to learn as much as possible about itself and its environment. Such an agent can also learn without external supervision: it actively probes its surroundings and autonomously induces the relations between its actions' effects on the environment and the resulting sensory input. We present a model of hierarchical motor-sensory loops for such an autonomous, actively learning agent, i.e., a model that selects the appropriate action in order to optimize the agent's learning. Furthermore, learning one motor-sensory mapping enables the learning of other mappings, thus increasing the extent and diversity of knowledge and skills, usually in a hierarchical manner. Each loop attempts to optimally learn a specific correlation between the agent's available internal information, e.g. sensory signals and motor efference copies, by finding the action that optimizes that learning. We demonstrate this architecture on the well-studied vibrissae system, and show how sensory-motor loops are actively learned from the bottom up, starting with the forward and inverse models of whisker motion and then extending them to object localization. The model predicts a transition from free-air whisking, which optimally learns the self-generated motor-sensory mapping, to touch-induced palpation, which optimizes object localization; both are observed in naturally behaving rats.
Citations: 16
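The core idea of acting so as to optimize one's own learning can be sketched in a few lines. The toy loop below is not the paper's vibrissae model; it only illustrates curiosity-driven action selection, and the uncertainty bookkeeping (decay factor, learning rate) is an arbitrary assumption for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unknown motor-sensory mapping the agent must learn: sensory = f(action).
true_f = lambda a: np.sin(a)

actions = np.linspace(0, 2 * np.pi, 20)
var = np.ones_like(actions)        # per-action predictive uncertainty
model = np.zeros_like(actions)     # learned estimate of f at each action

for step in range(50):
    a = int(np.argmax(var))        # curiosity: probe where we know least
    obs = true_f(actions[a]) + rng.normal(scale=0.05)
    model[a] += 0.5 * (obs - model[a])  # update the motor-sensory estimate
    var[a] *= 0.7                       # uncertainty shrinks with practice

print("max residual uncertainty:", var.max())
```

Because the agent always probes its most uncertain action, coverage of the action space emerges without any external supervision signal.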
Lag selection for time series forecasting using Particle Swarm Optimization
Gustavo H. T. Ribeiro, P. S. D. M. Neto, George D. C. Cavalcanti, Ing Ren Tsang
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033535
Abstract: Time series forecasting is useful in many areas of knowledge, such as biology, economics, and climatology. A very important step in time series prediction is the correct selection of past observations (lags). This paper applies a new particle-swarm algorithm to feature selection on time series: Frankenstein's Particle Swarm Optimization (FPSO). Many filter and wrapper methods have been proposed for feature selection, but these approaches are limited by properties of the data set, such as its size and whether it is linear. Optimization algorithms such as FPSO make no assumptions about the data and converge faster. Hence, FPSO can find a good set of lags for time series forecasting and produce more accurate forecasts. Two prediction models were used: a Multilayer Perceptron neural network (MLP) and Support Vector Regression (SVR). The results show that the approach improves on previous results and that forecasting with SVR produces the best results; moreover, feature selection with FPSO outperforms feature selection with the original Particle Swarm Optimization.
Citations: 30
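FPSO itself is a composite of several PSO variants and is not reproduced here, but the lag-selection setup can be sketched with a plain binary PSO: each particle is a bit mask over candidate lags, and fitness is the validation MSE of a linear autoregressive model on the selected lags. The series, swarm sizes, and coefficients below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series driven only by lags 1 and 3; the swarm should favor them.
n = 300
y = np.zeros(n)
for t in range(3, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 3] + rng.normal(scale=0.05)

MAX_LAG = 6

def fitness(mask):
    """Validation MSE of a linear autoregressive model on the chosen lags."""
    if not mask.any():
        return np.inf
    lags = np.flatnonzero(mask) + 1
    X = np.column_stack([y[MAX_LAG - l:-l] for l in lags])
    target = y[MAX_LAG:]
    split = 2 * len(target) // 3
    coef, *_ = np.linalg.lstsq(X[:split], target[:split], rcond=None)
    resid = target[split:] - X[split:] @ coef
    return float(np.mean(resid ** 2))

# Plain binary PSO: velocities pass through a sigmoid to give bit probabilities.
n_particles, iters = 12, 30
pos = rng.random((n_particles, MAX_LAG)) > 0.5
vel = rng.normal(size=(n_particles, MAX_LAG))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_fit)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, MAX_LAG))
    vel = (0.7 * vel
           + 1.5 * r1 * (pbest.astype(float) - pos)
           + 1.5 * r2 * (gbest.astype(float) - pos))
    pos = rng.random((n_particles, MAX_LAG)) < 1.0 / (1.0 + np.exp(-vel))
    fit = np.array([fitness(p) for p in pos])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmin(pbest_fit)].copy()

print("selected lags:", np.flatnonzero(gbest) + 1)
```

Swapping the linear least-squares model for an MLP or SVR inside `fitness` gives the wrapper setup the paper evaluates.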
Solving Traveling Salesman Problem by a hybrid combination of PSO and Extremal Optimization
Saeed Khakmardan, H. Poostchi, M. Akbarzadeh-T.
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033402
Abstract: Particle Swarm Optimization (PSO) has received great attention in recent years as a successful global search algorithm, due to its simple implementation and inexpensive computational overhead. However, PSO still suffers from early convergence to locally optimal solutions. Extremal Optimization (EO) is a local search algorithm that has been able to solve NP-hard optimization problems. Combining PSO with EO benefits from the exploration ability of PSO and the exploitation ability of EO, and reduces the probability of early trapping in local optima. In other words, thanks to EO's strong local search capability, PSO can focus on global search, aided by a new mutation operator that prevents loss of variety among the particles; the operator is applied when a particle's parameters exceed the problem bounds. The resulting hybrid algorithm, Mutated PSO-EO (MPSO-EO), is then applied to the Traveling Salesman Problem (TSP), an NP-hard multimodal optimization problem. The performance of the proposed approach is compared with several other metaheuristic methods on 3 well-known TSP databases and 10 unimodal and multimodal benchmark functions.
Citations: 4
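The TSP version requires permutation-specific operators, but the division of labor in a PSO-EO hybrid shows up already on the continuous benchmark functions the abstract mentions. In this sketch (not the authors' MPSO-EO; parameters and the mutation scheme are assumptions), PSO explores while an EO-style move exploits the swarm's best solution by resampling only its worst-contributing coordinate, which is easy to identify because Rastrigin is separable.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

DIM, SWARM, ITERS, BOUND = 5, 20, 150, 5.12

def eo_refine(x, trials=20):
    """EO-style exploitation: resample only the worst coordinate, i.e. the
    one contributing most to the (separable) Rastrigin cost."""
    best, best_f = x.copy(), rastrigin(x)
    for _ in range(trials):
        cand = best.copy()
        contrib = cand**2 - 10 * np.cos(2 * np.pi * cand) + 10
        cand[np.argmax(contrib)] = rng.uniform(-BOUND, BOUND)
        f = rastrigin(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

pos = rng.uniform(-BOUND, BOUND, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pbest_f = pos.copy(), np.array([rastrigin(p) for p in pos])

for _ in range(ITERS):
    g = pbest[np.argmin(pbest_f)]
    r1, r2 = rng.random((2, SWARM, DIM))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = np.clip(pos + vel, -BOUND, BOUND)
    f = np.array([rastrigin(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    # hand the swarm's current best to the EO exploitation step
    i = int(np.argmin(pbest_f))
    refined, rf = eo_refine(pbest[i])
    if rf < pbest_f[i]:
        pbest[i], pbest_f[i] = refined, rf

print("best Rastrigin cost:", pbest_f.min())
```

For TSP proper, the EO move would instead re-route the city with the worst local tour contribution (e.g. the longest adjacent edges).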
Fast AdaBoost training using weighted novelty selection
Mojtaba Seyedhosseini, António R. C. Paiva, T. Tasdizen
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033366
Abstract: In this paper, a new AdaBoost learning framework, called WNS-AdaBoost, is proposed for training discriminative models. The proposed approach significantly speeds up the learning process of adaptive boosting (AdaBoost) by reducing the number of data points. For this purpose, we introduce the weighted novelty selection (WNS) sampling strategy and combine it with AdaBoost to obtain an efficient and fast learning algorithm. WNS selects a representative subset of the data, thereby reducing the number of data points on which AdaBoost operates. In addition, WNS associates a weight with each selected data point such that the weighted subset approximates the distribution of all the training data. This ensures that AdaBoost can be trained efficiently and with minimal loss of accuracy. The performance of WNS-AdaBoost is first demonstrated on a classification task. Then, WNS is employed in a probabilistic boosting-tree (PBT) structure for image segmentation. Results in these two applications show that the training time with WNS-AdaBoost is greatly reduced, at the cost of only a few percent of accuracy.
Citations: 23
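The pattern of "summarize, then boost on the weighted summary" can be sketched with a simple greedy novelty sampler in front of scikit-learn's AdaBoost. The radius-based sampler below is an assumed stand-in for the paper's WNS (whose exact selection rule is not given in the abstract); the key ideas it preserves are that each kept point carries a weight equal to the number of points it absorbed, and that AdaBoost then trains only on the subset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

def weighted_novelty_selection(X, radius):
    """Greedy one-pass sampling: keep a point only if it lies farther than
    `radius` from every point kept so far; a kept point's weight counts the
    points it absorbed, so the weighted subset tracks the data density."""
    kept, weights = [], []
    for i, x in enumerate(X):
        if kept:
            d = np.linalg.norm(X[kept] - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= radius:
                weights[j] += 1.0   # absorbed by an existing representative
                continue
        kept.append(i)              # novel point becomes a representative
        weights.append(1.0)
    return np.array(kept), np.array(weights)

idx, w = weighted_novelty_selection(X, radius=2.5)

# Boost on the small weighted subset instead of all 2000 points.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X[idx], y[idx], sample_weight=w / w.sum())
print(f"kept {len(idx)} of {len(X)} points, accuracy {clf.score(X, y):.3f}")
```

The speed-up comes from AdaBoost's per-round cost scaling with the number of training points; the weights keep the subset's implied distribution close to the full data's.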
Modularity adaptation in cooperative coevolution of feedforward neural networks
Rohitash Chandra, Marcus Frean, Mengjie Zhang
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033287
Abstract: In this paper, an adaptive-modularity cooperative coevolutionary framework is presented for training feedforward neural networks. The modularity adaptation framework is composed of different neural network encoding schemes that transform from one level to another based on the network error. The proposed framework is compared with canonical cooperative coevolutionary methods. The results show that it outperforms its counterparts in terms of training time, success rate, and scalability.
Citations: 11
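Cooperative coevolution decomposes a network into subcomponents, each evolved in its own subpopulation and evaluated together with the best members of the other subpopulations. The sketch below fixes one encoding level (one subpopulation per neuron) on XOR; the paper's adaptive switching between encoding levels driven by network error is not reproduced, and the mutation-based evolution step is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# 2-2-1 tanh network on XOR, one subpopulation per neuron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

def error(h1, h2, out):
    """MSE of the network; each argument is one neuron's 3-vector
    (two input weights + bias)."""
    H = np.tanh(X @ np.array([h1[:2], h2[:2]]).T + [h1[2], h2[2]])
    pred = np.tanh(H @ out[:2] + out[2])
    return float(np.mean((pred - Y) ** 2))

POP, GENS, SIGMA = 30, 150, 0.4
pops = [rng.normal(0, 1, (POP, 3)) for _ in range(3)]   # h1, h2, out
best = [p[0].copy() for p in pops]
best_err = error(*best)

for _ in range(GENS):
    for k in range(3):              # evolve one subcomponent at a time
        for member in pops[k]:
            trial = list(best)
            trial[k] = member       # evaluate with the best collaborators
            e = error(*trial)
            if e < best_err:
                best, best_err = trial, e
        # next generation: mutate around the best-known subcomponent
        pops[k] = best[k] + rng.normal(0, SIGMA, (POP, 3))

print("XOR MSE:", best_err)
```

A finer encoding (one subpopulation per weight) or a coarser one (one per layer) changes only the decomposition; the framework's contribution is switching among these levels as training stalls.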
A reversibility analysis of encoding methods for spiking neural networks
Cameron Johnson, Sinchan Roychowdhury, G. Venayagamoorthy
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033443
Abstract: There is much excitement surrounding the idea of using spiking neural networks (SNNs) as the next generation of function-approximating neural networks. However, with the unique mechanism of communication (neural spikes) between neurons comes the challenge of transferring real-world data into the network for processing. Many different encoding methods have been developed for SNNs, most temporal and some spatial. This paper analyzes three of them (Poisson rate encoding, Gaussian receptor fields, and a dual-neuron n-bit representation) and tests whether the information is fully transformed into the spiking patterns. An oft-neglected consideration in encoding for SNNs is whether the real-world data is even truly being introduced to the network. By testing the reversibility of these encoding methods, this paper determines how completely the information is present in the pattern of spikes that serves as input to an SNN.
Citations: 9
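The reversibility test is easy to reproduce for the first of the three methods. The sketch below (peak rate, time step, and decoding rule are assumptions, not the paper's exact setup) encodes a scalar in [0, 1] as a Poisson spike train and decodes it from the spike count; the reconstruction error shrinks with the observation window, showing that rate encoding is only statistically, not exactly, reversible.

```python
import numpy as np

rng = np.random.default_rng(42)

R_MAX = 100.0   # assumed peak firing rate (Hz)
DT = 0.001      # 1 ms simulation time step

def poisson_encode(value, duration):
    """Bernoulli approximation of a Poisson spike train whose rate is
    proportional to the encoded value in [0, 1]."""
    steps = int(duration / DT)
    return rng.random(steps) < value * R_MAX * DT

def poisson_decode(spikes, duration):
    """Invert the encoding: spike count -> estimated rate -> value."""
    return spikes.sum() / (R_MAX * duration)

values = np.linspace(0.1, 0.9, 9)
for T in (0.1, 1.0, 10.0):
    err = max(abs(poisson_decode(poisson_encode(v, T), T) - v)
              for v in values)
    print(f"window {T:5.1f} s  max |decoded - original| = {err:.3f}")
```

Gaussian receptor fields and the dual-neuron n-bit code admit analogous decode-and-compare tests; the n-bit code is the one that can be exactly reversible.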
Generation of composed musical structures through recurrent neural networks based on chaotic inspiration
Andres Eduardo Coca Salazar, R. Romero, Liang Zhao
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033648
Abstract: In this work, an Elman recurrent neural network is used for automatic composition of musical structures in the style of a melody previously learned during the training phase. Furthermore, a small fragment of a chaotic melody is added to the input layer of the neural network as an inspiration source, to attain greater variability of melodies. The neural network is trained with the BPTT (backpropagation through time) algorithm. Several melodic measures are presented for characterizing the melodies produced by the network and for analyzing the effect of inserting chaotic inspiration, relative to the original melody's characteristics. Specifically, a melodic similarity measure is used to contrast the variability between the learned melody and each of the composed melodies obtained with different numbers of inspiration notes.
Citations: 21
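The "chaotic inspiration" input can be sketched independently of the Elman network: iterate a chaotic map and quantize each value onto a scale. The logistic map, scale choice, and quantization rule below are illustrative assumptions (the abstract does not specify the chaotic system); the point is that tiny changes in the seed yield very different fragments, which is what injects variability into the composed melodies.

```python
import numpy as np

def chaotic_fragment(length, x0=0.3, r=3.99):
    """Logistic-map trajectory quantized onto a C-major scale; a stand-in
    for the paper's chaotic inspiration fragment (details assumed)."""
    scale = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI notes, C4..C5
    x, notes = x0, []
    for _ in range(length):
        x = r * x * (1 - x)                    # chaotic iteration
        notes.append(scale[int(x * len(scale)) % len(scale)])
    return notes

print(chaotic_fragment(8))
```

Sensitivity to initial conditions means two nearly identical seeds diverge into distinct note sequences, so each inspiration fragment steers the trained network toward a different composition.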
Models of Clifford recurrent neural networks and their dynamics
Y. Kuroe
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033336
Abstract: Recently, models of neural networks in the real domain have been extended into high-dimensional domains such as the complex and quaternion domains, and several high-dimensional models have been proposed. These extensions are generalized by introducing Clifford algebra (geometric algebra). In this paper we extend conventional real-valued models of recurrent neural networks into the domain defined by Clifford algebra and discuss their dynamics. Since the geometric product is non-commutative, several different models can be considered. We propose three models of fully connected recurrent neural networks, which extend real-valued Hopfield-type neural networks to the domain defined by Clifford algebra. We also study the dynamics of the proposed models from the point of view of the existence conditions of an energy function, and discuss those conditions for two classes of Hopfield-type Clifford neural networks.
Citations: 43
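The reason several models arise is the non-commutativity the abstract mentions: multiplying a state by a weight on the left is not the same as on the right, so left-multiplication, right-multiplication, and two-sided variants give genuinely different networks. Quaternions are one concrete Clifford algebra where this is easy to check (the paper's three models themselves are not reproduced here):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions (w, x, y, z), a simple instance of
    the non-commutative geometric product."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

w = np.array([0.5, 0.1, -0.3, 0.2])   # a "weight"
s = np.array([1.0, 2.0, 0.0, -1.0])   # a "state"
print("w*s:", qmul(w, s))
print("s*w:", qmul(s, w))             # differs: distinct network models
```

Note the scalar components of `w*s` and `s*w` agree (the scalar part of the product is symmetric) while the vector components differ; energy-function arguments for such networks must account for exactly this asymmetry.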
A batch self-organizing maps algorithm based on adaptive distances
L. Pacífico, F. D. Carvalho
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033515
Abstract: Clustering methods aim to organize a set of items into clusters such that items within a given cluster have a high degree of similarity, while items belonging to different clusters have a high degree of dissimilarity. The self-organizing map (SOM) introduced by Kohonen is an unsupervised competitive-learning neural network method with both clustering and visualization properties, using a lateral neighborhood interaction function to discover the topological structure hidden in the data set. In this paper, we introduce a batch self-organizing map algorithm based on adaptive distances. Experimental results on real benchmark datasets show the effectiveness of our approach in comparison with traditional batch self-organizing map algorithms.
Citations: 2
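The classical batch SOM that the paper uses as its baseline alternates two steps per epoch: assign every point to its best-matching unit (BMU), then move each prototype to the neighborhood-weighted mean of all points. A minimal sketch (the adaptive-distance variant itself is not reproduced; grid size, schedule, and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])

GRID = 4                                     # 4x4 map
proto = rng.normal(0, 1, (GRID * GRID, 2))   # prototype vectors
coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)], float)

def neighborhood(bmu, sigma):
    """Gaussian lateral interaction on the map grid around the BMU."""
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

for epoch in range(20):
    sigma = 2.0 * 0.5 ** (epoch / 5)         # shrinking neighborhood radius
    # Batch step 1: assign each point to its best-matching unit.
    d = np.linalg.norm(X[:, None, :] - proto[None, :, :], axis=2)
    bmu = np.argmin(d, axis=1)
    # Batch step 2: move prototypes to the neighborhood-weighted means.
    H = np.stack([neighborhood(b, sigma) for b in bmu])   # (n, units)
    proto = (H.T @ X) / np.maximum(H.sum(axis=0), 1e-12)[:, None]

print("prototype x-range:", proto[:, 0].min(), proto[:, 0].max())
```

The adaptive-distance idea replaces the fixed Euclidean norm in the BMU assignment with per-cluster learned distance weights; only that assignment line would change.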
Visually-guided adaptive robot (ViGuAR)
Gennady Livitz, Heather Ames, Ben Chandler, A. Gorchetchnikov, Jasmin Léveillé, Zlatko Vasilkoski, Massimiliano Versace, E. Mingolla, G. Snider, R. Amerson, Dick Carter, H. Abdalla, M. Qureshi
Pub Date: 2011-10-03. DOI: 10.1109/IJCNN.2011.6033608
Abstract: A neural modeling platform known as Cog ex Machina (Cog), developed in the context of the DARPA SyNAPSE program, offers a computational environment that promises, in the foreseeable future, the creation of adaptive whole-brain systems subserving complex behavioral functions in virtual and robotic agents. Cog is designed to operate on low-powered, extremely storage-dense memristive hardware that would support massively parallel, scalable computations. We report an adaptive robotic agent, ViGuAR, that we developed as a neural model implemented on the Cog platform. The neuromorphic architecture of the ViGuAR brain is designed to support visually guided navigation and learning, which, in combination with the path-planning, memory-driven navigation agent MoNETA (also developed at the Neuromorphics Lab at Boston University), should effectively account for a wide range of key features of rodents' navigational behavior.
Citations: 2