2016 International Joint Conference on Neural Networks (IJCNN): Latest Publications

A comparison of action selection methods for implicit policy method reinforcement learning in continuous action-space
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-11-03 DOI: 10.1109/IJCNN.2016.7727688
Barry D. Nichols
{"title":"A comparison of action selection methods for implicit policy method reinforcement learning in continuous action-space","authors":"Barry D. Nichols","doi":"10.1109/IJCNN.2016.7727688","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727688","url":null,"abstract":"In this paper I investigate methods of applying reinforcement learning to continuous state- and action-space problems without a policy function. I compare the performance of four methods, one of which is the discretisation of the action-space, and the other three are optimisation techniques applied to finding the greedy action without discretisation. The optimisation methods I apply are gradient descent, Nelder-Mead and Newton's Method. The action selection methods are applied in conjunction with the SARSA algorithm, with a multilayer perceptron utilized for the approximation of the value function. The approaches are applied to two simulated continuous state- and action-space control problems: Cart-Pole and double Cart-Pole. The results are compared both in terms of action selection time and the number of trials required to train on the benchmark problems.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116863286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
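The comparison at the heart of this abstract, finding the greedy action by optimising a learned Q(s, a) over a continuous action variable, can be illustrated with a minimal sketch. The quadratic stand-in for the Q-function, the action bounds, and the grid resolution below are illustrative assumptions, not the paper's settings; in the paper the value function is an MLP trained with SARSA.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a learned Q(s, a); in the paper this would be an MLP
# trained with SARSA.  Here it is a smooth function with a known maximiser.
def q_value(state, action):
    return -(action - 0.3 * state) ** 2

state = 1.0
low, high = -2.0, 2.0  # assumed action range

# Method 1: discretise the action space and pick the best grid point.
grid = np.linspace(low, high, 101)
a_discrete = grid[np.argmax([q_value(state, a) for a in grid])]

# Method 2: numerical optimisation (here Nelder-Mead) of -Q with respect to the action.
result = minimize(lambda a: -q_value(state, a[0]), x0=[0.0], method="Nelder-Mead")
a_optimised = float(result.x[0])

print(a_discrete, a_optimised)  # both should be close to the true greedy action 0.3
```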
An experimental evaluation of echo state network for colour image segmentation
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-11-03 DOI: 10.1109/IJCNN.2016.7727326
Abdelkerim Souahlia, A. Belatreche, A. Benyettou, K. Curran
{"title":"An experimental evaluation of echo state network for colour image segmentation","authors":"Abdelkerim Souahlia, A. Belatreche, A. Benyettou, K. Curran","doi":"10.1109/IJCNN.2016.7727326","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727326","url":null,"abstract":"Image segmentation refers to the process of dividing an image into multiple regions which represent meaningful areas. Image segmentation is an essential step for most image analysis tasks such as object recognition and tracking, pattern recognition, content-based image retrieval, etc. In recent years, a large number of image segmentation algorithms have been developed, but achieving accurate segmentation still remains a challenging task. Recently, reservoir computing (RC) has drawn much attention in machine learning as a new model of recurrent neural networks (RNN). Echo State Network (ESN) represents one efficient realization of RC, which is initially designed to facilitate learning in Recurrent Neural Networks. In this paper we investigate the viability of ESN as feature extractor for pixel classification based colour image segmentation. Extensive experiments are conducted on real world colour image datasets and the global ESN reservoir parameters are varied to identify their operating ranges that allow the use of the reservoir nodes internal activations as new pixel features for the colour image segmentation task. A simple feed forward neural network is used to realize the ESN readout function and classify these new features. The experimental results show that the proposed method achieves high performance image segmentation comparing with state-of-the-art techniques. In addition, a set of empirically derived guidelines for setting the reservoir global parameters are proposed.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"121 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129408520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
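A minimal sketch of the mechanism the abstract describes: drive a fixed random reservoir with pixel values and take its internal activations as new pixel features for a separate readout classifier. The reservoir size, input scaling, and spectral radius below are assumptions; identifying workable ranges for these global parameters is exactly what the paper studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 50                                   # RGB input per pixel, assumed reservoir size
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))            # recurrent reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # rescale to an assumed spectral radius of 0.9

def reservoir_features(pixels):
    """Drive the reservoir with a sequence of pixel vectors and return
    the internal activations, used as new per-pixel features."""
    x = np.zeros(n_res)
    feats = []
    for u in pixels:
        x = np.tanh(W_in @ u + W @ x)
        feats.append(x.copy())
    return np.array(feats)

pixels = rng.random((10, n_in))        # a toy scanline of 10 RGB pixels
features = reservoir_features(pixels)  # shape (10, 50); fed to a feed-forward readout classifier
```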
Diverse, noisy and parallel: a New Spiking Neural Network approach for humanoid robot control
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-11-03 DOI: 10.1109/IJCNN.2016.7727325
Ricardo de Azambuja, A. Cangelosi, S. Adams
{"title":"Diverse, noisy and parallel: a New Spiking Neural Network approach for humanoid robot control","authors":"Ricardo de Azambuja, A. Cangelosi, S. Adams","doi":"10.1109/IJCNN.2016.7727325","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727325","url":null,"abstract":"How exactly our brain works is still an open question, but one thing seems to be clear: biological neural systems are computationally powerful, robust and noisy. Using the Reservoir Computing paradigm based on Spiking Neural Networks, also known as Liquid State Machines, we present results from a novel approach where diverse and noisy parallel reservoirs, totalling 3,000 modelled neurons, work together receiving the same averaged feedback. Inspired by the ideas of action learning and embodiment we use the safe and flexible industrial robot BAXTER in our experiments. The robot was taught to draw three different 2D shapes on top of a desk using a total of four joints. Together with the parallel approach, the same basic system was implemented in a serial way to compare it with our new method. The results show our parallel approach enables BAXTER to produce the trajectories to draw the learned shapes more accurately than the traditional serial one.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"384 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133363597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
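A toy sketch of the parallel idea: several diverse, noisy reservoir readouts are averaged into one motor command, which is also the shared feedback signal. The sizes, noise level, and linear readouts are invented for illustration; the paper's system uses spiking Liquid State Machines totalling 3,000 neurons.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_out, n_parallel = 100, 4, 6   # toy sizes; four outputs mimic the four controlled joints

# Each parallel reservoir has its own (here random, untrained) linear readout.
readouts = [rng.normal(0, 0.1, (n_out, n_res)) for _ in range(n_parallel)]

def joint_command(states):
    """Average the outputs of the parallel readouts into a single command,
    which is the one feedback signal shared by all reservoirs."""
    outputs = [W @ x for W, x in zip(readouts, states)]
    return np.mean(outputs, axis=0)

# Diverse, noisy reservoir states for one time step.
states = [rng.normal(0, 1, n_res) + rng.normal(0, 0.05, n_res) for _ in range(n_parallel)]
command = joint_command(states)        # e.g. four joint velocities sent to the robot
```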
Non-parametric Hidden Conditional Random Fields for action classification
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-11-03 DOI: 10.1109/IJCNN.2016.7727615
Natraj Raman, S. Maybank
{"title":"Non-parametric Hidden Conditional Random Fields for action classification","authors":"Natraj Raman, S. Maybank","doi":"10.1109/IJCNN.2016.7727615","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727615","url":null,"abstract":"Conditional Random Fields (CRF), a structured prediction method, combines probabilistic graphical models and discriminative classification techniques in order to predict class labels in sequence recognition problems. Its extension the Hidden Conditional Random Fields (HCRF) uses hidden state variables in order to capture intermediate structures. The number of hidden states in an HCRF must be specified a priori. This number is often not known in advance. A non-parametric extension to the HCRF, with the number of hidden states automatically inferred from data, is proposed here. This is a significant advantage over the classical HCRF since it avoids ad hoc model selection procedures. Further, the training and inference procedure is fully Bayesian eliminating the over fitting problem associated with frequentist methods. In particular, our construction is based on scale mixtures of Gaussians as priors over the HCRF parameters and makes use of Hierarchical Dirichlet Process (HDP) and Laplace distribution. The proposed inference procedure uses elliptical slice sampling, a Markov Chain Monte Carlo (MCMC) method, in order to sample optimal and sparse posterior HCRF parameters. The above technique is applied for classifying human actions that occur in depth image sequences - a challenging computer vision problem. Experiments with real world video datasets confirm the efficacy of our classification approach.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124485096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
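Of the components listed in the abstract, elliptical slice sampling is the most self-contained, and a generic sketch of it is given below. This is Murray, Adams and MacKay's general algorithm for any model with a zero-mean Gaussian prior; the toy Gaussian likelihood is an assumption standing in for the HCRF posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def elliptical_slice(f, prior_chol, log_lik):
    """One elliptical slice sampling update for a parameter vector f whose
    prior is a zero-mean Gaussian with Cholesky factor prior_chol."""
    nu = prior_chol @ rng.standard_normal(f.shape)       # auxiliary draw from the prior
    log_y = log_lik(f) + np.log(rng.uniform())           # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)   # candidate on the ellipse through f and nu
        if log_lik(f_new) > log_y:
            return f_new
        if theta < 0.0:                                  # shrink the bracket towards theta = 0
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)

# Toy example: N(0, I) prior, Gaussian likelihood centred at 1.
log_lik = lambda f: -0.5 * np.sum((f - 1.0) ** 2)
f = np.zeros(3)
for _ in range(100):
    f = elliptical_slice(f, np.eye(3), log_lik)
```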
Learning perceptual texture similarity and relative attributes from computational features
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-11-03 DOI: 10.1109/IJCNN.2016.7727516
Jianwen Lou, Lin Qi, Junyu Dong, Hui Yu, G. Zhong
{"title":"Learning perceptual texture similarity and relative attributes from computational features","authors":"Jianwen Lou, Lin Qi, Junyu Dong, Hui Yu, G. Zhong","doi":"10.1109/IJCNN.2016.7727516","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727516","url":null,"abstract":"Previous work has shown that perceptual texture similarity and relative attributes cannot be well described by computational features. In this paper, we propose to predict human's visual perception of texture images by learning a non-linear mapping from computational feature space to perceptual space. Hand-crafted features and deep features, which were successfully applied in texture classification tasks, were extracted and used to train Random Forest and rankSVM models against perceptual data from psychophysical experiments. Three texture datasets were used to test our proposed method and the experiments show that the predictions of such learnt models are in high correlation with human's results.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130857314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
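A minimal sketch of the regression half of the pipeline described above: a Random Forest learns a non-linear map from computational texture features to perceptual scores. The synthetic features and targets are placeholders for the paper's hand-crafted/deep features and psychophysical data, and the rankSVM branch is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder data: 200 texture images, 64-D computational features each,
# and one perceptual score per image from (simulated) observers.
X = rng.random((200, 64))
y = np.tanh(X[:, :4].sum(axis=1))            # synthetic "perceptual" target

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:150], y[:150])                  # train the non-linear feature-to-perception map
predicted = model.predict(X[150:])           # correlate these with held-out human judgements
```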
Verification of fraudulent PIN holders by brain waves
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-10-31 DOI: 10.1109/IJCNN.2016.7727454
Hiromichi Iwase, T. Horie, Y. Matsuyama
{"title":"Verification of fraudulent PIN holders by brain waves","authors":"Hiromichi Iwase, T. Horie, Y. Matsuyama","doi":"10.1109/IJCNN.2016.7727454","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727454","url":null,"abstract":"Brain waves, or electroencephalograms (EEGs), are applicable to user verification. We devise a two-factor system so that impersonators who hold identification numbers in fraudulence are detectable. In the first step, a subject either authentic or false tries to input a digit of a ten-key in the personal identification number by a P300 speller. The P300 speller is a brain-computer interface that detects positive voltage jump when a subject identifies specific digits on a display visually. By considering the performance of the P300 speller, we allow an error of one digit out of the four digits. On the other hand, we keep suspicion even for the case of perfect four digits because of the possibility of impersonation by a stolen case. Following the P300 spelling, we apply a verification of subjects by brain waves. Averaging of detected P300 waveforms after band-pass filtering takes the role of feature extraction. Then, a support vector machine applied to the averaged waveforms decides whether the subject is authentic or false. Thus, the total system does not entail the complexity of multimodality. For this system, we measured average error rates for 20 subjects. Experiments showed the false rejection rate of 3.9% at the false acceptance rate of 0% for the 4-digit number case. These pair values are successfully low even by using brain waves that usually contain many artifacts. Additionally, experiments on a diabetes patient before and after an insulin injection are also conducted. The result shows that the appropriate injection control maintains no difference from ordinary subjects. In concluding remarks, we consider methods to increase subjects and digits for applications in a larger society.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"71 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131456617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
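A condensed sketch of the verification stage described above: band-pass filter the recorded epochs, average them into one P300 feature vector per subject, and classify authentic versus false with an SVM. The sampling rate, filter band, and synthetic epochs are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

fs = 256.0                                            # assumed EEG sampling rate (Hz)
b, a = butter(4, [0.5, 12.0], btype="band", fs=fs)    # assumed P300 pass-band

def averaged_p300(epochs):
    """Band-pass filter each epoch and average them into one feature vector."""
    filtered = np.array([filtfilt(b, a, e) for e in epochs])
    return filtered.mean(axis=0)

rng = np.random.default_rng(0)
# Synthetic stand-ins: 40 subjects, 10 one-second epochs each, half authentic, half impostors.
X = np.array([averaged_p300(rng.standard_normal((10, 256))) for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)                     # 0 = authentic, 1 = false subject

clf = SVC(kernel="rbf").fit(X, y)                     # the verification decision
```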
A deep quasi-linear kernel composition method for support vector machines
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-10-31 DOI: 10.1109/IJCNN.2016.7727394
Weite Li, Jinglu Hu, Benhui Chen
{"title":"A deep quasi-linear kernel composition method for support vector machines","authors":"Weite Li, Jinglu Hu, Benhui Chen","doi":"10.1109/IJCNN.2016.7727394","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727394","url":null,"abstract":"In this paper, we introduce a data-dependent kernel called deep quasi-linear kernel, which can directly gain a profit from a pre-trained feedforward deep network. Firstly, a multi-layer gated bilinear classifier is formulated to mimic the functionality of a feed-forward neural network. The only difference between them is that the activation values of hidden units in the multi-layer gated bilinear classifier are dependent on a pre-trained neural network rather than a pre-defined activation function. Secondly, we demonstrate the equivalence between the multi-layer gated bilinear classifier and an SVM with a deep quasi-linear kernel. By deriving a kernel composition function, traditional optimization algorithms for a kernel SVM can be directly implemented to implicitly optimize the parameters of the multi-layer gated bilinear classifier. Experimental results on different data sets show that our proposed classifier obtains an ability to outperform both an SVM with a RBF kernel and the pre-trained feedforward deep network.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126909998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
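The abstract does not give the kernel composition function itself, so the sketch below only illustrates the general pattern it describes: build a data-dependent kernel from a pre-trained network's hidden activations and hand it to a standard SVM solver as a precomputed kernel. The particular product form used here, and the random stand-in for the pre-trained layer, are assumptions rather than the paper's construction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
W_hidden = rng.normal(0, 1, (16, 8))    # stands in for one layer of a pre-trained network

def activations(X):
    return 1.0 / (1.0 + np.exp(-X @ W_hidden.T))   # hidden-unit activations (sigmoid gates)

def deep_quasi_linear_kernel(X1, X2):
    """Illustrative data-dependent kernel: a linear kernel on the inputs,
    modulated by the similarity of the pre-trained hidden activations."""
    linear = X1 @ X2.T
    gate = activations(X1) @ activations(X2).T
    return (1.0 + linear) * (1.0 + gate)            # a product of valid kernels is a valid kernel

X = rng.normal(0, 1, (100, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(int)             # toy non-linearly separable labels

K = deep_quasi_linear_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)           # a standard SVM solver optimises the composed kernel
```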
Efficient SpiNNaker simulation of a heteroassociative memory using the Neural Engineering Framework
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-10-31 DOI: 10.1109/IJCNN.2016.7727888
James C. Knight, Aaron R. Voelker, Andrew Mundy, C. Eliasmith, S. Furber
{"title":"Efficient SpiNNaker simulation of a heteroassociative memory using the Neural Engineering Framework","authors":"James C. Knight, Aaron R. Voelker, Andrew Mundy, C. Eliasmith, S. Furber","doi":"10.1109/IJCNN.2016.7727888","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727888","url":null,"abstract":"The biological brain is a highly plastic system within which the efficacy and structure of synaptic connections are constantly changing in response to internal and external stimuli. While numerous models of this plastic behavior exist at various levels of abstraction, how these mechanisms allow the brain to learn meaningful values is unclear. The Neural Engineering Framework (NEF) is a hypothesis about how large-scale neural systems represent values using populations of spiking neurons, and transform them using functions implemented by the synaptic weights between populations. By exploiting the fact that these connection weight matrices are factorable, we have recently shown that static NEF models can be simulated very efficiently using the SpiNNaker neuromorphic architecture. In this paper, we demonstrate how this approach can be extended to efficiently support both supervised and unsupervised learning rules designed to operate on these factored matrices. We then present a heteroassociative memory architecture built using these learning rules and prove that it is capable of learning a human-scale semantic network. Finally we demonstrate a 100 000 neuron version of this architecture running on the SpiNNaker simulator with a speed-up exceeding 150x when compared to the Nengo reference simulator.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133290756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
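The factorisation the abstract relies on can be shown in a few lines: an all-to-all NEF weight matrix between two populations is the product of per-neuron encoders and decoders, so the same input currents can be computed without ever forming, storing, or transmitting the full matrix. The population sizes and random factors below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, d = 1000, 1000, 2              # neurons per population, represented dimensions

decoders = rng.normal(0, 0.1, (d, n_pre))     # decode pre-population activity into a d-D value estimate
encoders = rng.normal(0, 1.0, (n_post, d))    # project that value onto the post-population neurons
activity = rng.random(n_pre)                  # firing rates of the pre population

# Full-matrix view: W = encoders @ decoders has shape (n_post, n_pre).
# The factored form computes the same input currents with O(n*d) work and memory:
current_factored = encoders @ (decoders @ activity)
current_full = (encoders @ decoders) @ activity       # O(n*n), which the factorisation avoids

assert np.allclose(current_factored, current_full)
```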
Wind ramp event prediction with parallelized gradient boosted regression trees
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-10-17 DOI: 10.1109/IJCNN.2016.7727900
Saurav Gupta, N. Shrivastava, A. Khosravi, B. K. Panigrahi
{"title":"Wind ramp event prediction with parallelized gradient boosted regression trees","authors":"Saurav Gupta, N. Shrivastava, A. Khosravi, B. K. Panigrahi","doi":"10.1109/IJCNN.2016.7727900","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727900","url":null,"abstract":"Accurate prediction of wind ramp events is critical for ensuring the reliability and stability of the power systems with high penetration of wind energy. This paper proposes a classification based approach for estimating the future class of wind ramp event based on certain thresholds. A parallelized gradient boosted regression tree based technique has been proposed to accurately classify the normal as well as rare extreme wind power ramp events. The model has been validated using wind power data obtained from the National Renewable Energy Laboratory database. Performance comparison with several benchmark techniques indicates the superiority of the proposed technique in terms of superior classification accuracy.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131222100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
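A toy sketch of the classification framing described above: power-change features are binned by thresholds into ramp classes and a gradient boosted tree ensemble predicts the class. The synthetic data, thresholds, and scikit-learn's GradientBoostingClassifier are stand-ins for the paper's NREL data and its parallelized GBRT implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: hourly wind-power changes plus two extra weather-like features.
delta_power = rng.normal(0, 0.2, 2000)
X = np.column_stack([delta_power, rng.random(2000), rng.random(2000)])
y = np.digitize(delta_power, [-0.3, 0.3])     # 0 = down-ramp, 1 = normal, 2 = up-ramp (assumed thresholds)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X[:1500], y[:1500])
print(clf.score(X[1500:], y[1500:]))          # held-out classification accuracy
```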
Deep Random Vector Functional Link Network for handwritten character recognition
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-10-03 DOI: 10.1109/IJCNN.2016.7727666
H. Cecotti
{"title":"Deep Random Vector Functional Link Network for handwritten character recognition","authors":"H. Cecotti","doi":"10.1109/IJCNN.2016.7727666","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727666","url":null,"abstract":"The field of artificial neural networks has a long history of several decades, where the theoretical contributions have progressed with advances in terms of power and memory in present day computers. Some old methods are now rebranded or represented, taking advantage of the power of present day computers. More particularly, we consider the current trend of Random Vector Functional Link Networks, which suggests that the architecture of a system and the learning algorithm should be properly decoupled. In this paper, we evaluate the performance of multi-layers Random Vector Functional Link Network (RVFL)/ extreme machine learning (EML) on four databases of handwritten characters. Particularly, we evaluate the impact of the architecture (number of neurons per hidden layer), and the robustness of the distribution of the results across different runs. By combining the classifier outputs from different runs, we show that such a maximum combination rule provides an accuracy of 95.97% for Arabic digits, 98.03% for Bangla, 98.64% for Devnagari, and 96.30% for Oriya digits. The results confirm that increasing the size of the hidden layers has a significant impact on the accuracy, and allows to reach state-of-the-art performance; however the performance reaches a plateau after a certain size of the hidden layers.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131917490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
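A single-hidden-layer sketch of the RVFL idea the abstract builds on: hidden weights are random and fixed (decoupling architecture from learning), direct input-to-output links are concatenated with the hidden features, and only the output weights are solved for in closed form. The layer size, ridge term, and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rvfl(X, Y, n_hidden=500, ridge=1e-3):
    """Random Vector Functional Link network: random fixed hidden layer,
    direct input-output links, output weights by ridge regression."""
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H])                     # direct links concatenated with hidden features
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return (D @ beta).argmax(axis=1)          # class with the highest output score

# Toy stand-in for handwritten-character features: 200 samples, 64-D, 10 classes.
X = rng.random((200, 64))
labels = rng.integers(0, 10, 200)
Y = np.eye(10)[labels]                        # one-hot targets
W, b, beta = train_rvfl(X, Y)
predictions = predict_rvfl(X, W, b, beta)
```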