2016 International Joint Conference on Neural Networks (IJCNN): Latest Publications

Transfer learning for image classification with incomplete multiple sources
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727470
Zhengming Ding, Ming Shao, Y. Fu
Abstract: Transfer learning plays a powerful role in mitigating the discrepancy between test data (target) and auxiliary data (source). It is often the case that multiple sources are available in transfer learning. However, naively combining multiple sources does not lead to valid results, since doing so can also introduce negative transfer. Furthermore, each single source may not cover all the labels of the target data. In this paper, we consider the problem of how to better utilize multiple incomplete sources for effective knowledge transfer. To this end, we propose a Bi-directional Low-Rank Transfer learning framework (BLRT). First, we adapt conventional low-rank transfer learning to the multi-source knowledge transfer scenario. Second, an iterative structure-learning scheme is proposed to better exploit prior knowledge in the transfer-learning coefficient matrix. Third, a cross-source regularizer is added to couple the same labels across the incomplete sources, so that they can jointly compensate for each other's missing data. Experimental results on three groups of databases, including face and object images, demonstrate that our method can inherit knowledge from multiple incomplete sources and successfully adapt to the target data.
Citations: 14
Spectral analysis and artificial neural network based classification of three mental states for brain machine interface applications
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727456
Trongmun Jiralerspong, Sato Fumiya, Chao Liu, J. Ishikawa
Abstract: A brain machine interface (BMI) is an emerging technology that aims to assist people with disabilities as well as the aged by allowing users to intuitively control external devices by intent alone. This paper presents a signal processing technique for a low-cost BMI that uses spectral analysis and an artificial neural network (ANN) to classify three mental states from electroencephalographic (EEG) signals. In this study, a BMI system is prototyped to classify the intention of moving an object up, moving it down, or remaining at rest. EEG signals are recorded using a consumer-grade acquisition device; the device is equipped with 14 electrodes, but only 8 are used in this study. To evaluate system performance, online classification experiments are conducted with three subjects, using true positive and false positive rates as evaluation indices. The results show that despite the difficulty of the mental tasks, the proposed method achieves an overall true positive rate of up to 67% with 15 minutes of training time by a first-time BMI user. Furthermore, offline analysis on the same EEG data explores ways of using spectral analysis and the ANN to reduce erroneous classifications. The analysis shows that raising the classification threshold reduces the false positive rate. Another finding, in contrast with results reported by other research teams, is that using multiple ANNs to classify the three mental states does not improve accuracy. Lastly, a Hamming window size of 64 samples is found to be optimal for achieving real-time control when performing spectral analysis.
Citations: 2
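The spectral front end described above (a 64-sample Hamming window over 8 of the 14 electrodes) can be sketched as follows. The 128 Hz sampling rate and the flattened feature layout are illustrative assumptions, not details from the paper:

```python
import numpy as np

def band_powers(eeg_frame, fs=128.0):
    """Power spectrum of one short EEG frame, per channel.

    eeg_frame: array of shape (n_channels, 64), one frame per channel,
    matching the paper's 64-sample window finding.
    Returns (freqs, power) with power of shape (n_channels, 33).
    """
    n = eeg_frame.shape[-1]
    windowed = eeg_frame * np.hamming(n)       # taper to reduce spectral leakage
    spectrum = np.fft.rfft(windowed, axis=-1)  # one-sided FFT
    power = (np.abs(spectrum) ** 2) / n        # periodogram estimate
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, power

# Feature vector for the ANN: concatenate per-channel spectral powers.
rng = np.random.default_rng(0)
frame = rng.standard_normal((8, 64))           # 8 of the 14 electrodes
freqs, power = band_powers(frame)
features = power.reshape(-1)                   # shape (8 * 33,) = (264,)
```

Each 64-sample frame yields 33 one-sided frequency bins per channel; stacking all channels gives one fixed-length feature vector per classification decision.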
Deep Adaptive Resonance Theory for learning biologically inspired episodic memory
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727883
Gyeong-Moon Park, Jong-Hwan Kim
Abstract: Biologically inspired episodic memory can store time-sequential events and recall all of them from partial information. Because of these advantages, concepts from biological episodic memory have been applied in many systems. In this research, we propose a new memory model, called Deep ART (Adaptive Resonance Theory), to build a robust memory system for learning episodic memory. Deep ART has an attribute field in the bottom layer, newly designed to capture semantic information about the inputs. After all inputs are encoded with their features, events are categorized in the event field using the specified inputs. Since an episode is a temporal sequence of events, Deep ART builds event sequences with the proposed sequence encoding and decoding processes, which can encode any temporal sequence of events, even when events are duplicated within an episode. Moreover, based on an analysis of retrieval error, Deep ART does not use complement coding for partial inputs, which enhances the accuracy of episode retrieval from partial cues. Simulation results demonstrate the effectiveness of Deep ART as a long-term memory.
Citations: 15
Node label matching improves classification performance in Deep Belief Networks
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727395
Allan Campbell, V. Ciesielski, A. K. Qin
Abstract: If the output signals of an artificial neural network classifier are interpreted per node as class-label predictors, then the partial knowledge encoded by the network during learning can be exploited to reassign which output node represents each class label, improving both learning speed and final classification accuracy. Our method computes these reassignments from the maximum average correlation between actual node outputs and target labels over a small labeled validation dataset. Node Label Matching is an ancillary method for both supervised and unsupervised learning in artificial neural networks, and we demonstrate its integration with Contrastive Divergence pre-training in Restricted Boltzmann Machines and backpropagation fine-tuning in Deep Belief Networks. We introduce the Segmented Density Random Binary dataset and present empirical results of Node Label Matching on both our synthetic data and a subset of the MNIST benchmark.
Citations: 0
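The reassignment step can be illustrated with a small sketch: correlate each output node with each one-hot label column over a validation set, then pick the node-to-label assignment with maximum average correlation. The brute-force permutation search here is an illustrative choice; the abstract does not specify the search procedure, and we assume as many output nodes as classes:

```python
import itertools
import numpy as np

def match_node_labels(outputs, targets):
    """Assign output nodes to class labels by maximum average correlation.

    outputs: (n_samples, n_nodes) node activations on a labeled validation set.
    targets: (n_samples, n_classes) one-hot labels; assumes n_nodes == n_classes.
    Returns perm where perm[j] is the output node assigned to class j.
    """
    n = targets.shape[1]
    full = np.corrcoef(outputs.T, targets.T)   # stacked (nodes + labels) matrix
    corr = full[:n, n:]                        # corr[i, j]: node i vs label j
    best, best_score = None, -np.inf
    for perm in itertools.permutations(range(n)):
        score = np.mean([corr[perm[j], j] for j in range(n)])
        if score > best_score:
            best, best_score = perm, score
    return best

# Toy check: node 0 tracks class 1, node 1 tracks class 2, node 2 tracks class 0.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=200)
targets = np.eye(3)[labels]
outputs = targets[:, [1, 2, 0]] + 0.1 * rng.standard_normal((200, 3))
perm = match_node_labels(outputs, targets)
```

Brute force is exponential in the number of classes; for larger label sets an optimal assignment solver (e.g. the Hungarian algorithm) would be the natural substitute.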
Co-clustering enterprise social networks
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727187
Ruiqi Hu, Shirui Pan, Guodong Long, Xingquan Zhu, Jing Jiang, Chengqi Zhang
Abstract: An enterprise social network (ESN) involves diversified user groups, from producers, suppliers, and logistics to end consumers, whose users differ in scale, have broad interests, and pursue various objectives such as advertising, branding, and customer relationship management. Such a highly diversified network also features rich content, including recruiting messages, advertisements, news releases, and customer complaints. Given this complexity, an immediate need is to organize a chaotic enterprise social network into functional groups, where each group corresponds to a set of peers with business interactions and common objectives, and then to understand the business role of each group, such as its common interests and the key features that distinguish it from other groups. In this paper, we argue that due to the unique characteristics of enterprise social networks, simply clustering ESN nodes or applying existing topic discovery methods cannot effectively discover functional groups or explain their roles. Instead, we propose CENFLD, which co-clusters enterprise social networks for functional group discovery and understanding. CENFLD is a co-factorization based framework that combines network topology with rich content information, including interactions between nodes and correlations between node content, to discover functional user groups. Because the number of functional groups is highly data-driven and hard to estimate, CENFLD employs a hold-out test principle to find the group number that best fits the underlying data. Experiments and comparisons with state-of-the-art approaches on 13 real-world enterprise/organizational networks validate the performance of CENFLD.
Citations: 10
Heuristic dynamic programming for mobile robot path planning based on Dyna approach
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727679
Seaar Al Dabooni, D. Wunsch
Abstract: This paper presents a direct heuristic dynamic programming (HDP) method based on Dyna planning (Dyna_HDP) for online model learning in a Markov decision process. The technique combines HDP policy learning with a Dyna agent to speed up learning. We evaluate Dyna_HDP on a differential-drive wheeled mobile robot navigating a 2D maze, comparing it against traditional reinforcement learning algorithms, namely one-step Q-learning, Sarsa(λ), and Dyna_Q, under the same benchmark conditions. We demonstrate that Dyna_HDP finds a near-optimal path faster than the other algorithms, with high stability. In addition, we confirm that Dyna_HDP can be applied to a multi-robot path planning problem: a common virtual environment model is learned from the robots' shared experiences, which significantly reduces the learning time.
Citations: 15
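Dyna_Q, one of the baselines compared above, illustrates the Dyna planning loop that Dyna_HDP builds on: each real transition updates both the value function and a learned model, and the model then replays simulated transitions to speed up learning. A minimal tabular sketch, with a toy corridor environment standing in for the 2D maze:

```python
import random

def dyna_q(env_step, n_states, n_actions, episodes=50, planning_steps=10,
           alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q: one real step updates Q and a deterministic model,
    then `planning_steps` simulated updates replay stored transitions.
    env_step(s, a) -> (next_state, reward, done); a deterministic MDP is
    assumed so the model stores a single (s', r) per (s, a)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}                                   # (s, a) -> (s', r)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            s2, r, done = env_step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            model[(s, a)] = (s2, r)
            for _ in range(planning_steps):      # planning from the model
                ps, pa = random.choice(list(model))
                ps2, pr = model[(ps, pa)]
                Q[ps][pa] += alpha * (pr + gamma * max(Q[ps2]) - Q[ps][pa])
            s = s2
    return Q

def corridor(s, a):
    """Toy 5-state corridor: action 1 moves right, 0 moves left;
    reward 1 on reaching the terminal rightmost state."""
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

random.seed(0)
Q = dyna_q(corridor, n_states=5, n_actions=2)
```

After training, the greedy policy moves right from every non-terminal state. Dyna_HDP replaces the tabular value table with HDP's neural critic/actor, but the real-step-plus-planning structure is the same.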
Hyperbolic linear units for deep convolutional neural networks
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727220
Jia Li, Hua Xu, Junhui Deng, Xiaomin Sun
Abstract: Recently, rectified linear units (ReLUs) have been used to address the vanishing gradient problem, leading to state-of-the-art results in problems such as image classification. In this paper, we propose hyperbolic linear units (HLUs), which both speed up the learning process in deep convolutional neural networks and improve performance on image classification tasks. Unlike ReLUs, HLUs inherently produce negative values, which can push mean unit outputs closer to zero. Mean outputs close to zero speed up learning because they bring the normal gradient closer to the natural gradient; indeed, the difference between the two, called the bias shift, is related to the mean activation of the input units. Experiments with three popular CNN architectures, LeNet, Inception networks, and ResNet, on benchmarks including MNIST, CIFAR-10, and CIFAR-100 demonstrate that the proposed HLUs achieve significant improvements over other commonly used activation functions.
Citations: 4
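A minimal sketch of a hyperbolic-style unit with the properties the abstract describes: identity on positive inputs, and a saturating negative branch that pulls the mean activation toward zero. The abstract does not give the exact HLU formula, so the tanh negative branch here is an assumption used only for illustration:

```python
import numpy as np

def hlu(x):
    """Hyperbolic-style linear unit (illustrative, NOT the paper's exact
    definition): identity for positive inputs, tanh for negative ones,
    so outputs can be negative and the mean activation moves toward zero."""
    return np.where(x > 0, x, np.tanh(x))

x = np.linspace(-4, 4, 9)
y = hlu(x)
relu_mean = np.maximum(x, 0).mean()   # ReLU mean on the same inputs
```

On a symmetric input range, the negative branch offsets the positive side, so the mean output sits closer to zero than ReLU's, which is exactly the bias-shift argument made above.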
Fast Entropy Clustering of sparse high dimensional binary data
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727497
Marek Śmieja, S. Nakoneczny, J. Tabor
Abstract: We introduce Sparse Entropy Clustering (SEC), which uses a minimum entropy criterion to split high-dimensional binary vectors into groups. The idea rests on the analogy between clustering and data compression: every group is represented by a single encoder that provides its optimal compression. Following the Minimum Description Length principle, the clustering criterion includes both the cost of encoding the elements within clusters and the cost of identifying the clusters. The proposed model is adapted to the sparse structure of the data: instead of encoding all coordinates, only the non-zero ones are stored, which significantly reduces the computational cost of data processing. Our theoretical and experimental analysis shows that SEC works well with imbalanced data, minimizes the average entropy within clusters, and can select the correct number of clusters.
Citations: 1
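The compression view of clustering can be sketched as an MDL-style cost: each coordinate within a cluster is modeled as a Bernoulli variable whose coding cost is its empirical entropy, plus the cost of identifying each point's cluster. This is an illustrative reading of the criterion, not the paper's exact formulation (in particular, it ignores the non-zero-only sparse encoding):

```python
import numpy as np

def cluster_code_length(X):
    """Coding cost in bits per point for one cluster of binary vectors:
    each coordinate is a Bernoulli variable costing its empirical entropy.
    X: (n_points, dim) 0/1 array."""
    p = X.mean(axis=0)                     # empirical P(bit = 1) per coordinate
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return np.nan_to_num(h).sum()          # convention: 0 * log 0 = 0

def total_cost(X, labels):
    """MDL-style objective: expected within-cluster coding cost plus the
    cost of identifying each point's cluster (entropy of cluster sizes)."""
    n = len(labels)
    cost = 0.0
    for k in np.unique(labels):
        members = X[labels == k]
        pk = len(members) / n
        cost += pk * (cluster_code_length(members) - np.log2(pk))
    return cost

# A correct 2-way split of homogeneous groups costs only the 1-bit cluster
# id, while lumping everything together pays full per-coordinate entropy.
X = np.array([[1, 1, 0, 0]] * 5 + [[0, 0, 1, 1]] * 5)
good = np.array([0] * 5 + [1] * 5)
bad = np.zeros(10, dtype=int)
```

Minimizing this cost over assignments is the clustering step; letting the identification term penalize extra clusters is what allows the model to select the number of clusters.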
How complex-valued multilayer perceptron can predict the behavior of deterministic chaos
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727736
Seiya Satoh, R. Nakano
Abstract: A complex-valued multilayer perceptron can represent complicated periodicity. We employ a powerful learning method called C-SSF to train a complex-valued multilayer perceptron; C-SSF finds a series of excellent solutions through successive learning. In deterministic chaos, long-term prediction is considered impossible. We apply C-SSF to two kinds of deterministic chaos and evaluate its learning and prediction performance.
Citations: 0
Evolving Spiking Neural Networks of artificial creatures using Genetic Algorithm
2016 International Joint Conference on Neural Networks (IJCNN). Pub Date: 2016-07-24. DOI: 10.1109/IJCNN.2016.7727228
E. Eskandari, A. Ahmadi, S. Gomar, M. Ahmadi, M. Saif
Abstract: This paper presents a Genetic Algorithm (GA) based evolution framework in which the Spiking Neural Networks (SNNs) of a single artificial creature, or of a colony of creatures, are evolved for a higher chance of survival in a virtual environment. The artificial creatures are composed of randomly connected Izhikevich spiking reservoir neural networks. Inspired by biological neurons, the neuronal connections have different axonal conduction delays. Simulation results show that the evolutionary algorithm can find or synthesize artificial creatures that survive successfully in the environment, and that the colony approach outperforms a single complex creature.
Citations: 8