2016 International Joint Conference on Neural Networks (IJCNN): Latest Publications

Relational Fisher Analysis: A general framework for dimensionality reduction
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727477
G. Zhong, Yaxin Shi, M. Cheriet
Abstract: In this paper, we propose a novel and general framework for dimensionality reduction, called Relational Fisher Analysis (RFA). Unlike traditional dimensionality reduction methods, such as linear discriminant analysis (LDA) and marginal Fisher analysis (MFA), RFA seamlessly integrates relational information among data into the representation learning framework, which in general provides strong evidence for related data to belong to the same class. To address nonlinear dimensionality reduction problems, we extend RFA to its kernel version. Furthermore, the convergence of RFA is also proved in this paper. Extensive experiments on document understanding and recognition, face recognition, and other applications from the UCI machine learning repository demonstrate the effectiveness and efficiency of RFA.
Citations: 6
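The abstract describes RFA only at a high level, so the following is a minimal sketch of the general idea it builds on: a Fisher-style trace-ratio embedding (as in LDA/MFA) augmented with a relational affinity term. The function name `relational_fisher_embed`, the graph-Laplacian form of the relational term, and the weight `alpha` are assumptions added here for illustration; this is not the authors' exact RFA objective or its kernel version.

```python
import numpy as np
from scipy.linalg import eigh

def relational_fisher_embed(X, y, R, alpha=0.5, n_components=2):
    """Schematic Fisher-style embedding with a relational affinity term.

    X : (n, d) data matrix, y : (n,) integer labels,
    R : (n, n) symmetric affinity matrix, R[i, j] > 0 when samples i and j
        are known to be related.
    Illustrative simplification only, not the RFA formulation of the paper.
    """
    n, d = X.shape
    mean_total = X.mean(axis=0)
    Sw = np.zeros((d, d))          # within-class scatter
    Sb = np.zeros((d, d))          # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_total, mc - mean_total)

    # Relational term: pull related samples together via a graph Laplacian.
    L = np.diag(R.sum(axis=1)) - R
    Sr = X.T @ L @ X

    # Maximize class separation while keeping related samples close:
    # generalized eigenproblem  Sb v = lambda (Sw + alpha * Sr + eps * I) v.
    A = Sw + alpha * Sr + 1e-6 * np.eye(d)
    evals, evecs = eigh(Sb, A)
    W = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return X @ W, W
```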
A novel homeostatic plasticity model realized by random fluctuations in excitatory synapses
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727315
Takashi Matsubara, K. Uehara
Abstract: Homeostatic plasticity in the mammalian central nervous system is considered to maintain activity in neuronal circuits within a functional range. In the absence of homeostatic plasticity, neuronal activity is prone to destabilization because correlation-based synaptic modification, Hebbian plasticity, induces positive-feedback change. Several studies on homeostatic plasticity have assumed the existence of a process that monitors neuronal activity and adjusts synaptic efficacy on a time scale of hours, but its biological mechanism remains unclear. Excitatory synaptic efficacy is associated with the size of a post-synaptic element, the dendritic spine, and dendritic spine size fluctuates even after neuronal activity is silenced. These fluctuations could be a non-Hebbian form of synaptic plasticity that serves such a homeostatic function. This study proposed and analyzed a synaptic plasticity model incorporating random fluctuations and Hebbian plasticity at excitatory synapses, and found that it prevents excessive changes in neuronal activity by adjusting synaptic efficacy. Random fluctuations do not monitor neuronal activity, but their relative influence depends on neuronal activity. The proposed synaptic plasticity model thus acts as a form of homeostatic plasticity without requiring activity monitoring. Random fluctuations therefore play an important role in homeostatic plasticity and contribute to the development and functions of neural networks.
Citations: 0
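As a sandbox for the two ingredients the abstract names (Hebbian plasticity plus activity-independent fluctuations of excitatory weights), here is a toy rate-based simulation. The multiplicative form of the fluctuation, the downward drift, and all parameters are assumptions added here; no claim is made that this reproduces the paper's model or its homeostatic result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rate neuron: y = w . x, with a Hebbian term plus intrinsic weight
# fluctuations that act whether or not the neuron is driven.
n_syn, T = 100, 20000
eta = 1e-4                 # Hebbian learning rate (positive feedback)
sigma = 0.02               # fluctuation amplitude (assumed)
drift = -0.5 * sigma ** 2  # weak downward drift of log-weight (assumed)

w = rng.uniform(0.1, 0.3, n_syn)
for t in range(T):
    x = rng.poisson(2.0, n_syn) * 0.1                # presynaptic rates
    y = max(float(w @ x), 0.0)                       # postsynaptic rate
    w += eta * x * y                                 # Hebbian term
    # Activity-independent random fluctuation of synaptic size.
    w *= np.exp(drift + sigma * rng.standard_normal(n_syn))
    w = np.clip(w, 0.0, None)

print("mean weight:", w.mean(), "postsynaptic rate:", y)
```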
Sonar-based place recognition using joint sparse coding method
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727701
Xiangmei Zheng, Huaping Liu, F. Sun, Meng Gao, Jiakui Li, Qing Zhang
Abstract: The problem of place recognition is central to robot navigation. The robot must be able to recognize, or at least estimate the likelihood, that it has been at a place before when it returns to a previously visited location. We cast the place recognition problem as one of classifying among multiple linear regression models, and argue that new theory from sparse signal representation offers the key to addressing it. In this paper, a joint kernel sparse coding model is developed to tackle the multivariate sonar-sample place recognition problem. The experimental results show that joint sparse coding achieves better performance than the 1-Nearest Neighbor (1-NN) method.
Citations: 3
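The "classifying among multiple linear regression models" framing is the standard sparse-representation classification (SRC) idea, sketched below as a hedged baseline: the paper's joint *kernel* sparse coding additionally couples the codes of several sonar measurements, which this sketch does not do. The function name `src_classify` and the Lasso solver are placeholder choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, x, lam=0.01):
    """Sparse-representation classification (SRC) baseline.

    D : (d, n) dictionary whose columns are training sonar descriptors,
    labels : (n,) place label of each column, x : (d,) test descriptor.
    """
    Dn = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
    coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    coder.fit(Dn, x)               # solve min ||x - Dn a||^2 + lam ||a||_1
    a = coder.coef_
    # Classify by the class whose atoms reconstruct x with least residual.
    residuals = {}
    for c in np.unique(labels):
        a_c = np.where(labels == c, a, 0.0)
        residuals[c] = np.linalg.norm(x - Dn @ a_c)
    return min(residuals, key=residuals.get)
```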
On the energy benefits of spiking deep neural networks: A case study
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727303
Bing Han, Abhronil Sengupta, K. Roy
Abstract: Deep learning neural networks have achieved success in a large number of visual processing tasks and are currently used in many real-world applications such as image search and speech recognition. However, despite achieving high accuracy in such classification problems, they demand significant computational resources. Over the past few years, artificial neural network models have evolved into biologically realistic, event-driven spiking neural networks. Recent research efforts have been directed at developing mechanisms to convert traditional deep artificial nets into spiking nets in which the neurons communicate by means of spikes. However, there have been limited studies providing insight into the specific power, area, and energy benefits offered by deep spiking neural nets in comparison to their non-spiking counterparts. In this paper, we perform a case study of a hardware implementation of a spiking/non-spiking deep net on the MNIST dataset and clearly outline the design prospects involved in implementing neural computing platforms in the spiking mode of operation.
Citations: 30
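The conversion idea referenced in the abstract is commonly realized with rate coding: an integrate-and-fire (IF) unit whose firing rate approximates a ReLU activation, so synaptic work scales with spike counts rather than with dense multiply-accumulates. The sketch below shows only that rate-coding approximation; the threshold, time window, and scaling are assumptions added here, and the paper's hardware energy analysis is not reproduced.

```python
import numpy as np

def if_rate(inputs, weights, T=200, v_th=1.0):
    """Rate-coded integrate-and-fire approximation of a ReLU unit.

    Integrates a constant weighted input and spikes whenever the membrane
    potential crosses v_th (reset by subtraction). For drives scaled to
    lie below v_th per step, spikes / T approximates max(0, w . x).
    """
    v, spikes = 0.0, 0
    drive = float(np.dot(weights, inputs))
    for _ in range(T):
        v += drive
        if v >= v_th:
            v -= v_th
            spikes += 1
    return spikes / T

rng = np.random.default_rng(1)
w = rng.normal(0, 0.3, 8)
x = rng.uniform(0, 1, 8) * 0.2
print("ReLU:", max(0.0, float(w @ x)), " IF rate:", if_rate(x, w))
```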
Background modeling on depth video sequences using self-organizing retinotopic maps
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727319
M. Murguia, Oscar Alejandro Chavez-Montes, J. Ramirez-Quintana
Abstract: A depth video background modeling method based on a cascade retinotopic map is presented in this paper. The proposed scheme is intended for use in a virtual therapy system attending children with various forms of restricted arm movement. The method involves two retinotopic maps with different background modeling capacities. The neural network scheme was tested in several laboratory and real-world therapy scenarios. The model achieved acceptable performance in both types of scenario and showed its potential to handle bootstrapping conditions, different patient body positions, and the visual interference commonly found in therapy sessions.
Citations: 1
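For orientation, here is a heavily simplified per-pixel self-organizing background model for depth frames: each pixel keeps a few depth codewords, the best match decides background versus foreground, and the winner (plus a crude runner-up "neighborhood") is pulled toward new observations. The class name, the number of codewords, the learning rate, and the depth threshold are assumptions; the paper's cascade of two retinotopic maps is not reproduced.

```python
import numpy as np

class DepthSOMBackground:
    """Simplified per-pixel self-organizing background model for depth video."""

    def __init__(self, first_frame, K=3, lr=0.05, eps=80.0):
        # One codebook of K depth codewords per pixel, seeded from frame 0.
        self.codes = np.repeat(first_frame[..., None], K, axis=2).astype(float)
        self.lr, self.eps = lr, eps

    def apply(self, frame):
        diff = np.abs(self.codes - frame[..., None])          # (h, w, K)
        order = np.argsort(diff, axis=2)
        best = np.take_along_axis(diff, order[..., :1], axis=2)[..., 0]
        fg = best > self.eps                                  # foreground mask
        # Update winner, and runner-up at half rate, only where background.
        for rank, rate in ((0, self.lr), (1, 0.5 * self.lr)):
            idx = order[..., rank:rank + 1]
            cw = np.take_along_axis(self.codes, idx, axis=2)[..., 0]
            cw = np.where(fg, cw, cw + rate * (frame - cw))
            np.put_along_axis(self.codes, idx, cw[..., None], axis=2)
        return fg
```

Usage would be `model = DepthSOMBackground(depth_frames[0])` followed by `mask = model.apply(frame)` for each subsequent depth frame.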
Multiple adaptive kernel size KLMS for Beijing PM2.5 prediction
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727362
Zheng Cao, Shujian Yu, Guibiao Xu, Badong Chen, J. Príncipe
Abstract: The kernel least mean square (KLMS) algorithm is an efficient non-linear adaptive filter that operates in a reproducing kernel Hilbert space (RKHS). In realistic applications of system identification or time series prediction, there are usually multiple inputs that demand multiple kernels or kernel parameters. This paper proposes a tensor product kernel for KLMS that accommodates multiple inputs. Furthermore, instead of setting kernel parameters arbitrarily, appropriate kernel sizes can be chosen by a gradient-descent-based adaptive algorithm that minimizes the squared instantaneous error, which helps KLMS better capture the underlying system mechanism. The effectiveness of the proposed algorithm is shown by experiments on both a simulated dataset and an important real-world problem: Beijing PM2.5 prediction.
Citations: 10
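A minimal sketch of the two ingredients the abstract names follows: KLMS with a product-of-Gaussians (tensor product) kernel over the input dimensions, and per-dimension kernel sizes adapted by gradient descent on the squared instantaneous error. The class name, step sizes, and the exact update form are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

class MultiKernelSizeKLMS:
    """KLMS with a tensor product Gaussian kernel and adaptive kernel sizes."""

    def __init__(self, dim, eta=0.2, mu=0.01, sigma0=1.0):
        self.eta, self.mu = eta, mu
        self.sigma = np.full(dim, sigma0, dtype=float)   # one size per input
        self.centers, self.alphas = [], []

    def _kernel(self, X, x):
        # Tensor product of per-dimension Gaussian kernels.
        return np.exp(-((X - x) ** 2 / (2.0 * self.sigma ** 2)).sum(axis=1))

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(self._kernel(np.array(self.centers), x) @ np.array(self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)
        if self.centers:
            X, a = np.array(self.centers), np.array(self.alphas)
            k = self._kernel(X, x)
            # d f / d sigma_j = sum_i a_i k_i (x_ij - x_j)^2 / sigma_j^3
            grad_f = ((a * k)[:, None] * (X - x) ** 2).sum(axis=0) / self.sigma ** 3
            # Gradient descent on e^2: sigma <- sigma + 2 * mu * e * df/dsigma
            self.sigma = np.clip(self.sigma + 2.0 * self.mu * e * grad_f, 1e-2, None)
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.eta * e)
        return e
```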
Hidden Markov model inference in an associative memory architecture
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727458
J. Poikonen, M. Laiho, E. Lehtonen, T. Knuutila
Abstract: We describe how an analog associative memory architecture can be used to infer the time-variant states of a hidden Markov model (HMM), demonstrating the computation of forward probabilities, backward probabilities, smoothing, and the Viterbi algorithm. This type of computing is well suited to implementation in array computing architectures, and the necessary elementary computations have been demonstrated in previous work. We consider the effect of limited accuracy in the storage and computing elements on inference reliability, and demonstrate that although the computation is iterative, reliable operation is not significantly affected by error accumulation.
Citations: 0
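For reference, these are the software versions of two of the recursions the paper maps onto the analog array: the forward algorithm and the Viterbi (max-product) recursion. The tiny two-state model at the bottom is a made-up example; the array implementation and its accuracy limitations are not modeled here.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward probabilities alpha[t, i] = P(o_1..o_t, q_t = i)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def viterbi(pi, A, B, obs):
    """Most likely hidden state sequence (max-product recursion)."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A          # (from-state, to-state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Tiny example: 2 hidden states, 2 observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 0, 1, 1, 0]
print(forward(pi, A, B, obs)[-1].sum(), viterbi(pi, A, B, obs))
```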
Distributed variance regularized Multitask Learning
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727594
Michele Donini, David Martínez-Rego, Martin Goodson, J. Shawe-Taylor, M. Pontil
Abstract: Past research on Multitask Learning (MTL) has focused mainly on devising adequate regularizers and less on their scalability. In this paper, we present a method to scale up MTL methods that penalize the variance of the task weight vectors. The method builds on the alternating direction method of multipliers (ADMM) to decouple the variance regularizer. It can be efficiently implemented by a distributed algorithm in which the tasks are first solved independently and subsequently corrected to pool information from the other tasks. We show that the method works well in practice and converges in a few distributed iterations. Furthermore, we empirically observe that the number of iterations is nearly independent of the number of tasks, yielding a computational gain of O(T) over standard solvers. We also present experiments on a large URL classification dataset, which is challenging both in volume of data points and in dimensionality. Our results confirm that MTL can obtain superior performance over either learning a common model or independent task learning.
Citations: 5
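To make the "solve independently, then correct" structure concrete, here is a simplified block-coordinate sketch for ridge-regression tasks with the variance regularizer sum_t ||w_t - w_bar||^2: each task has a closed-form solve that could run on its own worker, followed by a cheap pooling step. The function name and the block-coordinate scheme are illustrative assumptions; the paper derives the decoupling via ADMM.

```python
import numpy as np

def variance_regularized_mtl(Xs, ys, lam=1.0, n_iters=20):
    """Alternating scheme for multitask ridge regression with a variance penalty.

    Xs, ys : lists of per-task design matrices and targets (shared dimension d).
    Each round, every task solves min ||X_t w - y_t||^2 + lam ||w - w_bar||^2
    independently (embarrassingly parallel), then w_bar is re-pooled.
    """
    d = Xs[0].shape[1]
    w_bar = np.zeros(d)
    Ws = [np.zeros(d) for _ in Xs]
    for _ in range(n_iters):
        for t, (X, y) in enumerate(zip(Xs, ys)):
            # Closed-form per-task solve: (X^T X + lam I) w = X^T y + lam w_bar.
            A = X.T @ X + lam * np.eye(d)
            b = X.T @ y + lam * w_bar
            Ws[t] = np.linalg.solve(A, b)
        w_bar = np.mean(Ws, axis=0)     # correction step pooling the tasks
    return Ws, w_bar
```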
A multiagent reinforcement learning approach to en-route trip building
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727899
A. Bazzan, Ricardo Grunitzki
Abstract: An important stage in traffic planning is traffic assignment, which seeks to reproduce the way drivers select their routes. It assumes that each driver is aware of a number of routes from an origin to a destination, performs some experimentation, and rationally selects the route with the highest utility. This is the basis for many approaches that iteratively vary the combination of route choices in order to find one that maximizes utility. This perspective is therefore a centralized, aggregate one. In reality, though, drivers may perform en-route experimentation, i.e., they deviate from the originally planned route. Thus, in this paper, individual drivers are considered as active and autonomous agents that, instead of having a central entity assign complete trips, build these trips by experimentation during the actual trip. Agents learn their routes by deciding, at each node, how to continue toward their destination so as to minimize their travel times. Because the choice of one agent impacts several others, this is a non-cooperative (and thus stochastic) multiagent learning problem, which is known to be much more challenging than single-agent reinforcement learning. To illustrate this approach, results from two non-trivial networks are presented, with thousands of learning agents, clearly constituting a hard learning problem. Results are compared to iterative, centralized methods. It is concluded that an agent-based perspective yields choices that are more aligned with the real-world situation because (i) trips are computed by the agent itself (and not provided to the agent by any central entity), and (ii) they are not based on pre-computed paths but are built during the trip itself.
Citations: 24
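A toy version of the setup follows: agents learn, at each node, which outgoing link to take via independent Q-learning with reward equal to the negative congested travel time. The four-node network, the BPR-like congestion function, and all parameters are made up for illustration and are much smaller and simpler than the networks and reward design in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy road network: node -> list of (next_node, free-flow time).
links = {0: [(1, 2.0), (2, 2.0)], 1: [(3, 3.0), (2, 1.0)],
         2: [(3, 3.0), (1, 1.0)], 3: []}
DEST, N_AGENTS, EPISODES = 3, 200, 300
alpha, gamma, eps = 0.1, 0.95, 0.1

Q = {n: np.zeros(len(out)) for n, out in links.items() if out}

for ep in range(EPISODES):
    loads = {(n, i): 0 for n, out in links.items() for i in range(len(out))}
    routes = []
    for _ in range(N_AGENTS):                       # each agent builds its trip
        node, route = 0, []
        while node != DEST and len(route) < 10:
            i = (int(rng.integers(len(links[node]))) if rng.random() < eps
                 else int(Q[node].argmax()))        # en-route decision
            route.append((node, i))
            loads[(node, i)] += 1
            node = links[node][i][0]
        routes.append(route)
    # Congested travel times for this episode (BPR-like), then Q updates.
    times = {(n, i): links[n][i][1] * (1 + 0.15 * (loads[(n, i)] / 50.0) ** 2)
             for (n, i) in loads}
    for route in routes:
        for node, i in route:
            nxt = links[node][i][0]
            future = Q[nxt].max() if links[nxt] else 0.0
            Q[node][i] += alpha * (-times[(node, i)] + gamma * future - Q[node][i])

print({n: np.round(q, 2) for n, q in Q.items()})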
Feature learning with deep Convolutional Neural Networks for screening patients with paroxysmal atrial fibrillation
2016 International Joint Conference on Neural Networks (IJCNN) | Pub Date: 2016-07-24 | DOI: 10.1109/IJCNN.2016.7727866
B. Pourbabaee, M. J. Roshtkhari, K. Khorasani
Abstract: In this paper, a novel electrocardiogram (ECG) signal classification and patient screening method is developed. The focus is on identifying patients with paroxysmal atrial fibrillation (PAF), a life-threatening cardiac arrhythmia. The proposed approach uses the raw ECG signal as input and automatically learns representative features for PAF to be used by a classification mechanism. The features are learned directly from the time-domain ECG signals using a Convolutional Neural Network (CNN) with one fully connected layer. The learned features can replace hand-crafted features, and our experimental results indicate their effectiveness in patient screening. The results also indicate that combining the learned features with other classifiers improves the performance of the patient screening system compared to an end-to-end convolutional neural network classifier. The major benefits of the proposed approach are that it simplifies feature extraction for different cardiac arrhythmias and removes the need for a human expert to specify appropriate features. The effectiveness of the proposed ECG classification method is demonstrated through extensive simulation studies.
Citations: 28
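The pipeline described in the abstract (a 1-D CNN with one fully connected layer learning features from raw ECG, with the learned features then reused by a separate classifier) can be sketched as below. The layer sizes, window length, training loop, SVM back end, and the random placeholder data are all assumptions for illustration; the paper's architecture and protocol are not reproduced.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class ECGFeatureNet(nn.Module):
    """Small 1-D CNN over raw ECG windows with one fully connected layer."""

    def __init__(self, n_classes=2, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.fc = nn.Linear(32 * 8, feat_dim)     # learned feature layer
        self.out = nn.Linear(feat_dim, n_classes)

    def features(self, x):                        # x: (batch, 1, samples)
        return torch.relu(self.fc(self.conv(x).flatten(1)))

    def forward(self, x):
        return self.out(self.features(x))

# Placeholder random data standing in for labelled raw ECG windows.
X = torch.randn(128, 1, 3000)
y = torch.randint(0, 2, (128,))

net = ECGFeatureNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                                # brief end-to-end training
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

# Reuse the learned features with a separate classifier, mirroring the
# comparison against the end-to-end CNN described in the abstract.
with torch.no_grad():
    feats = net.features(X).numpy()
svm = SVC(kernel="rbf").fit(feats, y.numpy())
print("train accuracy:", svm.score(feats, y.numpy()))
```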