2016 International Joint Conference on Neural Networks (IJCNN): Latest Publications

Simulating robotic cars using time-delay neural networks
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727342
A. D. Souza, Jacson Rodrigues Correia-Silva, Filipe Wall Mutz, C. Badue, Thiago Oliveira-Santos
{"title":"Simulating robotic cars using time-delay neural networks","authors":"A. D. Souza, Jacson Rodrigues Correia-Silva, Filipe Wall Mutz, C. Badue, Thiago Oliveira-Santos","doi":"10.1109/IJCNN.2016.7727342","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727342","url":null,"abstract":"In this paper, we propose a simulator for robotic cars based on two time-delay neural networks. These networks are intended to simulate the mechanisms that govern how a set of effort commands changes the car's velocity and the direction it is moving. The first neural network receives as input a temporal sequence of current and previous throttle and brake efforts, along with a temporal sequence of the previous car's velocities (estimated by the network), and outputs the velocity that the real car would reach in the next time interval given these inputs. The second neural network estimates the arctangent of curvature (a variable related to the steering wheel angle) that a real car would reach in the next time interval given a temporal sequence of current and previous steering efforts and previous arctangents of curvatures of the car estimated by the network. We evaluated the performance of our simulator using real-world datasets acquired using an autonomous robotic car. Experimental results showed that our simulator was able to simulate in real time how a set of efforts influences the car's velocity and arctangent of curvature. While navigating in a map of a real-world environment, our car simulator was able to emulate the velocity and arctangent of curvature of the real car with mean squared error of 2.2×10-3 (m/s)2 and 4.0×10-5 rad2, respectively.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"60 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128636299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
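A minimal sketch of the velocity half of the simulator described in the entry above: a small time-delay network that maps a window of current and past throttle/brake efforts plus its own previous velocity estimates to the next velocity, run in closed loop. The window lengths, hidden size, and the `predict_velocity` helper are illustrative assumptions, not the authors' architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's values):
# 10 past throttle efforts, 10 past brake efforts, 5 past velocity estimates.
N_THROTTLE, N_BRAKE, N_VEL, N_HIDDEN = 10, 10, 5, 32

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(0, 0.1, (N_HIDDEN, N_THROTTLE + N_BRAKE + N_VEL))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, N_HIDDEN)
b2 = 0.0

def predict_velocity(throttle_hist, brake_hist, vel_hist):
    """One step of the time-delay network: map a window of current/past
    efforts and past (self-estimated) velocities to the next velocity."""
    x = np.concatenate([throttle_hist, brake_hist, vel_hist])
    h = np.tanh(W1 @ x + b1)          # single hidden layer with tanh units
    return float(W2 @ h + b2)

# Closed-loop simulation: the predicted velocity is fed back into the
# network's own velocity history, as the abstract describes.
throttle = np.full(N_THROTTLE, 0.3)
brake = np.zeros(N_BRAKE)
vel_hist = np.zeros(N_VEL)
for _ in range(20):
    v_next = predict_velocity(throttle, brake, vel_hist)
    vel_hist = np.roll(vel_hist, -1)
    vel_hist[-1] = v_next
print("simulated velocity after 20 steps:", v_next)
```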
Aggregating pixel-level prediction and cluster-level texton occurrence within superpixel voting for roadside vegetation classification
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727614
Ligang Zhang, B. Verma, David R. B. Stockwell, Sujan Chowdhury
{"title":"Aggregating pixel-level prediction and cluster-level texton occurrence within superpixel voting for roadside vegetation classification","authors":"Ligang Zhang, B. Verma, David R. B. Stockwell, Sujan Chowdhury","doi":"10.1109/IJCNN.2016.7727614","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727614","url":null,"abstract":"Roadside vegetation classification has recently attracted increasing attention, due to its significance in applications such as vegetation growth management and fire hazard identification. Existing studies primarily focus on learning visible feature based classifiers or invisible feature based thresholds, which often suffer from a generalization problem to new data. This paper proposes an approach that aggregates pixel-level supervised classification and cluster-level texton occurrence within a voting strategy over superpixels for vegetation classification, which takes into account both generic features in the training data and local characteristics in the testing data. Class-specific artificial neural networks are trained to predict class probabilities for all pixels, while a texton based adaptive K-means clustering process is introduced to group pixels into clusters and obtain texton occurrence. The pixel-level class probabilities and cluster-level texton occurrence are further integrated in superpixel-level voting to assign each superpixel to a class category. The proposed approach outperforms previous approaches on a roadside image dataset collected by the Department of Transport and Main Roads, Queensland, Australia, and achieves state-of-the-art performance using low-resolution images from the Croatia roadside grass dataset.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127439327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
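A toy sketch of the superpixel-level voting described above: per-pixel class probabilities (standing in for the class-specific neural networks) and per-pixel texton-based scores are accumulated over each superpixel, and the argmax decides the superpixel's label. The mixing weight `ALPHA`, the Dirichlet stand-in scores, and the array names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PIXELS, N_CLASSES, N_SUPERPIXELS = 1000, 4, 20

# Stand-ins for the two information sources the abstract combines:
# per-pixel class probabilities from class-specific neural networks, and a
# per-pixel class score derived from cluster-level texton occurrence.
pixel_probs = rng.dirichlet(np.ones(N_CLASSES), size=N_PIXELS)
texton_scores = rng.dirichlet(np.ones(N_CLASSES), size=N_PIXELS)
superpixel_of = rng.integers(0, N_SUPERPIXELS, size=N_PIXELS)

ALPHA = 0.5  # assumed mixing weight between the two sources

labels = np.zeros(N_SUPERPIXELS, dtype=int)
for s in range(N_SUPERPIXELS):
    members = superpixel_of == s
    # Superpixel-level vote: accumulate both evidence sources over member pixels.
    votes = ALPHA * pixel_probs[members].sum(0) + (1 - ALPHA) * texton_scores[members].sum(0)
    labels[s] = int(np.argmax(votes))
print(labels)
```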
Filterbank learning for deep neural network based polyphonic sound event detection
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727634
Emre Çakir, Ezgi C. Ozan, T. Virtanen
{"title":"Filterbank learning for deep neural network based polyphonic sound event detection","authors":"Emre Çakir, Ezgi C. Ozan, T. Virtanen","doi":"10.1109/IJCNN.2016.7727634","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727634","url":null,"abstract":"Deep learning techniques such as deep feedforward neural networks and deep convolutional neural networks have recently been shown to improve the performance in sound event detection compared to traditional methods such as Gaussian mixture models. One of the key factors of this improvement is the capability of deep architectures to automatically learn higher levels of acoustic features in each layer. In this work, we aim to combine the feature learning capabilities of deep architectures with the empirical knowledge of human perception. We use the first layer of a deep neural network to learn a mapping from a high-resolution magnitude spectrum to smaller amount of frequency bands, which effectively learns a filterbank for the sound event detection task. We initialize the first hidden layer weights to match with the perceptually motivated mel filterbank magnitude response. We also integrate this initialization scheme with context windowing by using an appropriately constrained deep convolutional neural network. The proposed method does not only result with better detection accuracy, but also provides insight on the frequencies deemed essential for better discrimination of given sound events.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129991319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
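A sketch of the initialization idea above, assuming a plain NumPy setting: the first-layer weight matrix starts as a (simplified) triangular mel filterbank magnitude response, so applying it to a magnitude-spectrum frame yields band energies; in the paper this layer would then be fine-tuned with the rest of the network. The `mel_filterbank` helper and its parameters are illustrative, not the authors' exact filterbank.

```python
import numpy as np

def mel_filterbank(n_mels=40, n_fft=1024, sr=44100):
    """Triangular mel filterbank matrix (simplified, for illustration only)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bin_pts = np.floor((n_fft // 2 + 1) * mel_to_hz(mel_pts) / (sr / 2)).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bin_pts[m - 1], bin_pts[m], bin_pts[m + 1]
        for k in range(left, center):          # rising slope of the triangle
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope of the triangle
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return fb

# First-layer weights start as the mel magnitude response; during training
# they would be fine-tuned together with the rest of the network.
W_first_layer = mel_filterbank()
spectrum = np.abs(np.random.randn(513))        # stand-in magnitude spectrum frame
band_energies = W_first_layer @ spectrum       # learned-filterbank output
print(band_energies.shape)                     # (40,)
```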
Echo-state conditional variational autoencoder for anomaly detection
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727309
Suwon Suh, Daniel H. Chae, Hyon-Goo Kang, Seungjin Choi
{"title":"Echo-state conditional variational autoencoder for anomaly detection","authors":"Suwon Suh, Daniel H. Chae, Hyon-Goo Kang, Seungjin Choi","doi":"10.1109/IJCNN.2016.7727309","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727309","url":null,"abstract":"Anomaly detection involves identifying the events which do not conform to an expected pattern in data. A common approach to anomaly detection is to identify outliers in a latent space learned from data. For instance, PCA has been successfully used for anomaly detection. Variational autoencoder (VAE) is a recently-developed deep generative model which has established itself as a powerful method for learning representation from data in a nonlinear way. However, the VAE does not take the temporal dependence in data into account, so it limits its applicability to time series. In this paper we combine the echo-state network, which is a simple training method for recurrent networks, with the VAE, in order to learn representation from multivariate time series data. We present an echo-state conditional variational autoencoder (ES-CVAE) and demonstrate its useful behavior in the task of anomaly detection in multivariate time series data.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130058542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
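A sketch of the echo-state ingredient of the ES-CVAE above: a fixed random reservoir whose states summarize the temporal context of a multivariate series; in the paper these states would condition the variational autoencoder. Reservoir size, leak rate, and spectral-radius scaling are assumed values, and the VAE itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
N_IN, N_RES = 3, 100          # input dimension and reservoir size (assumed)

# Echo-state networks keep the recurrent weights fixed and random; only the
# readout (here: the VAE that would sit on top) is trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.normal(0, 1, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius < 1

def reservoir_states(series, leak=0.3):
    """Run a multivariate time series through the fixed reservoir and return
    the state sequence, which summarizes temporal context at each step."""
    x = np.zeros(N_RES)
    states = []
    for u in series:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

series = rng.normal(size=(200, N_IN))     # stand-in multivariate time series
H = reservoir_states(series)              # (200, 100) context features
print(H.shape)
```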
Tackling the ordinal and imbalance nature of a melanoma image classification problem
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727466
M. Pérez-Ortiz, A. Sáez, J. Sánchez-Monedero, Pedro Antonio Gutiérrez, C. Hervás‐Martínez
{"title":"Tackling the ordinal and imbalance nature of a melanoma image classification problem","authors":"M. Pérez-Ortiz, A. Sáez, J. Sánchez-Monedero, Pedro Antonio Gutiérrez, C. Hervás‐Martínez","doi":"10.1109/IJCNN.2016.7727466","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727466","url":null,"abstract":"Melanoma is a type of cancer that usually occurs on the skin. Early detection is crucial for ensuring five-year survival (which varies between 15% and 99% depending on the melanoma stage). Melanoma severity is typically diagnosed by invasive methods (e.g. a biopsy). In this paper, we propose an alternative system combining image analysis and machine learning for detecting melanoma presence and severity. The 86 features selected consider the shape, colour, pigment network and texture of the melanoma. As opposed to previous studies that have focused on distinguishing melanoma and non-melanoma images, our work considers a finer-grain classification problem using five categories: benign lesions and 4 different stages of melanoma. The dataset presents two main characteristics that are approached by specific machine learning methods: 1) the classes representing melanoma severity follow a natural order, and 2) the dataset is imbalanced, where benign lesions clearly outnumber melanoma ones. Different nominal and ordinal classifiers are considered, one of them being based on an ordinal cascade decomposition method. The cascade method is shown to obtain good performance for all classes, while respecting and exploiting the order information. Moreover, we explore the alternative of applying a class balancing technique, presenting good synergy with the ordinal and nominal methods.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128949212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
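A sketch of one common ordinal decomposition consistent with the abstract above (not necessarily the exact cascade the authors use): one binary classifier per threshold "severity greater than k", with the predicted class given by the number of thresholds a sample is estimated to exceed. The feature matrix, labels, and logistic-regression base learner are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
N, D, N_CLASSES = 300, 86, 5             # 86 features, 5 ordered categories
X = rng.normal(size=(N, D))
y = rng.integers(0, N_CLASSES, size=N)   # stand-in ordinal labels 0..4

# One binary problem per ordinal threshold: "is the severity greater than k?"
models = []
for k in range(N_CLASSES - 1):
    clf = LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int))
    models.append(clf)

def predict_ordinal(x):
    # P(y > k) for each threshold; the predicted class is the number of
    # thresholds the sample is estimated to exceed.
    p_greater = np.array([m.predict_proba(x.reshape(1, -1))[0, 1] for m in models])
    return int((p_greater > 0.5).sum())

print(predict_ordinal(X[0]), "true:", y[0])
```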
Towards emotion-based reputation guessing learning agents
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727690
Jones Granatyr, J. P. Barddal, Adriano Weihmayer Almeida, F. Enembreck, Adaiane Pereira dos Santos Granatyr
{"title":"Towards emotion-based reputation guessing learning agents","authors":"Jones Granatyr, J. P. Barddal, Adriano Weihmayer Almeida, F. Enembreck, Adaiane Pereira dos Santos Granatyr","doi":"10.1109/IJCNN.2016.7727690","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727690","url":null,"abstract":"Trust and reputation mechanisms are part of the logical protection of intelligent agents, preventing malicious agents from acting egotistically or with the intention to damage others. Several studies in Psychology, Neurology and Anthropology claim that emotions are part of human's decision making process. However, there is a lack of understanding about how affective aspects, such as emotions, influence trust or reputation levels of intelligent agents when they are inserted into an information exchange environment, e.g. an evaluation system. In this paper we propose a reputation model that accounts for emotional bounds given by Ekman's basic emotions and inductive machine learning. Our proposal is evaluated by extracting emotions from texts provided by two online human-fed evaluation systems. Empirical results show significant agent's utility improvements with p <; .05 when compared to non-emotion-wise proposals, thus, showing the need for future research in this area.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122356342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Pain recognition and intensity classification using facial expressions
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727659
W. A. Shier, S. Yanushkevich
{"title":"Pain recognition and intensity classification using facial expressions","authors":"W. A. Shier, S. Yanushkevich","doi":"10.1109/IJCNN.2016.7727659","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727659","url":null,"abstract":"Facial biometrics, specifically facial expression analysis, is one of the most actively investigated topics towards the creation of an automated system capable of detecting and classifying pain in human subjects. This paper presents a comparative analysis of Gabor energy filter based approaches combined with powerful classifiers, such as Support Vector Machines, for pain detection and classification into three levels. The intensity of pain is labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, the levels of intensity have been quantized into three disjoint groups: no pain, weak pain and strong pain. The results of experiments show that Gabor energy filters provide comparable or better results compared to previous filter-based pain recognition methods, with a 74% classification rate of pain versus no pain, and 74%, 30% and 78% precision rates when distinguishing pain into no pain, weak pain and strong pain respectively.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131981048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
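A rough sketch of the pipeline outlined in the abstract above: Gabor energy responses (magnitude of the real/imaginary filter pair) pooled into a feature vector and fed to an SVM for the three pain levels. The filter frequencies and orientations, the random stand-in images, and the pooling are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def gabor_energy_features(image, frequencies=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
    """Mean Gabor energy per (frequency, orientation) pair; a crude stand-in
    for the paper's Gabor energy filter features."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(feats)

# Stand-in data: random "face" patches with labels 0=no pain, 1=weak, 2=strong.
images = rng.random((30, 32, 32))
labels = rng.integers(0, 3, size=30)
X = np.array([gabor_energy_features(img) for img in images])

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]), labels[:5])
```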
Training deep neural networks on imbalanced data sets
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727770
Shoujin Wang, Wei Liu, Jia Wu, Longbing Cao, Qinxue Meng, Paul J. Kennedy
{"title":"Training deep neural networks on imbalanced data sets","authors":"Shoujin Wang, Wei Liu, Jia Wu, Longbing Cao, Qinxue Meng, Paul J. Kennedy","doi":"10.1109/IJCNN.2016.7727770","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727770","url":null,"abstract":"Deep learning has become increasingly popular in both academic and industrial areas in the past years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in real world and they have been providing great challenges for classification tasks. In this paper, we focus on the problem of classification using deep network on imbalanced data sets. Specifically, a novel loss function called mean false error together with its improved version mean squared false error are proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both majority class and minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach compared with conventional methods in classifying imbalanced data sets on deep neural networks.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130820290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 330
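A sketch of the per-class-averaged loss idea described above, under the assumption that the per-sample error is a squared error: the error is averaged separately over the majority and minority samples and the two means are combined (or their squares, for the "mean squared false error" variant), so the minority class is not drowned out. The exact formulation in the paper may differ in detail.

```python
import numpy as np

def mean_false_error(y_true, y_pred, squared=False):
    """Per-class-averaged error: compute the mean error separately over the
    negative (label 0) and positive (label 1) samples, then combine, so both
    classes contribute equally regardless of how imbalanced the batch is.
    squared=True combines the squared per-class means (the MSFE variant)."""
    err = (y_true - y_pred) ** 2
    fpe = err[y_true == 0].mean()     # mean error over negative samples
    fne = err[y_true == 1].mean()     # mean error over positive samples
    return fpe ** 2 + fne ** 2 if squared else fpe + fne

# 95/5 imbalanced toy batch: a predictor that ignores the minority class
# looks good under a plain mean error but poor under the per-class loss.
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100)                 # always predicts the majority class
print("plain mean error:", ((y_true - y_pred) ** 2).mean())   # 0.05
print("mean false error:", mean_false_error(y_true, y_pred))  # 1.0
```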
A study of an incremental spectral meta-learner for nonstationary environments
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727178
G. Ditzler
{"title":"A study of an incremental spectral meta-learner for nonstationary environments","authors":"G. Ditzler","doi":"10.1109/IJCNN.2016.7727178","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727178","url":null,"abstract":"Incrementally learning from large volumes of streaming data over time is a problem that is of crucial importance to the computational intelligence community, especially in scenarios where it is impractical or simply unfeasible to store all historical data. Learning becomes a particularly challenging problem when the probabilistic properties of the data are changing with time (i.e., gradual, abrupt, etc.), and there is scarce availability of class labels. Many existing strategies for learning in nonstationary environments use the most recent batch of training data to tune their parameters (e.g., calculate classifier voting weights), and never reassess these parameters when the unlabeled test data arrive. Making a limited drift assumption is generally one way to justify not needing to re-evaluate the parameters of a classifiers; however, labeled data that have already been learned if presented to the classifier for testing could be forgotten because the data was not observed for a long time. This is one form of abrupt concept drift with unlabeled data. In this work, an incremental spectral learning meta-classifier is presented for learning in nonstationary environments such that: (i) new classifiers can be added into an ensemble when labeled data are available, (ii) the ensemble voting weights are determined from the unlabeled test data to boost recollection of previously learned distributions of data, and (iii) the limited drift assumption is removed from the test-then-train evaluation paradigm. We benchmark our proposed approach on several widely used concept drift data sets.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130904435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
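The abstract above does not spell out how the voting weights are obtained from the unlabeled data; one well-known spectral construction in this spirit (the spectral meta-learner of Parisi et al.) reads classifier reliabilities off the leading eigenvector of the prediction covariance matrix. The sketch below follows that assumption and is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)

def spectral_voting_weights(predictions):
    """predictions: (n_classifiers, n_unlabeled) matrix of +/-1 votes.
    Under conditional independence, the off-diagonal part of the prediction
    covariance is approximately rank one and its leading eigenvector reflects
    how far each classifier is above chance, so it can serve as voting
    weights without needing any labels."""
    Q = np.cov(predictions)
    np.fill_diagonal(Q, 0.0)                      # diagonal is dominated by noise
    eigvals, eigvecs = np.linalg.eigh(Q)
    v = eigvecs[:, -1]                            # leading eigenvector
    v = v if v.sum() >= 0 else -v                 # fix the sign ambiguity
    return np.clip(v, 0, None)

# Toy ensemble: classifiers of varying accuracy voting on 500 unlabeled points.
truth = rng.choice([-1, 1], size=500)
accs = np.array([0.9, 0.75, 0.6, 0.55])
preds = np.array([np.where(rng.random(500) < a, truth, -truth) for a in accs])

w = spectral_voting_weights(preds)
print("recovered weights (should roughly track accuracy):", np.round(w, 2))
```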
Extended Kalman filter under maximum correntropy criterion
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2016-07-24 DOI: 10.1109/IJCNN.2016.7727408
Xi Liu, Hua Qu, Ji-hong Zhao, Badong Chen
{"title":"Extended Kalman filter under maximum correntropy criterion","authors":"Xi Liu, Hua Qu, Ji-hong Zhao, Badong Chen","doi":"10.1109/IJCNN.2016.7727408","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727408","url":null,"abstract":"As a nonlinear extension of Kalman filter, the extended Kalman filter (EKF) is also based on the minimum mean square error (MMSE) criterion. In general, the EKF performs well in Gaussian noises. But its performance may deteriorate substantially when the system is disturbed by heavy-tailed impulsive noises. In order to improve the robustness of EKF against impulsive noises, a new filter for nonlinear systems is proposed in this paper, namely the maximum correntropy extended Kalman filter (MCEKF), which adopts the maximum correntropy criterion (MCC) as the optimization criterion instead of using the MMSE. In MCEKF, the state mean and covariance matrix propagation equation are used to obtain a prior estimation of the state and covariance matrix, and then a fixed-point algorithm is used to update the posterior estimates. The robustness of the new filter is confirmed by simulation results.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130458191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 56
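A toy sketch of why the maximum correntropy criterion above is robust to impulsive noise: residuals are weighted by a Gaussian kernel, so heavy-tailed outliers are exponentially down-weighted, and a simple fixed-point iteration re-estimates a scalar location. This illustrates the criterion only; it is not the MCEKF's state/covariance recursion, and the kernel width `sigma` is an assumed value.

```python
import numpy as np

def correntropy_weight(residual, sigma=1.0):
    """Gaussian-kernel weight used by the maximum correntropy criterion:
    small residuals get weight close to 1, impulsive outliers are suppressed."""
    return np.exp(-residual ** 2 / (2.0 * sigma ** 2))

# Compare MMSE-style and MCC-style scalar location estimates on data with
# an impulsive outlier (an illustration of why MCC is robust, not the MCEKF).
samples = np.array([1.0, 1.1, 0.9, 1.05, 25.0])   # last sample is an impulse
mmse_estimate = samples.mean()

estimate = samples.mean()
for _ in range(20):                               # simple fixed-point iteration
    w = correntropy_weight(samples - estimate)
    estimate = np.sum(w * samples) / np.sum(w)

print("MMSE estimate:", round(mmse_estimate, 3))  # pulled toward the outlier
print("MCC estimate:", round(estimate, 3))        # stays near 1.0
```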