2016 International Joint Conference on Neural Networks (IJCNN): Latest Publications

FEDD: Feature Extraction for Explicit Concept Drift Detection in time series
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-29 DOI: 10.1109/IJCNN.2016.7727274
R. C. Cavalcante, Leandro L. Minku, Adriano Oliveira
Abstract: A time series is a sequence of observations collected over fixed sampling intervals. Many real-world dynamic processes can be modeled as time series, such as stock price movements, exchange rates, and temperatures. As a special kind of data stream, a time series may present concept drift, which negatively affects time series analysis and forecasting. Explicit drift detection methods based on monitoring time series features may provide a better understanding of how concepts evolve over time than methods based on monitoring the forecasting error of a base predictor. In this paper, we propose an online explicit drift detection method, called Feature Extraction for Explicit Concept Drift Detection (FEDD), that identifies concept drifts in time series by monitoring time series features. Computational experiments showed that FEDD performed better than error-based approaches on several linear and nonlinear artificial time series with abrupt and gradual concept drifts.
Citations: 37
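The feature-monitoring idea can be sketched in a few lines. This is only an illustration, not FEDD itself: the paper uses a richer feature set and a statistical test on the feature-distance stream, whereas the window size, the fixed cosine-distance threshold, and the three-feature vector (mean, standard deviation, lag-1 autocorrelation) below are all assumed choices.

```python
import math
import random
from collections import deque

def features(window):
    """Describe a window by mean, standard deviation and lag-1 autocorrelation."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    acf1 = 0.0
    if var > 0:
        acf1 = sum((window[i] - mean) * (window[i + 1] - mean)
                   for i in range(n - 1)) / (n * var)
    return [mean, math.sqrt(var), acf1]

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm if norm else 0.0

def detect_drift(series, window_size=50, threshold=0.1):
    """Flag drift when the feature vector of the sliding window moves too far
    (in cosine distance) from the features of the initial concept."""
    reference = features(series[:window_size])
    window = deque(series[:window_size], maxlen=window_size)
    for i in range(window_size, len(series)):
        window.append(series[i])
        if cosine_distance(reference, features(list(window))) > threshold:
            return i
    return None

random.seed(0)
# Stationary noise, then an abrupt mean shift at t = 200.
stream = ([random.gauss(0, 1) for _ in range(200)]
          + [random.gauss(5, 1) for _ in range(100)])
print(detect_drift(stream))  # an index shortly after 200
```

An error-based detector would need a trained predictor before it could react; monitoring the features directly, as above, needs no base model at all.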
A dynamic self-structuring neural network model to combat phishing
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-28 DOI: 10.1109/IJCNN.2016.7727750
F. Thabtah, R. Mohammad, L. Mccluskey
Abstract: Creating a neural-network-based classification model is commonly accomplished by trial and error. However, this technique has several drawbacks, notably the time it wastes and its reliance on the availability of experts. In this article, an algorithm that simplifies the structuring of neural network classification models is proposed. The algorithm aims at creating a structure large enough to learn models from the training dataset that generalise to the testing dataset. It dynamically tunes the structure parameters during the training phase, aiming to derive accurate, non-overfitting classifiers. The proposed algorithm has been applied to the phishing website classification problem and shows competitive results with respect to various evaluation measures such as harmonic mean (F1-score), precision, and classification accuracy.
Citations: 24
Complex social network partition for balanced subnetworks
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727744
H. Zhang, Jiming Liu, Chunyu Feng, C. Pang, Tongliang Li, Jing He
Abstract: Complex social network analysis methods have been applied extensively in various domains, including online social media and biological complex networks. Complex social networks face the challenge of information overload, and demand for efficient complex network analysis methods has risen in recent years, particularly with the extensive use of online social applications such as Flickr, Facebook, and LinkedIn. This paper aims to reduce network complexity by partitioning a large complex network into a set of less complex networks. Existing social network analysis methods are mainly based on complex network theory and data mining techniques, and they face challenges when dealing with extremely large social network datasets. In particular, the difficulty of maintaining the statistical characteristics of partitioned sub-networks increases dramatically. The proposed Normal Distribution (ND) based method can balance the distribution of the partitioned sub-networks according to the original complex network, so that each sub-network has a degree distribution similar to that of the original network. This can be very beneficial for analyzing sub-divided networks and potentially reducing complexity in dynamic online social environments.
Citations: 4
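As a simplified stand-in for the ND-based balancing, dealing nodes across groups in descending-degree order already keeps each group's degree distribution close to the original's. This round-robin scheme is only an illustration of the balancing goal, not the paper's algorithm.

```python
def balanced_partition(degrees, k):
    """Split nodes into k groups whose degree distributions approximate the
    original's: rank nodes by degree, then deal them out round-robin."""
    groups = [set() for _ in range(k)]
    ranked = sorted(degrees, key=degrees.get, reverse=True)
    for i, node in enumerate(ranked):
        groups[i % k].add(node)
    return groups

# 30 nodes whose degrees cycle through 1..10: every group receives one
# node of each degree, so the group mean degrees match exactly.
degrees = {node: (node % 10) + 1 for node in range(30)}
groups = balanced_partition(degrees, 3)
means = [sum(degrees[n] for n in g) / len(g) for g in groups]
print(means)
```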
Optimization of an electro-optical representation of the C. elegans connectome through neural network cluster analysis
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727828
A. Petrushin, L. Ferrara, A. Blau
Abstract: Using C. elegans as a model organism, we present an optimization strategy for reducing the spatial needs and power consumption of an optical connectome implementation. By means of a cluster analysis algorithm, the interconnectivity of 279 neurons can be subdivided into 3 groups. This clustering reveals 2 independent neural populations whose members interconnect only within their cluster-community and through a relay group of inter-cluster connections. Using this strategy, the expected spatial needs could be cut down by one fourth, thereby reducing the required light intensities by the same amount. A follow-up sub-partitioning of the individual clusters led to an additional power saving of up to 7%.
Citations: 0
A network-based approach to detect spammer groups
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727668
Q. Do, Alexey Zhilin, Caibre Zordan Pio Junior, Gaoxiang Wang, F. Hussain
Abstract: Online reviews are nowadays an important source of information for consumers evaluating online services and products before deciding which product and which provider to choose. Online reviews therefore have significant power to influence consumers' purchase decisions. Aware of this, an increasing number of companies have organized review spamming campaigns to promote their products and gain an advantage over their competitors by manipulating and misleading consumers. To help keep the Internet a reliable source of information, we propose a method that identifies both individual and group review spammers by assigning a suspicion score to each user. The proposed method is a network-based approach combined with clustering techniques. We demonstrate its efficiency and effectiveness on a real-world, manipulated dataset containing over 8000 restaurants and 600,000 restaurant reviews from the TripAdvisor website. We tested our method in three scenarios: it detected all spammers in two of them, but not all of them in the third.
Citations: 9
An inductive semi-supervised learning approach for the Local and Global Consistency algorithm
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727722
C. A. R. Sousa
Abstract: Graph-based semi-supervised learning (SSL) algorithms learn through a weighted graph generated from both labeled and unlabeled examples. Despite the effectiveness of these methods in a variety of application domains, most of them are transductive in nature and therefore incapable of providing generalization over the entire sample space. One of the most effective graph-based SSL algorithms is Local and Global Consistency (LGC), which is formulated as a convex optimization problem that balances fitness on labeled examples and smoothness on the weighted graph through a Laplacian regularizer term. In this paper, we provide a novel inductive procedure for the LGC algorithm, called Inductive Local and Global Consistency (iLGC). Through experiments on inductive SSL with a variety of benchmark data sets, we show that our method is competitive with the commonly used Nadaraya-Watson kernel regression when the LGC algorithm is applied as the base classifier.
Citations: 3
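For context, the transductive LGC that iLGC builds on fits in a few lines: iterate F <- alpha*S*F + (1-alpha)*Y with S the symmetrically normalized affinity matrix. The sketch below shows this standard propagation on a toy graph; it is not the paper's inductive extension, and the graph and alpha are illustrative choices.

```python
import math

def lgc(W, Y, alpha=0.9, iters=200):
    """Standard transductive LGC: iterate F <- alpha*S*F + (1-alpha)*Y,
    where S = D^{-1/2} W D^{-1/2} is the normalized affinity matrix."""
    n, c = len(W), len(Y[0])
    d = [sum(row) for row in W]
    S = [[W[i][j] / math.sqrt(d[i] * d[j]) if W[i][j] else 0.0
          for j in range(n)] for i in range(n)]
    F = [row[:] for row in Y]
    for _ in range(iters):
        F = [[alpha * sum(S[i][k] * F[k][j] for k in range(n))
              + (1 - alpha) * Y[i][j]
              for j in range(c)] for i in range(n)]
    return F

# A 4-node chain with the two endpoints labeled with opposite classes;
# the unlabeled middle nodes inherit the label of the nearer endpoint.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
Y = [[1, 0], [0, 0], [0, 0], [0, 1]]
labels = [row.index(max(row)) for row in lgc(W, Y)]
print(labels)  # [0, 0, 1, 1]
```

The transductive limitation the abstract mentions is visible here: `F` only assigns scores to the nodes already in `W`; scoring a brand-new point requires an inductive rule such as the one the paper proposes.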
An experimental study on joint modeling of mixed-bandwidth data via deep neural networks for robust speech recognition
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727253
Jianqing Gao, Jun Du, Changqing Kong, Huaifang Lu, Enhong Chen, Chin-Hui Lee
Abstract: We propose joint modeling strategies that leverage large-scale mixed-band training speech for recognition of both narrowband and wideband data based on deep neural networks (DNNs). We use conventional down-sampling and up-sampling schemes to convert between narrowband and wideband data, and we also explore DNN-based speech bandwidth expansion (BWE) to map acoustic features from narrowband to wideband speech. By arranging narrowband and wideband features at the input or output level of the BWE-DNN, and by combining down-sampled and up-sampled data, different DNNs can be established. Our experiments on a Mandarin speech recognition task show that the hybrid DNNs for joint modeling of mixed-band speech yield significant performance gains over separately well-trained narrowband and wideband models, with relative character error rate reductions of 7.9% and 3.9% on narrowband and wideband data, respectively. Furthermore, the proposed strategies consistently outperform other conventional DNN-based methods.
Citations: 7
Tagging Chinese microblogger via sparse feature selection
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727505
R. Shang, Xinyu Dai, Shujian Huang, Yi Li, Jiajun Chen
Abstract: In the new media era, users post messages that record their daily lives and express their opinions via social media platforms such as microblogs. Tagging users based on their user-generated content has recently become an attractive topic. Tags for a microblog user, as descriptions of his or her interests, concerns, or occupational characteristics, play an important role in user indexing, personalized recommendation, and so on. Previous works apply keyword extraction methods to capture the interests of users; however, keyword extraction struggles to give accurate results when the data is deficient and noisy. In this paper, we propose a novel method to tag users. First, we apply feature selection via a sparse classifier to generate preliminary tags for users. We then apply the feature selection method again to extend the tags. Finally, we refine the tags with a reranking strategy. We conduct our experiments on data from the most popular Chinese microblog (Sina Weibo). The experimental results show that our method improves performance significantly over other methods.
Citations: 0
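The "feature selection via a sparse classifier" step can be illustrated with plain L1-regularized least squares solved by proximal gradient descent (ISTA). The paper's actual model, features, and solver are not specified here, so the regularizer weight, step size, and toy data below are illustrative assumptions; the point is only that the L1 penalty zeroes out uninformative features.

```python
import math

def soft_threshold(z, t):
    """Proximal operator of the L1 norm."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def lasso_ista(X, y, lam=0.05, lr=0.5, iters=1000):
    """ISTA for min_w (1/2n)||Xw - y||^2 + lam * ||w||_1."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n for j in range(d)]
        w = [soft_threshold(w[j] - lr * grad[j], lr * lam) for j in range(d)]
    return w

# Toy design: only the first two features carry signal, so the L1
# penalty should drive the remaining weights toward zero.
X = [[math.sin(i), math.cos(i), math.sin(2 * i), math.cos(2 * i), math.sin(3 * i)]
     for i in range(100)]
y = [2 * row[0] - row[1] for row in X]
w = lasso_ista(X, y)
print([round(v, 2) for v in w])
```

The surviving nonzero coordinates play the role of the "preliminary tags": features the sparse model considers indicative of the target.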
Heterogeneous extreme learning machines
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727400
J. J. Valdés
Abstract: Developments in communication, sensor, and computing technologies are generating information at increasing rates, and the nature of the data is becoming highly heterogeneous. Accordingly, the objects under study are described by collections of variables of very different kinds (e.g. numeric, non-numeric, images, signals, videos, documents) with different degrees of imprecision and incompleteness. Many data mining and machine learning methods do not handle heterogeneity well, requiring variables of the same type and complete information (or imputation), and assuming no imprecision. Extreme learning machines (ELMs) are very interesting computational algorithms because of their structural simplicity, good performance, and speed. Extending their scope to process heterogeneous information may therefore increase their attractiveness as a modeling tool for addressing complex problems. ELMs are discussed in the context of heterogeneous data, and approaches for building ELMs capable of performing classification and regression tasks in such cases are presented. Their performance is illustrated with real-world classification and regression examples involving heterogeneous information, with scalar data described by nominal, ordinal, interval, ratio, and fuzzy variables, as well as entire empirical probability distributions as predictor variables.
Citations: 1
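The ELM core the paper extends is easy to state: the hidden layer is random and fixed, and only the linear output layer is fitted by least squares. The sketch below is the standard homogeneous ELM on a toy 1-D regression; the hidden size, ridge term, and task are assumed for illustration, and the paper's heterogeneous-variable machinery is not shown.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system Ax = b."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def elm_train(X, y, hidden=30, ridge=1e-6, seed=0):
    """Random tanh hidden layer; output weights via regularized least squares."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(w[j] * x[j] for j in range(d)) + bi)
          for w, bi in zip(W, b)] for x in X]
    # Normal equations: (H^T H + ridge*I) beta = H^T y
    HtH = [[sum(h[i] * h[j] for h in H) + (ridge if i == j else 0.0)
            for j in range(hidden)] for i in range(hidden)]
    Hty = [sum(h[i] * t for h, t in zip(H, y)) for i in range(hidden)]
    return W, b, solve(HtH, Hty)

def elm_predict(x, model):
    W, b, beta = model
    return sum(bt * math.tanh(sum(w[j] * x[j] for j in range(len(x))) + bi)
               for w, bi, bt in zip(W, b, beta))

# Toy regression: learn sin(x) on [-2, 2].
X = [[-2 + 4 * i / 40] for i in range(41)]
y = [math.sin(x[0]) for x in X]
model = elm_train(X, y)
err = max(abs(elm_predict(x, model) - t) for x, t in zip(X, y))
print(err)
```

Because training reduces to one linear solve, the speed the abstract highlights follows directly; heterogeneity then only requires redefining how the hidden layer maps mixed-type inputs to numbers.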
Nearest Neighbour Search using binary neural networks
2016 International Joint Conference on Neural Networks (IJCNN) Pub Date: 2016-07-24 DOI: 10.1109/IJCNN.2016.7727873
Demetrio Ferro, Vincent Gripon, Xiaoran Jiang
Abstract: Finding nearest neighbours in terms of Euclidean distance, Hamming distance, or another distance metric is a very common operation in computer vision and pattern recognition. To accelerate the search for the nearest neighbour in large collections, many methods rely on a coarse-to-fine approach. In this paper we propose to combine Product Quantization (PQ) and binary neural associative memories to perform the coarse search. Our motivation lies in the fact that the dimension of the neural-network representation associated with a set of k vectors is independent of k. We run experiments on the TEXMEX SIFT1M and MNIST databases and observe significant improvements in search complexity compared to raw PQ.
Citations: 9
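The PQ stage the authors build on can be sketched as follows. To keep the sketch short, the per-subspace codebooks are simply all training subvectors rather than k-means centroids, and the binary associative-memory stage of the paper is omitted; only the encode-then-lookup structure of PQ search is illustrated.

```python
def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_codebooks(data, m):
    """One codebook per subspace. For brevity these are all training
    subvectors; real PQ would run k-means in each subspace."""
    d = len(data[0]) // m
    return [[v[s * d:(s + 1) * d] for v in data] for s in range(m)]

def encode(vec, codebooks):
    """Quantize each subvector to the index of its nearest centroid."""
    d = len(vec) // len(codebooks)
    return tuple(min(range(len(cb)),
                     key=lambda j: sqdist(vec[s * d:(s + 1) * d], cb[j]))
                 for s, cb in enumerate(codebooks))

def adc_search(query, codes, codebooks):
    """Asymmetric distance computation: one lookup table per subspace,
    after which each database code costs only m table lookups."""
    d = len(query) // len(codebooks)
    tables = [[sqdist(query[s * d:(s + 1) * d], c) for c in cb]
              for s, cb in enumerate(codebooks)]
    dists = [sum(t[j] for t, j in zip(tables, code)) for code in codes]
    return min(range(len(dists)), key=dists.__getitem__)

# Six distinct 4-D vectors, split into m=2 subspaces of 2 dimensions each.
data = [[float(i), i + 1.0, 2.0 * i, float(i * i)] for i in range(6)]
cbs = build_codebooks(data, m=2)
codes = [encode(v, cbs) for v in data]
print(adc_search(data[3], codes, cbs))  # prints 3
```

The cost structure is the point: the lookup tables depend only on the query, so scanning n codes costs n*m additions instead of n full-dimensional distance computations.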