{"title":"Aircraft sensor estimation for fault tolerant flight control system using fully connected cascade neural network","authors":"Saed Hussain, M. Mokhtar, J. Howe","doi":"10.1109/IJCNN.2013.6706763","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706763","url":null,"abstract":"Flight control systems that are tolerant to failures can increase the endurance of an aircraft in case of a failure. The two major types of failure are sensor and actuator failures. This paper focuses on the failure of the gyro sensors in an aircraft. The neuron by neuron (NBN) learning algorithm, which is an improved version of the Levenberg-Marquardt (LM) algorithm, is combined with the fully connected cascade (FCC) neural network architecture to estimate an aircraft's sensor measurements. Compared to other neural networks and learning algorithms, this combination can produce good sensor estimates with relatively few neurons. The estimators are developed and evaluated using flight data collected from the X-Plane flight simulator. The developed sensor estimators can replicate a sensor's measurements with as little as 2 neurons. The results reflect the combined power of the NBN algorithm and the FCC neural network architecture.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125293416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constructing mobile-oriented catalog in m-commerce using LDA-based self-adaptive genetic algorithm","authors":"Hung-Min Hsu, R. Chang, Jan-Ming Ho","doi":"10.1109/IJCNN.2013.6707112","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707112","url":null,"abstract":"The purpose of this paper is to develop a method to recommend products to customer via mobile devices. Collaborative recommendation is known as an effective way to recommend products. In this paper, we use the concept of collaborative recommendation to develop Mobile-Oriented Catalog (MOC). The proposed method is made from aggregating similar purchasing records to optimize combination of goods on mobile devices. This paper illustrates how to design attractive and collaborative catalog to recommend items by using Latent Dirichlet Allocation (LDA) based self-adaptive genetic algorithm (LDA-SAGA). LDA-SAGA is consisted of topic modeling concept and self-adaptive genetic algorithm. We use LDA as our topic modeling algorithm to construct MOC as a result that it is the simplest topic model. Our experimental evaluation on synthetic and real data shows that using preference as topic concept is effective. LDA-SAGA is especially outstanding with large number of customers and products. Finally, we compare the MOC which is used on mobile application (APP) of Amazon with the one used on Taobao and discuss the characteristics of their design. Different design of user interface on APP can lead to different scope of fitness value which is capable of explaining different market strategies of Taobao and Amazon.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122465197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neuromorphic adaptations of restricted Boltzmann machines and deep belief networks","authors":"B. Pedroni, Srinjoy Das, E. Neftci, K. Kreutz-Delgado, G. Cauwenberghs","doi":"10.1109/IJCNN.2013.6707067","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707067","url":null,"abstract":"Restricted Boltzmann Machines (RBMs) and Deep Belief Networks (DBNs) have been demonstrated to perform efficiently on a variety of applications, such as dimensionality reduction and classification. Implementation of RBMs on neuromorphic platforms, which emulate large-scale networks of spiking neurons, has significant advantages from concurrency and low-power perspectives. This work outlines a neuromorphic adaptation of the RBM, which uses a recently proposed neural sampling algorithm (Buesing et al. 2011), and examines its algorithmic efficiency. Results show the feasibility of such alterations, which will serve as a guide for future implementation of such algorithms in neuromorphic very large scale integration (VLSI) platforms.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122993067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image sequence recognition with active learning using uncertainty sampling","authors":"Masatoshi Minakawa, B. Raytchev, Toru Tamaki, K. Kaneda","doi":"10.1109/IJCNN.2013.6707060","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707060","url":null,"abstract":"In this paper we consider the case when huge datasets need to be labeled efficiently for learning. It is assumed that the data can be naturally organized into many small groups, called chunklets, each one of which contains data from the same class, and many chunklets are available from each class. Each chunklet exhibits some of the typical variation representative for the class. We investigate how active learning methods based on uncertainty sampling perform in this setting, and whether any gains can be expected in comparison with random sampling. We also propose a novel strategy for selecting which chunklets to be selected for labeling. Experiments with 7containing variation in pose, expression and illumination conditions illustrate the proposed method.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131440540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimizing validation error with respect to network size and number of training epochs","authors":"Rohit Rawat, Jignesh K. Patel, M. Manry","doi":"10.1109/IJCNN.2013.6706919","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706919","url":null,"abstract":"A batch training algorithm for the multilayer perceptron is developed that optimizes validation error with respect to two parameters. At the end of each training epoch, the method temporarily prunes the network and calculates the validation error versus number of hidden units curve in one pass through the validation data. Since, pruning is done at each epoch, and the best networks are saved, we optimize validation error over the number of hidden units and the number of epochs simultaneously. The number of required multiplies for the algorithm has been analyzed. The method has been compared to others in simulations and has been found to work very well.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131559035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spatial selective visual attention pattern recognition method based on joint short SSVEP","authors":"Songyun Xie, Fangshi Zhu, K. Obermayer, P. Ritter, Linan Wang","doi":"10.1109/IJCNN.2013.6706872","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706872","url":null,"abstract":"Spatial selective attention pattern recognition plays a significant role in specific people's (e.g.: pilot's) state monitoring. Steady-State Visual Evoked Potentials (SSVEP) were recorded from the scalp of 6 subjects who were cued to attend to a flickering sequence displayed in one visual field while ignoring a similar one with a different flickering rate in the opposite field. The SSVEP to either flickering stimulus was enhanced when attention was lead to the same direction rather than to the opposite direction. The most significant enlargement is generally located on the posterior scalp contralateral to the visual field of stimulation. This attention-caused amplitude enhancement of SSVEP can be used to measure the attention shifting. In this paper, we developed an algorithm to extract short SSVEP, selectively combine them to form a joint temporal spatial selective attention feature, and use Support Vector Machine (SVM) to classify different attention pattern joint features. By segmenting the long single trial SSVEP (12s) data into short pieces (1s), we are able to largely decrease the training time while still keeping a high recognition accuracy (>93%) for most subjects, which makes it possible to monitor spatial selective attention on time.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131609821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variational learning of finite Beta-Liouville mixture models using component splitting","authors":"Wentao Fan, N. Bouguila","doi":"10.1109/IJCNN.2013.6707025","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707025","url":null,"abstract":"Recently, finite Beta-Liouville mixture models have proved to be an effective and powerful knowledge representation and inference engine in several machine learning and data mining applications. In this paper, we propose a component splitting and local model selection method to address the problem of learning and selecting finite Beta-Liouville mixture models in an incremental variational way. Within the proposed principled variational learning framework, all the involved parameters and model complexity (i.e. the number of mixture components) can be estimated simultaneously in a closed-form. We demonstrate the effectiveness of the proposed approach through both synthetic data as well as two challenging real-world applications namely human activities modeling and recognition, and facial expressions recognition.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121272133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting and labeling representative nodes for network-based semi-supervised learning","authors":"Bilzã Araújo, Liang Zhao","doi":"10.1109/IJCNN.2013.6706948","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706948","url":null,"abstract":"Network-based Semi-Supervised Learning (NBSSL) propagates labels in networks constructed from the original vector-based data sets taking advantage of the network topology. However, the NBSSL classification performance often varies according to the representativeness of the labeled data instances. Herein, we address this issue. We adopt heuristic criteria for selecting data items for manual labeling based on complex networks centrality measures. The numerical analysis are performed on Girvan and Newman homogeneous networks and Lancichinetti-Fortunato-Radicchi heterogeneous networks. Counterintuitively, we found that the highly connective nodes (hubs) are usually not representative, in the sense that random samples performs as well as them or even better. Other than expected, nodes with high clustering coefficient are good representatives of the data in homogeneous networks. On the other hand, in heterogeneous networks, nodes with high betweenness are the good representatives. A high clustering coefficient means that the node lies in a much connected motif (clique) and a high betweenness means that the node lies interconnecting modular structures. Moreover, aggregating the complex networks measures through Principal Components Analysis, we observed that the second principal component (Z2) exhibits potentially promising properties. It appears that Z2 is able to extract discriminative characteristics allowing finding good representatives of the data. Our results reveal that the performance of the NBSSL can be significantly improved by finding and labeling representative data instances.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127789034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement of a neural-fuzzy motion detection vision model for complex scenario conditions","authors":"M. Murguia, Graciela Ramírez Alonso, Sergio Gonzalez-Duarte","doi":"10.1109/IJCNN.2013.6706734","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706734","url":null,"abstract":"Motion detection represents a challenging issue in artificial vision systems. Besides detection of movement in normal scenario conditions robust systems must deal with other non-normal conditions. We propose the improvement of a former neuro-fuzzy motion detection method to face drastic illumination changes, gradual illumination conditions, moving background and scene composition changes. The improvements include adaptive learning rates as well as the inclusion of new fuzzy rules. Experimental findings over several video sequences verify that the improvements outperform the performance of the original method in the non-normal conditions without affecting the performance under normal conditions.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134511927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering iOS executable using self-organizing maps","authors":"Fang Yu, Shin-Ying Huang, Li-ching Chiou, R. Tsaih","doi":"10.1109/IJCNN.2013.6706728","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706728","url":null,"abstract":"We pioneer the study on applying both SOMs and GHSOMs to cluster mobile apps based on their behaviors, showing that the SOM family works well for clustering samples with more than ten thousands of attributes. The behaviors of apps are characterized by system method calls that are embedded in their executable, but may not be perceived by users. In the data preprocessing stage, we propose a novel static binary analysis to resolve and count implicit system method calls of iOS executable. Since an app can make thousands of system method calls, it is needed a large dimension of attributes to model their behaviors faithfully. On collecting 115 apps directly downloaded from Apple app store, the analysis result shows that each app sample is represented with 18000+ kinds of methods as their attributes. Theoretically, such a sample representation with more than ten thousand attributes raises a challenge to traditional clustering mechanisms. However, our experimental result shows that apps that have similar behaviors (due to having been developed from the same company or providing similar services) can be clustered together via both SOMs and GHSOMs.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133120535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}