{"title":"Automatic detection of bike-riders without helmet using surveillance videos in real-time","authors":"Kunal Dahiya, Dinesh Singh, C. Mohan","doi":"10.1109/IJCNN.2016.7727586","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727586","url":null,"abstract":"In this paper, we propose an approach for automatic detection of bike-riders without helmet using surveillance videos in real time. The proposed approach first detects bike riders from surveillance video using background subtraction and object segmentation. Then it determines whether bike-rider is using a helmet or not using visual features and binary classifier. Also, we present a consolidation approach for violation reporting which helps in improving reliability of the proposed approach. In order to evaluate our approach, we have provided a performance comparison of three widely used feature representations namely histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), and local binary patterns (LBP) for classification. The experimental results show detection accuracy of 93.80% on the real world surveillance data. It has also been shown that proposed approach is computationally less expensive and performs in real-time with a processing time of 11.58 ms per frame.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116315512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep self-organizing reservoir computing model for visual object recognition","authors":"Zhidong Deng, Chengzhi Mao, Xiong Chen","doi":"10.1109/IJCNN.2016.7727351","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727351","url":null,"abstract":"Reservoir computing becomes increasingly a hot spot in recent years. In this paper, we propose a deep self-organizing reservoir computing model for visual object recognition. First, through combination of Kohonen's self-organizing map and SHESN network, we present a self-organizing SHESN (SO-SHESN). In the new model, we adopt the same mechanism of generating reservoir as SHESN, but McCulloch-Pitts type reservoir neuron is replaced with radial basis function neuron. Correspondingly, unsupervised competitive learning is exploited to train both input weights and reservoir weights of SO-SHESN. Second, we propose a deep SO-SHESN model through a stack of well-trained reservoir layers. In such a stacked structure, a novel trial-and-readout learning algorithm is used for pre-training of layer-wise reservoir, in which each layer is trained independently from each other. Finally, the experimental results obtained on MNIST benchmark dataset show that our SO-SHESN achieves the test recognition error rate of 5.66%, which improves classical ESN and SHESN by 6.44% and 1.74%, respectively. 
Furthermore, the test error rate of our deep SO-SHESN could reach up to 1.39%, which outperforms SO-SHESN with single reservoir layer by 4.27% and approximately approaches the state-of-the-art result of 1% among existing traditional machine learning approaches with non-CNN features.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126783829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accommodative neural filters","authors":"J. Lo, Yu Guo","doi":"10.1109/IJCNN.2016.7727490","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727490","url":null,"abstract":"By the fundamental neural filtering theorem, a properly trained recursive neural filter with fixed weights that processes only the measurement process generates recursively the conditional expectation of the signal process with respect to the joint probability distributions of the signal and measurement processes and any uncertain environmental process involved. This means that a recursive neural filter with fixed weights has the ability to adapt to the uncertain environmental parameter. The neural filter with this ability is called an accommodative neural filter. In this paper, we show that if the uncertain environmental process is observable from the measurement process, the accommodative neural filter outputs virtually the estimate of the signal process that would be generated by a non-adaptive minimal-variance filter as if the precise value of the uncertain environmental process were given. Numerical results comparing the accommodative neural filter and the existing non-adaptive filters each designed for a precise value of the environmental process confirm our theorem and show the advantages of the accommodative neural filter in both accuracy and efficiency.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126826294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forecasting wind power - an ensemble technique with gradual coopetitive weighting based on weather situation","authors":"André Gensler, B. Sick","doi":"10.1109/IJCNN.2016.7727855","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727855","url":null,"abstract":"The prediction of the power generation of wind farms is a non-trivial problem with increasing importance during the last decade due to the rapid increase of wind power generation in the power grid. The prediction task is commonly addressed using numerical weather predictions, statistical methods, or machine learning techniques. Various articles have shown that ensemble techniques for forecasting can yield better results regarding forecasting accuracy than single techniques alone. Typical ensembles make use of a parameter, or data diversity approach to build the models. In this article, we propose a novel ensemble technique using both, cooperative and competitive characteristics of ensembles to gradually adjust the influences of single forecasting algorithms in the ensemble based on their individual strengths using a “coopetitive” weighting formula. The observed quality of the models during training is used to adaptively weigh the models based on the location in the input data space (i.e., depending on the weather situation). We compute the overall weights for a particular weather situation using both, a spatial as well as a global weighting term. The experimental evaluation is performed on a data set consisting of data from 45 wind farms, which is made publicly available. We demonstrate that the technique is among the best performing algorithms compared to other state-of-the-art algorithms and ensembles. 
Furthermore, the practical applicability of the proposed technique is discussed.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126911836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal output synchronization of nonlinear multi-agent systems using approximate dynamic programming","authors":"H. Modares, F. Lewis, A. Davoudi","doi":"10.1109/IJCNN.2016.7727751","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727751","url":null,"abstract":"Optimal output synchronization of multi-agent leader-follower systems is considered. The agents are assumed heterogeneous so that the dynamics may be non-identical. An optimal control protocol is designed for each agent based on the leader state and the agent local state. A distributed observer is designed to provide the leader state for each agent. A model-free approximate dynamic programming algorithm is then developed to solve the optimal output synchronization problem online in real time. No knowledge of the agents' dynamics is required. The proposed approach does not require explicitly solving of the output regulator equations, though it implicitly solves them by imposing optimality. A simulation example verifies the suitability of the proposed approach.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127103397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A firefly algorithm for modular granular neural networks optimization applied to iris recognition","authors":"D. Sánchez, P. Melin, Juan Martín Carpio Valadez, Héctor José Puga Soberanes","doi":"10.1109/IJCNN.2016.7727191","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727191","url":null,"abstract":"In this paper a Modular Neural Network (MNN) with a granular approach optimization is proposed, where a firefly optimization is proposed to design a optimal MNN architecture. The proposed method can perform the optimization of some parameters such as; number of sub modules, percentage of information for the training phase and number of hidden layers (with their respective number of neurons) for each sub module. The proposed method is applied to human recognition based on iris biometrics. A benchmark database is used to prove the efficiency and effectiveness of the proposed method, using as objective function the minimization of the error of recognition.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127258779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"kNN ensembles with penalized DTW for multivariate time series imputation","authors":"Stefan Oehmcke, O. Zielinski, Oliver Kramer","doi":"10.1109/IJCNN.2016.7727549","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727549","url":null,"abstract":"The imputation of partially missing multivariate time series data is critical for its correct analysis. The biggest problems in time series data are consecutively missing values that would result in serious information loss if simply dropped from the dataset. To address this problem, we adapt the k-Nearest Neighbors algorithm in a novel way for multivariate time series imputation. The algorithm employs Dynamic Time Warping as distance metric instead of point-wise distance measurements. We preprocess the data with linear interpolation to create complete windows for Dynamic Time Warping. The algorithm derives global distance weights from the correlation between features and consecutively missing values are penalized by individual distance weights to reduce error transfer from linear interpolation. Finally, efficient ensemble methods improve the accuracy. Experimental results show accurate imputations on datasets with a high correlation between features. Further, our algorithm shows better results with consecutively missing values than state-of-the-art algorithms.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126399888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multilayer perceptron with Cuckoo search in water level prediction for flood forecasting","authors":"Suwannee Phitakwinai, S. Auephanwiriyakul, N. Theera-Umpon","doi":"10.1109/IJCNN.2016.7727243","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727243","url":null,"abstract":"The feed forward multilayer perceptron (MLP) with the Cuckoo search (CS) algorithm, called CS-MLP is implemented to predict 7-hours-ahead water level of the Ping river at the downtown area of Chiang Mai, Thailand. The CS-MLP model prediction performance is compared with the regular multilayer perceptron (MLP) and the results from the previous work. The CS-MLP is the best among them with the mean absolute error on the blind test data set of 6.836 cm.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121528077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ELM based multiple kernel k-means with diversity-induced regularization","authors":"Yang Zhao, Y. Dou, Xinwang Liu, Teng Li","doi":"10.1109/IJCNN.2016.7727538","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727538","url":null,"abstract":"Multiple-kernel k-means (MKKM) clustering has demonstrated good clustering performance by combining pre-specified kernels. In this paper, we argue that deep relationships within data and the complementary information among them can improve the performance of MKKM. To illustrate this idea, we propose a diversity-induced MKKM algorithm with extreme learning machine (ELM)-based feature extracting method. First, ELM, which has randomly chosen weights of hidden and output nodes, is applied to thoroughly extract features from data by generating different numbers of hidden nodes and using different functions. Second, an MKKM algorithm with diversity-induced regularization is utilized to explore the complementary information among kernels constructed from features. The problem could be solved efficiently by alternating optimization. Experimental results demonstrate that the proposed method outperforms state-of-the-art kernel methods.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129096896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient computation of the Levenberg-Marquardt algorithm for feedforward networks with linear outputs","authors":"P. Chazal, M. McDonnell","doi":"10.1109/IJCNN.2016.7727182","DOIUrl":"https://doi.org/10.1109/IJCNN.2016.7727182","url":null,"abstract":"An efficient algorithm for the calculation of the approximate Hessian matrix for the Levenberg-Marquardt (LM) optimization algorithm for training a single-hidden-layer feedforward network with linear outputs is presented. The algorithm avoids explicit calculation of the Jacobian matrix and computes the gradient vector and approximate Hessian matrix directly. It requires approximately 1/N the floating point operations of other published algorithms, where N is the number of network outputs. The required memory for the algorithm is also less than 1/N of the memory required for algorithms explicitly computing the Jacobian matrix. We applied our algorithm to two large-scale classification problems - the MNIST and the Forest Cover Type databases. Our results were within 0.5% of the best performance of systems using pixel values as inputs to a feedforward network for the MNIST database. Our results were achieved with a much smaller network than other published results. We achieved state-of-the-art performance for the Forest Cover Type database.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"632 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132755043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}