{"title":"Lighting Direction Estimation of a Shaded Image by a Surface-input Regression Network","authors":"C. Chow, S. Y. Yuen","doi":"10.1109/IJCNN.2007.4370955","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4370955","url":null,"abstract":"In augmented reality (AR), the lighting direction plays an important role to the quality of the augmented scene. The corresponding lighting direction estimation is a challenging problem as it depends on an extra unknown variable -reflectance of the material. In this article, we propose to estimate the lighting direction by a neural network (NN) which is trained by a sample set. Since the empirical reflectance of a captured scene is in form of scattered points, we unify the representation of reflectance as a two dimensional polynomials. Moreover, a novel neural network model is presented to construct the mapping from reflectance to lighting direction. Contrary to the existing NNs, the proposed model accepts surface input pattern in which the drawbacks of feature vector are overcome. Experimental results of 2000 lighting estimations with unknown reflectances are presented to demonstrate the performance of the proposed algorithm.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"24 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128451853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning and Memory of Spatial Relationship by a Neural Network with Sparse Features","authors":"Jun Miao, Lijuan Duan, Laiyun Qing, Wen Gao, Yiqiang Chen","doi":"10.1109/IJCNN.2007.4371293","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371293","url":null,"abstract":"Research on efficiency of learning and memory is very important for theoretic exploration and practical application. This paper gives a discussion on learning and memory of spatial relationships between initial positions and object positions by a neural network with sparse features. As an example, the paper discusses how the neural network learns the visual contexts between human eye centers and random initial positions surrounding the eye centers in images with as little memory as possible. Some sparse features are designed and distances between initial positions and the labeled eye centers in horizontal and vertical directions are learned and memorized respectively. Such a system could predict object positions from a new initial position according to the contexts that the neural network learned. A group of experiments on efficiency of learning and memory with sparse features in several single and integrated scales are analyzed and discussed.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128606894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Facilitatory Neuronal Dynamics for Delay Compensation","authors":"Jaerock Kwon, Y. Choe","doi":"10.1109/IJCNN.2007.4371272","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371272","url":null,"abstract":"Our earlier work has suggested that neuronal transmission delay may cause serious problems unless a compensation mechanism exists. In that work, facilitating neuronal dynamics was found to be effective in battling delay (the facilitating activation network model, or FAN). A systematic analysis showed that the previous FAN model has a subtle problem especially when high facilitation rates are used. We derived an improved facilitating dynamics at the neuronal level to overcome this limitation. In this paper, we tested our proposed approach in 2D pole balancing controllers, where it was shown to perform better than the previous FAN model. We also systematically tested the correlation between delay duration on the one hand and facilitation rate that effectively overcome the increasing delay on the other hand. Finally, we investigated the differential utilization of facilitating dynamics in sensory vs. motor neurons and found that motor neurons utilize the facilitating dynamics more than the sensory neurons. These findings are expected to help us better understand the role of facilitation in natural and artificial agents.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129041840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regression in the Presence Missing Data Using Ensemble Methods","authors":"Mostafa M. Hassan, A. Atiya, N. E. Gayar, R. El-Fouly","doi":"10.1109/IJCNN.2007.4371139","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371139","url":null,"abstract":"We consider the problem of missing data, and develop ensemble-network models for handling the missing data. The proposed method is based on utilizing the inherent uncertainty of the missing records in generating diverse training sets for the ensemble's networks. The proposed method is based on generating the missing values using their probability density. We repeat this procedure many time thereby creating several complete data sets. A network is trained for each of these data sets, therefore obtaining an ensemble of networks. Several variants are proposed, including the univariate approach and the multivariate approach, which differ in the way missing values are generated. Simulation results confirm the general superiority of the proposed methods compared to the conventional approaches.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129060837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Semi-supervised SVM with Genetic Algorithm","authors":"M. M. Adankon, M. Cheriet","doi":"10.1109/IJCNN.2007.4371235","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371235","url":null,"abstract":"Support vector machine (SVM) is an interesting classifier that has an excellent power of generalization. In this paper, we consider SVM in semi-supervised learning. We propose to use an additional criterion with the standard formulation of the transductive SVM for reinforcing the classifier regularization. Also, we use a genetic algorithm for optimizing the objective function, since the transductive SVM yields a non-convex problem. We tested our algorithm on artificial and real data, which gives promising results in comparison with Joachims' algorithm known as SVMlight TSVM.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124567641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Infrared Flame Detection System Using Multiple Neural Networks","authors":"J. Huseynov, S. Baliga, Alan Widmer, Z. Boger","doi":"10.1109/IJCNN.2007.4371026","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371026","url":null,"abstract":"A model for an infrared (IR) flame detection system using multiple artificial neural networks (ANN) is presented. The present work offers significant improvements over our previous design (Huseynov et al., 2005). Feature extraction only in the relevant frequency band using joint time-frequency analysis yields an input to a series of conjugate-gradient (CG) method-based ANNs. Each ANN is trained to distinguish all hydrocarbon flames from a particular type of environmental nuisance and ambient noise. Signal saturation caused by the increased intensity of IR sources at closer distances is resolved by adjustable gain control.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129882753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Input Variables Selection Using Mutual Information for Neuro Fuzzy Modeling with the Application to Time Series Forecasting","authors":"M. Yousefi, M. Mirmomeni, C. Lucas","doi":"10.1109/IJCNN.2007.4371115","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371115","url":null,"abstract":"This paper presents a methodology to select input variables for time series prediction. A main motivation is to find some proper input variables which describe the time series dynamics properly. It is shown that even when the choice of input variables is confined to the lagged values of the process to be predicted, a nonlinear analysis of the most significant factors is crucial for improving the prediction quality. The proposed method is used to select the appropriate input variables for neuro fuzzy models utilized for time series prediction benchmark in NN3 competition as well as a second benchmark to show the generality of the claims. Results depict the effectiveness of the proposed method in proper input selection for neuro fuzzy models for prediction task.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130413929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time Series Prediction as a Problem of Missing Values: Application to ESTSP2007 and NN3 Competition Benchmarks","authors":"A. Sorjamaa, A. Lendasse","doi":"10.1109/IJCNN.2007.4371429","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371429","url":null,"abstract":"In this paper, time series prediction is considered as a problem of missing values. A method for the determination of the missing time series values is presented. The method is based on two projection methods: a nonlinear one (Self-Organized Maps) and a linear one (Empirical Orthogonal Functions). The presented global methodology combines the advantages of both methods to get accurate candidates for the prediction values. The methods are applied to two time series competition datasets.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"642 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123964203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering of symbolic interval data based on a single adaptive L1 distance","authors":"F. D. Carvalho, Julio T. Pimentel, Lucas X. T. Bezerra","doi":"10.1109/IJCNN.2007.4370959","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4370959","url":null,"abstract":"The recording of symbolic interval data has become a common practice with the recent advances in database technologies. This paper introduces a dynamic clustering method to partitioning symbolic interval data. This method furnishes a partition and a prototype for each cluster by optimizing an adequacy criterion that measures the fitting between the clusters and their representatives. To compare symbolic interval data, the method uses a single adaptive L1 distance that at each iteration changes but is the same for all the clusters. Experiments with real and synthetic symbolic interval data sets showed the usefulness of the proposed method.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114234487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Near Optimal Output-Feedback Control of Nonlinear Discrete-time Systems in Nonstrict Feedback Form with Application to Engines","authors":"P. Shih, B. Kaul, S. Jagannathan, J. Drallmeier","doi":"10.1109/IJCNN.2007.4370989","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4370989","url":null,"abstract":"A novel reinforcement-learning based output-adaptive neural network (NN) controller, also referred as the adaptive-critic NN controller, is developed to track a desired trajectory for a class of complex nonlinear discrete-time systems in the presence of bounded and unknown disturbances. The controller includes an observer for estimating states and the outputs, critic, and two action NNs for generating virtual, and actual control inputs. The critic approximates certain strategic utility function and the action NNs are used to minimize both the strategic utility function and their outputs. All NN weights adapt online towards minimization of a performance index, utilizing gradient-descent based rule. A Lyapunov function proves the uniformly ultimate boundedness (UUB) of the closed-loop tracking error, weight, and observer estimation. Separation principle and certainty equivalence principles are relaxed; persistency of excitation condition and linear in the unknown parameter assumption is not needed. The performance of this controller is evaluated on a spark ignition (SI) engine operating with high exhaust gas recirculation (EGR) levels and experimental results are demonstrated.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116318143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}