{"title":"Building a Family of Neural Networks using Symmetry as a Foundation","authors":"R. Neville, Liping Zhao","doi":"10.1109/IJCNN.2007.4370922","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4370922","url":null,"abstract":"In order to perform a function mapping task, a neural network needs two supporting mechanisms: input and output training vectors, and a training regime. A new approach is proposed for generating a family of neural networks that performs a set of related functions. Within a family, only one network needs to be trained to perform an input-output function mapping task; the other networks can be derived from this trained base network without training. The base net thus acts as a generator of the derived nets. The proposed approach builds on three mathematical foundations: (1) symmetry, for defining the relationship between functions; (2) weight transformations, for generating a family of networks; (3) a Euclidean distance function, for measuring the symmetric relationships between the related functions. The proposed approach provides a formal foundation for systemic information reuse in ANNs.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121612173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing SVR Hyperparameters via Fast Cross-Validation using AOSVR","authors":"Masayuki Karasuyama, R. Nakano","doi":"10.1109/IJCNN.2007.4371126","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371126","url":null,"abstract":"The performance of support vector regression (SVR) depends heavily on its hyperparameters, such as the insensitive zone thickness, the penalty factor, and kernel parameters. A method called MCV-SVR was previously proposed, which optimizes SVR hyperparameters so that cross-validation error is minimized. However, the computational cost of CV is usually high. In this paper we apply accurate online support vector regression (AOSVR) to the MCV-SVR cross-validation procedure. AOSVR enables an efficient update of a trained SVR function when a sample is removed from the training data. We show that AOSVR dramatically accelerates MCV-SVR. Moreover, our experiments using real-world data showed that our faster MCV-SVR generalizes better than other existing methods such as Bayesian SVR or practical settings.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114748173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text Representations for Text Categorization: A Case Study in Biomedical Domain","authors":"Man Lan, C. Tan, Jian Su, H. Low","doi":"10.1109/IJCNN.2007.4371361","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371361","url":null,"abstract":"In the vector space model (VSM), textual documents are represented as vectors in the term space. There are therefore two issues in this representation: (1) what a term should be, and (2) how to weight a term. This paper examined ways to represent text from these two aspects to improve the performance of text categorization. Different representations were evaluated using SVM on three biomedical corpora. The controlled experiments showed that the straightforward use of named entities as terms in the VSM does not improve performance over the bag-of-words representation. On the other hand, the term weighting method slightly improved performance. To further improve text categorization, however, more advanced techniques and more effective use of natural language processing for text representation appear to be needed.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"230 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124535809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Self-tuning Controller for Real-time Voltage Regulation","authors":"Weiming Li, Xiao-Hua Yu","doi":"10.1109/IJCNN.2007.4371267","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371267","url":null,"abstract":"In this research, a self-tuning controller based on a multi-layer feed-forward neural network is developed for real-time output voltage regulation of a class of DC power supplies. The neural network based controller has the advantage of adaptive learning ability and can operate under fluctuating input voltage and load current. The Levenberg-Marquardt back-propagation training algorithm is used in computer simulation. The neural network controller is implemented and tested on hardware using a DSP (digital signal processor). Experimental results show that this neural network based approach outperforms the conventional analog controller in terms of both line regulation and load regulation.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124074075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stability of Cohen-Grossberg Neural Networks with Nonnegative Periodic solutions","authors":"Tianping Chen, Yanchun Bai","doi":"10.1109/IJCNN.2007.4370962","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4370962","url":null,"abstract":"In this paper, we discuss nonnegative periodic solutions for generalized Cohen-Grossberg neural networks. Without assuming strict positivity and boundedness of the amplification functions, the dynamics of periodic Cohen-Grossberg neural networks are studied. By applying a direct method, sufficient conditions guaranteeing the existence and global asymptotic stability of a nonnegative periodic solution are derived. Moreover, the criterion does not require the amplification functions to be bounded above and below, nor does it depend on the external inputs.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127758908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using SOM to estimate optical inherent properties from remote sensing reflectance","authors":"A. Chazottes, M. Crépon, S. Thiria","doi":"10.1109/IJCNN.2007.4371416","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371416","url":null,"abstract":"This article presents a neural network classifier able to retrieve the optical properties of four ocean constituents from remote sensing reflectance. When comparing this model to some standard algorithms, we found that the neural network gives the best performance.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125577810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Ensembles of Neural Networks to Improve Automatic Relevance Determination","authors":"Yu Fu, A. Browne","doi":"10.1109/IJCNN.2007.4371195","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371195","url":null,"abstract":"Automatic relevance determination (ARD) is an efficient technique for inferring the relevance of input features with respect to their ability to predict the target output for a task. ARD optimizes the hyperparameters to maximize the evidence. This optimization can cause the hyperparameters of some relevant features to tend towards infinity, so that these features are inferred as irrelevant by an ARD model. This overfitting of relevance parameters causes feature relevance determinations to be unstable and unreliable. Neural network ensemble methods can exploit the diversity between ensemble members to reduce this uncertainty and generate a more reliable determination of input feature relevancies. In our experiments, input features were properly grouped by relevance level using ensemble relevance prediction.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125950178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concept Description - A Fresh Look","authors":"Cecilia Sönströd, U. Johansson","doi":"10.1109/IJCNN.2007.4371336","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371336","url":null,"abstract":"The main purpose of this paper is to look into the data mining task of concept description, for which several rather different definitions exist. We argue for the definition used by CRISP-DM, where the overall goal is expressed as \"gaining insights\". Based on this, we propose that the two most important criteria for concept description models are accuracy and comprehensibility. The demand for comprehensibility rules out a straightforward use of many high-accuracy predictive modeling techniques, e.g. neural networks. Instead, we introduce rule extraction from predictive models as an alternative technique for concept description. In the experimentation, we show, using ten publicly available data sets, that the rule extractor used is clearly able to produce accurate and comprehensible descriptions. In addition, we discuss how concept description performance could be measured to capture both accuracy and comprehensibility. Comprehensibility is often translated into size; i.e. a smaller model is deemed more comprehensible. In practice, however, it would probably make more sense to treat comprehensibility as a binary property: the description is either comprehensible or not. Regarding accuracy, we argue that accuracies obtained on unseen data provide better information than accuracy on the entire data set. The reason is not that the model should be used for prediction, but that concepts found in this way are more likely to be general, and thus more informative.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131990406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Network Deinterlacing Using Multiple Fields and Field-MSEs","authors":"Hyunsoo Choi, Chulhee Lee","doi":"10.1109/IJCNN.2007.4371072","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371072","url":null,"abstract":"Generally, deinterlacing algorithms can be classified as either intra methods or inter methods. Intra methods interpolate missing lines by using surrounding pixels in the current field. Inter methods interpolate missing lines by using pixels and the motion information of multiple fields. Neural network deinterlacing that uses multiple fields has been proposed; it provides improved performance compared to existing neural network deinterlacing algorithms that use a single field. However, when adjacent fields are very different, neural network deinterlacing that uses multiple fields may not perform well. To address this problem, we propose using field-MSE values as additional inputs. These MSE values provide helpful information so that the networks can account for field differences when using multiple fields. Experimental results show that the proposed algorithm improves performance.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134017544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Modified RBF Neural Network in Pattern Recognition","authors":"Min Han, Wei Guo, Yunfeng Mu","doi":"10.1109/IJCNN.2007.4371356","DOIUrl":"https://doi.org/10.1109/IJCNN.2007.4371356","url":null,"abstract":"This paper presents a modified radial basis function (RBF) neural network for pattern recognition problems, which uses a hybrid learning algorithm to adaptively adjust the structure of the network. Two strategies are used to balance network complexity and accuracy: one is a modified \"novelty\" condition for creating a new neuron in the hidden layer; the other is a pruning technique for removing redundant neurons and the corresponding connections. To verify the performance of the modified network, two pattern recognition simulations are completed. One is a two-class pattern recognition problem, and the other is a real-world problem, internal component recognition in the field of architecture engineering. Simulation results, including the final number of hidden neurons, error, and accuracy, obtained with the proposed method are compared with the performance of the radial basis functional link network, the resource allocating network, and the RBF neural network with a generalized competitive learning algorithm. It can be concluded that the proposed network has a more concise architecture, higher classification accuracy, and shorter running time.","PeriodicalId":350091,"journal":{"name":"2007 International Joint Conference on Neural Networks","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134124655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}