2007 International Joint Conference on Neural Networks: Latest Publications

Building a Family of Neural Networks using Symmetry as a Foundation
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4370922
R. Neville, Liping Zhao
Abstract: In order to perform a function mapping task, a neural network needs two supporting mechanisms: an input and an output training vector, and a training regime. A new approach is proposed for generating a family of neural networks that performs a set of related functions. Within a family, only one network needs to be trained to perform an input-output function mapping task; the other networks can be derived from this trained base network without training. The base net thus acts as a generator of the derived nets. The proposed approach builds on three mathematical foundations: (1) symmetry, for defining the relationship between functions; (2) weight transformations, for generating a family of networks; (3) the Euclidean distance function, for measuring the symmetric relationships between the related functions. The proposed approach provides a formal foundation for systematic information reuse in ANNs.
Citations: 0
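The base-net/derived-net idea can be illustrated with a toy sketch (the tanh network and all weight values below are illustrative assumptions, not taken from the paper): for a single-input MLP computing f(x), negating only the input-to-hidden weights yields, with no retraining, a derived network that computes the reflected function f(-x).

```python
import math

def forward(x, w_in, b_h, w_out, b_out):
    # One-hidden-layer tanh network with a single input and output.
    h = [math.tanh(w * x + b) for w, b in zip(w_in, b_h)]
    return sum(wo * hi for wo, hi in zip(w_out, h)) + b_out

# A "trained" base network (weights are illustrative assumptions).
w_in, b_h = [1.2, -0.7], [0.1, 0.3]
w_out, b_out = [0.8, -1.1], 0.05

# Derived network for g(x) = f(-x): negate the input weights only.
w_in_ref = [-w for w in w_in]

assert abs(forward(-0.5, w_in, b_h, w_out, b_out)
           - forward(0.5, w_in_ref, b_h, w_out, b_out)) < 1e-12
```

The same pattern extends to other symmetry operations, e.g. negating the output weights and bias reflects the function in the output axis.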
Optimizing SVR Hyperparameters via Fast Cross-Validation using AOSVR
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371126
Masayuki Karasuyama, R. Nakano
Abstract: The performance of support vector regression (SVR) depends strongly on its hyperparameters, such as the insensitive zone thickness, the penalty factor, and the kernel parameters. A method called MCV-SVR was previously proposed, which optimizes SVR hyperparameters so that the cross-validation error is minimized. However, the computational cost of CV is usually high. In this paper we apply accurate online support vector regression (AOSVR) to the MCV-SVR cross-validation procedure. AOSVR enables an efficient update of a trained SVR function when a sample is removed from the training data. We show that AOSVR dramatically accelerates MCV-SVR. Moreover, our experiments on real-world data showed that our faster MCV-SVR generalizes better than other existing methods such as Bayesian SVR or the practical setting approach.
Citations: 10
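The quantity that the MCV-SVR loop minimizes can be sketched as follows. To stay self-contained, a Nadaraya-Watson kernel regressor stands in for the trained SVR function, with an RBF width gamma and an epsilon-insensitive loss; AOSVR's contribution (not shown) is replacing the full refit inside the leave-one-out loop with an incremental update when one sample is removed.

```python
import math

def rbf_kernel(a, b, gamma):
    return math.exp(-gamma * (a - b) ** 2)

def nw_predict(x, xs, ys, gamma):
    # Nadaraya-Watson kernel regression stands in for the trained SVR function.
    ws = [rbf_kernel(x, xi, gamma) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

def loo_eps_loss(xs, ys, gamma, eps):
    # Leave-one-out CV error under an epsilon-insensitive loss: the quantity
    # MCV-SVR minimizes.  This naive version refits from scratch per fold;
    # AOSVR turns each fold into a cheap decremental update instead.
    total = 0.0
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        err = abs(nw_predict(xs[i], xt, yt, gamma) - ys[i])
        total += max(0.0, err - eps)
    return total / len(xs)

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]

# Grid search over the hyperparameters (here: kernel width and epsilon).
best = min((loo_eps_loss(xs, ys, g, e), g, e)
           for g in (0.1, 1.0, 10.0) for e in (0.01, 0.1))
```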
Text Representations for Text Categorization: A Case Study in Biomedical Domain
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371361
Man Lan, C. Tan, Jian Su, H. Low
Abstract: In the vector space model (VSM), textual documents are represented as vectors in the term space. This representation raises two issues: (1) what should count as a term, and (2) how to weight a term. This paper examines ways to represent text along both dimensions to improve the performance of text categorization. Different representations were evaluated using SVM on three biomedical corpora. The controlled experiments showed that the straightforward use of named entities as terms in the VSM does not improve performance over the bag-of-words representation. On the other hand, the term weighting method slightly improved performance. To further improve text categorization, however, more advanced techniques and more effective uses of natural language processing for text representation appear to be needed.
Citations: 20
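The bag-of-words baseline with a standard term-weighting scheme can be sketched as a tf-idf computation (the toy "documents" are invented for illustration; the paper's corpora and weighting variants differ):

```python
import math
from collections import Counter

# Toy tokenized corpus (invented for illustration).
docs = [["gene", "expression", "cancer"],
        ["protein", "binding", "gene"],
        ["cancer", "cell", "growth"]]

def tfidf(docs):
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for d in docs for term in set(d))
    vectors = []
    for d in docs:
        tf = Counter(d)
        # Weight = term frequency * log inverse document frequency.
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

vecs = tfidf(docs)
```

Terms occurring in fewer documents get higher weights: in the first document, "expression" (unique to it) outweighs "gene" (shared with another document).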
A Self-tuning Controller for Real-time Voltage Regulation
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371267
Weiming Li, Xiao-Hua Yu
Abstract: In this research, a self-tuning controller based on a multi-layer feed-forward neural network is developed for real-time output voltage regulation of a class of DC power supplies. The neural-network-based controller has the advantage of adaptive learning and can operate when the input voltage and load current fluctuate. The Levenberg-Marquardt back-propagation training algorithm is used in computer simulation. The neural network controller is implemented and tested in hardware using a DSP (digital signal processor). Experimental results show that this neural network approach outperforms a conventional analog controller in terms of both line regulation and load regulation.
Citations: 5
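The control idea can be sketched in a heavily simplified form: a single adaptive parameter stands in for the MLP controller, plain gradient descent stands in for Levenberg-Marquardt training, and the converter is a hypothetical linear gain. The point of the sketch is only that the same online update law re-regulates the output after the plant changes, which is what "adaptive learning under fluctuating load" buys.

```python
# Hypothetical converter model: v_out = plant_gain * duty.
v_ref, duty, lr = 5.0, 0.4, 0.002
plant_gain = 12.0

def run(steps):
    # Online self-tuning: gradient step on the squared regulation error.
    global duty
    for _ in range(steps):
        error = v_ref - plant_gain * duty
        duty += lr * plant_gain * error

run(300)
v_after_tuning = plant_gain * duty

plant_gain = 9.0          # load change alters the effective converter gain
run(300)
v_after_load_change = plant_gain * duty
```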
Stability of Cohen-Grossberg Neural Networks with Nonnegative Periodic Solutions
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4370962
Tianping Chen, Yanchun Bai
Abstract: In this paper, we discuss nonnegative periodic solutions of generalized Cohen-Grossberg neural networks. Without assuming strict positivity or boundedness of the amplification functions, the dynamics of periodic Cohen-Grossberg neural networks are studied. By applying a direct method, sufficient conditions guaranteeing the existence and global asymptotic stability of a nonnegative periodic solution are derived. Moreover, the criterion does not require the amplification functions to be bounded above and below, nor does it depend on the external inputs.
Citations: 4
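For reference, one common statement of the Cohen-Grossberg model with time-varying inputs is the following (this is the standard textbook form, not copied from this paper; sign conventions and notation vary across the literature):

```latex
\frac{dx_i(t)}{dt} = -a_i\big(x_i(t)\big)\left[\, b_i\big(x_i(t)\big) - \sum_{j=1}^{n} t_{ij}\, s_j\big(x_j(t)\big) - I_i(t) \right], \qquad i = 1,\dots,n,
```

where the $a_i$ are the amplification functions (the ones this paper does not require to be strictly positive or bounded), the $b_i$ are the self-signal functions, the $t_{ij}$ are connection weights, the $s_j$ are activation functions, and the $I_i(t)$ are periodic external inputs.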
Using SOM to estimate optical inherent properties from remote sensing reflectance
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371416
A. Chazottes, M. Crépon, S. Thiria
Abstract: This article presents a neural network classifier able to retrieve the optical properties of four ocean constituents from remote sensing reflectance. When comparing this model to some standard algorithms, we found that the neural network gives the best performance.
Citations: 1
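To make the SOM ingredient concrete, here is a minimal 1-D self-organizing map training loop (the node count, learning schedule, and scalar toy data are assumptions; the paper maps multi-band reflectance spectra, not scalars):

```python
import math, random

def train_som(data, n_nodes=5, epochs=100, lr0=0.5, sigma0=2.0):
    random.seed(0)
    weights = [random.random() for _ in range(n_nodes)]
    for epoch in range(epochs):
        # Learning rate and neighborhood width both decay over training.
        lr = lr0 * (1 - epoch / epochs)
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for x in data:
            # Best-matching unit: node whose weight is closest to the input.
            bmu = min(range(n_nodes), key=lambda i: abs(weights[i] - x))
            # Pull the BMU and its neighbors toward the input.
            for i in range(n_nodes):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] += lr * h * (x - weights[i])
    return weights

data = [0.1, 0.15, 0.5, 0.55, 0.9, 0.95]
w = train_som(data)
```

After training, each node's weight vector is a prototype of a cluster of inputs; the classifier then attaches a retrieved optical property to each prototype.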
Using Ensembles of Neural Networks to Improve Automatic Relevance Determination
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371195
Yu Fu, A. Browne
Abstract: Automatic relevance determination (ARD) is an efficient technique for inferring the relevance of input features with respect to their ability to predict the target output for a task. ARD optimizes the hyperparameters to maximize the evidence. This optimization can cause the hyperparameters of some relevant features to tend towards infinity, so that these features are inferred to be irrelevant by an ARD model. This overfitting of the relevance parameters makes feature relevance determinations unstable and unreliable. Neural network ensemble methods can exploit the diversity between ensemble members to reduce this uncertainty and generate a more reliable determination of input feature relevance. Input features were properly grouped by relevance level using ensemble relevance prediction.
Citations: 8
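The aggregation step can be sketched as follows (the per-member relevance scores are hypothetical placeholders, e.g. derived from inverse ARD hyperparameters): averaging relevance estimates across ensemble members damps each member's individual overfitting, and the spread across members flags which determinations are unstable.

```python
# Relevance score per input feature, one row per ensemble member
# (hypothetical values for illustration).
member_relevances = [
    [0.90, 0.05, 0.40],
    [0.85, 0.12, 0.35],
    [0.95, 0.02, 0.50],
]

n_features = len(member_relevances[0])
mean_rel = [sum(m[i] for m in member_relevances) / len(member_relevances)
            for i in range(n_features)]
# Spread across members: a crude stability indicator for each determination.
spread = [max(m[i] for m in member_relevances) - min(m[i] for m in member_relevances)
          for i in range(n_features)]

# Group features by relevance level (threshold is an illustrative choice).
groups = ["relevant" if r > 0.5 else "irrelevant" for r in mean_rel]
```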
Concept Description - A Fresh Look
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371336
Cecilia Sönströd, U. Johansson
Abstract: The main purpose of this paper is to examine the data mining task of concept description, for which several rather different definitions exist. We argue for the definition used by CRISP-DM, where the overall goal is expressed as "gaining insights". Based on this, we propose that the two most important criteria for concept description models are accuracy and comprehensibility. The demand for comprehensibility rules out a straightforward use of many high-accuracy predictive modeling techniques, e.g. neural networks. Instead, we introduce rule extraction from predictive models as an alternative technique for concept description. In the experimentation, we show, using ten publicly available data sets, that the rule extractor used is clearly able to produce accurate and comprehensible descriptions. In addition, we discuss how concept description performance could be measured to capture both accuracy and comprehensibility. Comprehensibility is often translated into size, i.e. a smaller model is deemed more comprehensible. In practice, however, it would probably make more sense to treat comprehensibility as a binary property: the description is either comprehensible or not. Regarding accuracy, we argue that accuracies obtained on unseen data provide better information than accuracy on the entire data set. The reason is not that the model should be used for prediction, but that concepts found in this way are more likely to be general, and thus more informative.
Citations: 2
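The pedagogical core of rule extraction, fitting a transparent model to an opaque model's predictions, can be sketched as follows (the "opaque" model and the one-threshold rule language are stand-ins; the paper uses trained predictive models and a much richer extractor):

```python
# Stand-in for an opaque predictive model (e.g. a trained neural network).
def opaque_model(x):
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

# Query the opaque model on a grid of inputs to get a labeled data set.
data = [(a / 4, b / 4) for a in range(5) for b in range(5)]
labels = [opaque_model(x) for x in data]

def best_rule(data, labels):
    # Extract the single "feature > threshold" rule that best mimics
    # the opaque model's predictions: maximally comprehensible, and
    # scored by fidelity to the model it describes.
    best = None
    for f in range(2):
        for t in sorted({x[f] for x in data}):
            preds = [1 if x[f] > t else 0 for x in data]
            acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

acc, feature, threshold = best_rule(data, labels)
```

Here the extractor correctly singles out the second feature (weight 0.7) as the one worth describing, trading a little fidelity for a rule a human can read.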
Neural Network Deinterlacing Using Multiple Fields and Field-MSEs
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371072
Hyunsoo Choi, Chulhee Lee
Abstract: Generally, deinterlacing algorithms can be classified as either intra methods or inter methods. Intra methods interpolate missing lines using surrounding pixels in the current field. Inter methods interpolate missing lines using pixels and motion information from multiple fields. Neural network deinterlacing that uses multiple fields has been proposed, and it improves on existing neural network deinterlacing algorithms that use a single field. However, when adjacent fields are very different, neural network deinterlacing that uses multiple fields may not perform well. To address this problem, we propose using field-MSE values as additional inputs. These MSE values provide information that lets the networks account for field differences when using multiple fields. Experimental results show that the proposed algorithm improves performance.
Citations: 9
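The auxiliary input itself is simple to state: the mean squared difference between co-located pixels of two fields (toy 2x2 fields below; real fields are full video half-frames). A large value warns the network that the fields disagree, e.g. across a scene change, so inter-field interpolation is unreliable there:

```python
def field_mse(field_a, field_b):
    # Mean squared difference over co-located pixels of two fields.
    n = sum(len(row) for row in field_a)
    return sum((a - b) ** 2
               for ra, rb in zip(field_a, field_b)
               for a, b in zip(ra, rb)) / n

prev_field = [[10, 12], [11, 13]]
next_field = [[10, 12], [11, 14]]
mse = field_mse(prev_field, next_field)  # small: fields agree, blending is safe
```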
A Modified RBF Neural Network in Pattern Recognition
2007 International Joint Conference on Neural Networks. Pub Date: 2007-10-29. DOI: 10.1109/IJCNN.2007.4371356
Min Han, Wei Guo, Yunfeng Mu
Abstract: This paper presents a modified radial basis function (RBF) neural network for pattern recognition problems, which uses a hybrid learning algorithm to adaptively adjust the structure of the network. Two strategies are used to reach a compromise between network complexity and accuracy: one is a modified "novelty" condition for creating a new neuron in the hidden layer; the other is a pruning technique for removing redundant neurons and their corresponding connections. To verify the performance of the modified network, two pattern recognition simulations are completed. One is a two-class pattern recognition problem; the other is a real-world problem, internal component recognition in the field of architectural engineering. Simulation results, including final hidden neurons, error, and accuracy, are compared with the performance of the radial basis functional link network, the resource allocating network, and an RBF neural network with a generalized competitive learning algorithm. It can be concluded that the proposed network has a more concise architecture, higher classification accuracy, and shorter running time.
Citations: 10
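A growth rule in the spirit of the "novelty" condition can be sketched with a resource-allocating-network-style criterion (thresholds and data are illustrative, and the pruning half is omitted): a new hidden unit is allocated only when the input is far from every existing center and the prediction error is large.

```python
import math

class GrowingRBF:
    # RAN-style growth criterion standing in for the paper's modified
    # "novelty" condition; pruning of redundant neurons is omitted.
    def __init__(self, dist_thresh=0.3, err_thresh=0.2, width=0.2):
        self.centers, self.heights = [], []
        self.dist_thresh, self.err_thresh, self.width = dist_thresh, err_thresh, width

    def predict(self, x):
        return sum(h * math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                   for c, h in zip(self.centers, self.heights))

    def observe(self, x, y):
        err = y - self.predict(x)
        dist = min((abs(x - c) for c in self.centers), default=float("inf"))
        if dist > self.dist_thresh and abs(err) > self.err_thresh:
            # Novel input with large error: allocate a new hidden unit on it.
            self.centers.append(x)
            self.heights.append(err)

net = GrowingRBF()
for x in [0.0, 0.5, 1.0, 1.5]:
    net.observe(x, math.sin(3 * x))
```

With these toy settings the network allocates units only where the target is poorly covered, which is how the hidden layer stays concise.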